WorldWideScience

Sample records for document image analysis

  1. Document image analysis: A primer

    Indian Academy of Sciences (India)


    (1) Typical documents in today's office are computer-generated, but even so, inevitably by different computers and ... different sizes, from a business card to a large engineering drawing. Document analysis ... Whether global or adaptive ...

  2. Ancient administrative handwritten documents: X-ray analysis and imaging

    International Nuclear Information System (INIS)

    Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.

    2015-01-01

    The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.

  3. Ancient administrative handwritten documents: X-ray analysis and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Albertin, F., E-mail: fauzia.albertin@epfl.ch [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Astolfo, A. [Paul Scherrer Institut (PSI), Villigen (Switzerland); Stampanoni, M. [Paul Scherrer Institut (PSI), Villigen (Switzerland); ETHZ, Zürich (Switzerland); Peccenini, Eva [University of Ferrara (Italy); Technopole of Ferrara (Italy); Hwu, Y. [Academia Sinica, Taipei, Taiwan (China); Kaplan, F. [Ecole Polytechnique Fédérale de Lausanne (EPFL) (Switzerland); Margaritondo, G. [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-01-30

    The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.

  4. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and the remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but it adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and thus these extra delays, the remote-server approach outperforms the in-phone approach overall on the selected speed and correct-recognition metrics whenever the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of both speed and correct recognition.

  5. Representation and traversal of documentation space. Data analysis, neural networks and image banks

    International Nuclear Information System (INIS)

    Lelu, A.; Rosenblatt, D.

    1986-01-01

    Improving the visual representation of large amounts of data for the user is necessary for progress in documentation systems. We review practical implementations in this area, which additionally integrate concepts arising from data analysis in the most general sense. The relationship between data analysis and neural networks is then established. Following a description of simulation experiments, we finally present software for outputting and traversing image banks that integrates most of the concepts developed in this article.

  6. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

    The conversion of a document image to its electronic version is an important problem in saving, searching, and retrieval applications in office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of each image region, document or non-document, is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on the wavelet transform and thresholding. Texture features such as correlation, homogeneity, and entropy extracted from the co-occurrence matrix, together with two new features based on the wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm achieves an accuracy rate of 97.5% on classification of the regions.

  7. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
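    The shape-coding idea can be sketched in a few lines. The following is a minimal illustration, not the authors' actual scheme: it codes only ascenders and descenders (the paper also uses character holes and water reservoirs), and the function name and code alphabet are invented for the example.

    ```python
    # Illustrative word shape coding (not the authors' exact scheme):
    # code each crude "character" region of a binarized word image by
    # ascender/descender presence, left to right.
    import numpy as np
    from scipy import ndimage

    def word_shape_code(word):
        """word: 2-D bool array of a binarized word, True = ink."""
        rows = np.flatnonzero(word.any(axis=1))
        top, bottom = rows[0], rows[-1]
        # Assume the x-height band is the middle half of the word's box.
        x_top = top + (bottom - top) // 4
        x_bot = bottom - (bottom - top) // 4
        labels, n = ndimage.label(word)            # crude character regions
        parts = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            c = 'x'                                # stays in the x-height band
            if ys.min() < x_top:
                c = 'A'                            # ascender (b, d, k, ...)
            if ys.max() > x_bot:
                c = 'D' if c == 'x' else 'B'       # descender / both
            parts.append((xs.min(), c))
        return ''.join(c for _, c in sorted(parts))
    ```

    Retrieval then reduces to comparing code strings, e.g. exact match for keyword spotting or edit distance for ranking by similarity.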

  8. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Knowledge extraction from detected document images is a complex problem in the field of information technology. This problem becomes more intricate given that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified into document and non-document (pure region) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. Experimental results show the efficiency of the proposed algorithm in semantic document image classification. The proposed algorithm achieves an accuracy rate of 98.8% on the valuable versus invaluable document image classification problem.

  9. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground, and a vital-information imposed layer. A template document image consists of a null background and a general-information foreground layer. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and the template image using the convex hull. The vertices of the convex polygons of input and template are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling; translation is handled by a tight crop. For every transformation of the input vertices, the Minimum Hausdorff Distance (MHD) is computed. The minimum Hausdorff distance identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since the transformation is an estimation process, the components in the input image do not overlay exactly on the components in the template; therefore a connected-component technique is applied to extract contour boxes at the word level to identify partially overlapping components. Geometrical features such as density, area, and degree of overlap are extracted and compared between partially overlapping components to identify and eliminate components common to the input image and the template image. The residue constitutes the imposed layer. Experimental results indicate the efficacy of the proposed model along with its computational complexity. Experiments have been conducted on a variety of filled-in forms, applications, and bank cheques. Data sets have been generated as test sets for comparative analysis.
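    The search over rotation and scale with a minimum-Hausdorff-distance criterion can be sketched as follows. This is a hedged approximation: the vertex-pairing step is simplified to centering, and the angle/scale grids are assumptions rather than the paper's values.

    ```python
    # Hedged sketch of the registration search: grid-search rotation and
    # scale over the convex-hull vertices, keeping the transform with the
    # minimum directed Hausdorff distance to the template. Translation is
    # handled by centering (the paper uses a tight crop).
    import numpy as np
    from scipy.spatial import ConvexHull
    from scipy.spatial.distance import directed_hausdorff

    def hull_vertices(pts):
        """Vertices of the convex polygon around a point cloud (N x 2)."""
        return pts[ConvexHull(pts).vertices]

    def register(input_pts, template_pts,
                 angles=np.deg2rad(np.arange(-10.0, 10.5, 0.5)),  # assumed grid
                 scales=np.arange(0.8, 1.21, 0.02)):              # assumed grid
        inp = hull_vertices(input_pts)
        tpl = hull_vertices(template_pts)
        inp = inp - inp.mean(axis=0)
        tpl = tpl - tpl.mean(axis=0)
        best = (np.inf, 0.0, 1.0)                  # (MHD, angle, scale)
        for a in angles:
            R = np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
            for s in scales:
                d = directed_hausdorff((inp @ R.T) * s, tpl)[0]
                if d < best[0]:
                    best = (d, a, s)
        return best
    ```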

  10. Heterogeneity of shale documented by micro-FTIR and image analysis.

    Science.gov (United States)

    Chen, Yanyan; Mastalerz, Maria; Schimmelmann, Arndt

    2014-12-01

    In this study, four New Albany Shale Devonian and Mississippian samples, with vitrinite reflectance (Ro) values ranging from 0.55% to 1.41%, were analyzed by micro-FTIR mapping of chemical and mineralogical properties. One additional postmature shale sample from the Haynesville Shale (Kimmeridgian, Ro = 3.0%) was included to test the limitation of the method for more mature substrates. Relative abundances of organic matter and mineral groups (carbonates, quartz and clays) were mapped across selected microscale regions based on characteristic infrared peaks and demonstrated to be consistent with corresponding bulk compositional percentages. Mapped distributions of organic matter provide information on the organic matter abundance and the connectivity of organic matter within the overall shale matrix. The pervasive distribution of organic matter mapped in the New Albany Shale sample MM4 is in agreement with this shale's high total organic carbon abundance relative to other samples. Mapped interconnectivity of organic matter domains in New Albany Shale samples is excellent in two early mature shale samples having Ro values from 0.55% to 0.65%, then dramatically decreases in a late mature sample having an intermediate Ro of 1.15%, and finally increases again in the postmature sample, which has an Ro of 1.41%. Swanson permeabilities, derived from independent mercury intrusion capillary pressure porosimetry measurements, follow the same trend among the four New Albany Shale samples, suggesting that micro-FTIR, in combination with complementary porosimetric techniques, strengthens our understanding of porosity networks. In addition, image processing and analysis software (e.g. ImageJ) has the capability to quantify organic matter and total organic carbon - valuable parameters for highly mature rocks, because they cannot be analyzed by micro-FTIR owing to the weakness of the aliphatic carbon-hydrogen signal.
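    As a rough illustration of the ImageJ-style quantification mentioned above, the following sketch thresholds a grayscale photomicrograph and reports the dark-phase area fraction as a proxy for organic matter content; the use of Otsu's threshold is an assumption, not the published protocol.

    ```python
    # Threshold a grayscale photomicrograph and report the dark-phase area
    # fraction as a proxy for organic matter content. Otsu's threshold is
    # an illustrative choice, not the protocol used in the paper.
    import numpy as np
    from skimage import io
    from skimage.filters import threshold_otsu

    def dark_phase_fraction(path):
        img = io.imread(path, as_gray=True)
        t = threshold_otsu(img)
        return float(np.mean(img < t))   # fraction of pixels in the dark phase
    ```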

  11. Document image database indexing with pictorial dictionary

    Science.gov (United States)

    Akbari, Mohammad; Azimi, Reza

    2010-02-01

    In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed based on this attribute for the subwords. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results demonstrate the ability of this approach in document image information retrieval.

  12. Robust binarization of degraded document images using heuristics

    Science.gov (United States)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is independent of the input page and requires no training data. The approach applies to color or greyscale images with handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure, doing so with significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at the Yad Vashem Holocaust Memorial in Israel.
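    The paper's heuristics are not detailed in the abstract; as a point of reference, a standard training-free baseline of the same flavor (Sauvola adaptive thresholding) can be applied to DIBCO-style images in a few lines. The window size and k below are assumed values, not the authors' settings.

    ```python
    # A standard training-free baseline for degraded-document binarization
    # (Sauvola adaptive thresholding) -- not the authors' method, but the
    # same kind of input-independent approach the abstract describes.
    from skimage import io, img_as_ubyte
    from skimage.filters import threshold_sauvola

    def binarize(path, window_size=25, k=0.2):   # parameters are assumptions
        img = io.imread(path, as_gray=True)
        mask = img > threshold_sauvola(img, window_size=window_size, k=k)
        return img_as_ubyte(mask)                # 255 = background, 0 = ink
    ```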

  13. Document imaging finding niche in petroleum industry

    International Nuclear Information System (INIS)

    Cisco, S.L.

    1992-01-01

    Optical disk-based document imaging systems can reduce operating costs, save office space, and improve access to necessary information for petroleum companies that have extensive records in various formats. These imaging systems help solve document management problems to improve technical and administrative operations. Enron Gas Pipeline Group has installed a document imaging system for engineering applications to integrate records stored on paper, microfilm, or computer-aided drafting (CAD) systems. BP Exploration Inc. recently implemented a document imaging system for administrative applications. The company is evaluating an expansion of the system to include engineering and technical applications. The petroleum industry creates, acquires, distributes, and retrieves enormous amounts of data and information, which are stored on multiple media, including paper, microfilm, and electronic formats. There are two main factors responsible for the immense information storage requirements in the petroleum industry.

  14. Imaging and visual documentation in medicine

    International Nuclear Information System (INIS)

    Wamsteker, K.; Jonas, U.; Veen, G. van der; Waes, P.F.G.M. van

    1987-01-01

    DOCUMED EUROPE '87 was organized to provide information to the physician on the constantly progressing developments in medical imaging technology. Leading specialists lectured on the state-of-the-art of imaging technology and visual documentation in medicine. This book presents a collection of the papers presented at the conference. refs.; figs.; tabs

  15. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  16. Stamp Detection in Color Document Images

    DEFF Research Database (Denmark)

    Micenkova, Barbora; van Beusekom, Joost

    2011-01-01

    , moreover, it can be imprinted with a variable quality and rotation. Previous methods were restricted to detection of stamps of particular shapes or colors. The method presented in the paper includes segmentation of the image by color clustering and subsequent classification of candidate solutions...... by geometrical and color-related features. The approach allows for differentiation of stamps from other color objects in the document such as logos or texts. For the purpose of evaluation, a data set of 400 document images has been collected, annotated and made public. With the proposed method, recall of 83...

  17. Document image mosaicing: A novel approach

    Indian Academy of Sciences (India)


    MS received 28 April 2003; revised 22 July 2003. Abstract. ... Hence, document image mosaicing is the process of merging split ..... Case 2: Algorithm 2 is an improved version of algorithm 1 which eliminates the drawbacks of ... One of the authors (PS) thanks the All India Council for Technical Education, New Delhi for.

  18. Indian Language Document Analysis and Understanding

    Indian Academy of Sciences (India)

    documents would contain text of more than one script (for example, English, Hindi and the ... O'Gorman and Govindaraju provides a good overview on document image ... word level in bilingual documents containing Roman and Tamil scripts.

  19. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital

    International Nuclear Information System (INIS)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M.

    2012-01-01

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  20. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  1. New public dataset for spotting patterns in medieval document images

    Science.gov (United States)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and should encourage researchers in the document image analysis community to design new systems and submit improved results.

  2. Steganalysis Techniques for Documents and Images

    Science.gov (United States)

    2005-05-01

    steganography. We then illustrated the efficacy of our model using variations of LSB steganography. For binary images, we have made significant progress in... efforts have focused on two areas. The first area is LSB steganalysis for grayscale images. Here, as we had proposed (as a challenging task), we have... generalized our previous steganalysis technique of sample pair analysis to a theoretical framework for the detection of LSB steganography. The new

  3. Performance evaluation methodology for historical document image binarization.

    Science.gov (United States)

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
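    The unweighted core of such a pixel-based evaluation is simple to state in code; the paper's contribution, the weighting scheme that diminishes evaluation bias and the additional rates (broken/missed text, false alarms, etc.), is omitted from this sketch.

    ```python
    # Unweighted pixel-based recall, precision and F-measure against a
    # ground-truth binarization; the paper's bias-reducing weighting
    # scheme is intentionally omitted from this sketch.
    import numpy as np

    def evaluate(pred, gt):
        """pred, gt: 2-D bool arrays, True = text pixel."""
        tp = np.logical_and(pred, gt).sum()
        recall = tp / max(gt.sum(), 1)
        precision = tp / max(pred.sum(), 1)
        f1 = 2 * recall * precision / max(recall + precision, 1e-12)
        return recall, precision, f1
    ```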

  4. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

    Text segmentation from degraded historical Indus script images helps an Optical Character Recognizer (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background in such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive compared to non-text ones. To achieve this, we propose a new combination of Sobel and Laplacian operators for enhancing degraded low-contrast pixels. The proposed method then generates skeletons for text components in the enhanced images to reduce the computational burden, which in turn helps in studying component structures efficiently. We propose to study the cursiveness of components based on branch information to remove false text components. The proposed method introduces a nearest-neighbor criterion for grouping components in the same line, which results in clusters. Furthermore, the proposed method classifies these clusters into text and non-text clusters based on the characteristics of text components. We evaluate the proposed method on a large dataset containing a variety of images. The results are compared with existing methods to show that the proposed method is effective in terms of recall and precision.
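    The enhancement step can be approximated as below. The equal-weight fusion of Sobel and Laplacian responses and the Otsu binarization are assumptions; the paper's exact combination rule is not reproduced.

    ```python
    # Combine Sobel gradient magnitude with the Laplacian to lift
    # low-contrast strokes, then binarize (Otsu) and skeletonize.
    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    def enhance_and_skeletonize(gray):
        """gray: 2-D uint8 image."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        sobel = np.hypot(gx, gy)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=3))
        enhanced = cv2.normalize(sobel + lap, None, 0, 255, cv2.NORM_MINMAX)
        _, binary = cv2.threshold(enhanced.astype(np.uint8), 0, 255,
                                  cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        return skeletonize(binary > 0)           # 1-pixel-wide strokes
    ```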

  5. An Introduction to Document Imaging in the Financial Aid Office.

    Science.gov (United States)

    Levy, Douglas A.

    2001-01-01

    First describes the components of a document imaging system in general and then addresses this technology specifically in relation to financial aid document management: its uses and benefits, considerations in choosing a document imaging system, and additional sources for information. (EV)

  6. Entre sentidos e interpretações: apontamentos sobre análise documentária de imagens / Between meanings and interpretations: notes on documentary image analysis

    Directory of Open Access Journals (Sweden)

    Vinícius Liebel

    2011-06-01

    This paper discusses some of the possibilities for qualitative analysis of pictures, in particular the Documentary Method developed by Ralf Bohnsack. Based on a paper presented at the German-Brazilian conference on qualitative research (Deutsch-brasilianische Tagung zur qualitativen Forschung) held in Campinas, Berlin, and Göttingen, the text concentrates on an introduction to this method and its empirical use, especially in History and Political Science. As the study of pictorial sources is still underexplored in these two disciplines, the documentary method fills a methodological gap in the treatment of these documents. Its application is demonstrated in this paper through an example analysis. The selected picture, a political cartoon from the German weekly Der Stürmer (1923-45), not only serves this purpose but also opens new discussions about the nature of the picture and its producers. From this analysis, the method can be comprehensively presented and evaluated.

  7. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: ADAPTIVE ALGORITHMS FOR AUTOMATED PROCESSING OF DOCUMENT IMAGES, Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University...

  8. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  9. Document reconstruction by layout analysis of snippets

    Science.gov (United States)

    Kleber, Florian; Diem, Markus; Sablatnig, Robert

    2010-02-01

    Document analysis is performed to analyze entire forms (e.g. intelligent form analysis, table detection) or to describe the layout/structure of a document. Skew detection of scanned documents is also performed to support OCR algorithms that are sensitive to skew. In this paper, document analysis is applied to snippets of torn documents to calculate features for reconstruction. Documents can be destroyed either intentionally, to make the printed content unavailable (e.g. tax fraud investigation, business crime), or by time-induced degeneration of ancient documents (e.g. bad storage conditions). Current reconstruction methods for manually torn documents deal with shape, inpainting, and texture synthesis techniques. In this paper, the possibility of using document analysis techniques on snippets to support the matching algorithm with additional features is shown. This comprises a rotational analysis, a color analysis, and a line detection. As future work it is planned to extend the feature set with the paper type (blank, checked, lined), the type of writing (handwritten vs. machine printed), and the text layout of a snippet (text size, line spacing). Preliminary results show that these pre-processing steps can be performed reliably on a real dataset consisting of 690 snippets.

  10. Spotting Separator Points at Line Terminals in Compressed Document Images for Text-line Segmentation

    OpenAIRE

    R, Amarnath; Nagabhushan, P.

    2017-01-01

    Line separators are used to segregate text-lines from one another in document image analysis. Finding the separator points at every line terminal in a document image would enable text-line segmentation. In particular, identifying the separators in handwritten text could be a thrilling exercise. Obviously it would be challenging to perform this in the compressed version of a document image and that is the proposed objective in this research. Such an effort would prevent the computational burde...

  11. Script Identification from Printed Indian Document Images and Performance Evaluation Using Different Classifiers

    OpenAIRE

    Sk Md Obaidullah; Anamika Mondal; Nibaran Das; Kaushik Roy

    2014-01-01

    Identification of script from document images is an active area of research under document image processing for a multilingual/ multiscript country like India. In this paper the real life problem of printed script identification from official Indian document images is considered and performances of different well-known classifiers are evaluated. Two important evaluating parameters, namely, AAR (average accuracy rate) and MBT (model building time), are computed for this performance analysi...

  12. A New Wavelet-Based Document Image Segmentation Scheme

    Institute of Scientific and Technical Information of China (English)

    赵健; 李道京; 俞卞章; 耿军平

    2002-01-01

    Document image segmentation is very useful for printing, faxing, and data processing. An algorithm is developed for segmenting and classifying document images. The feature used for classification is based on the histogram distribution pattern of the different image classes. An important attribute of the algorithm is the use of a wavelet correlation image to enhance the raw image's pattern, so that classification accuracy is improved. In this paper the document image is divided into four types: background, photo, text, and graph. Firstly, the document image background is easily distinguished by a conventional method; secondly, the three remaining image types are distinguished by their typical histograms, and in order to make the histogram features clearer, each resolution's HH wavelet subimage is added to the raw image at its resolution. Finally, photo, text, and graph are separated according to how well the feature fits the Laplacian distribution, using χ2 and L measures. Simulations show that classification accuracy is significantly improved. Comparison with related work shows that our algorithm provides both lower classification error rates and better visual results.
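    A minimal sketch of the HH-enhancement idea follows; the wavelet family ('haar') and number of levels are assumptions, and the subsequent histogram-based classification is not shown.

    ```python
    # Add the |HH| (diagonal detail) wavelet subimage back onto the raw
    # image at each resolution so texture patterns separate more clearly
    # in the histograms used for classification.
    import cv2
    import numpy as np
    import pywt

    def hh_enhance(gray, wavelet='haar', levels=2):   # family/levels assumed
        out = gray.astype(np.float64)
        cur = out.copy()
        for _ in range(levels):
            cA, (cH, cV, cD) = pywt.dwt2(cur, wavelet)
            hh = cv2.resize(np.abs(cD), (gray.shape[1], gray.shape[0]))
            out += hh                   # emphasize texture at this scale
            cur = cA                    # descend one resolution level
        out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX)
        return out.astype(np.uint8)
    ```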

  13. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development...... area within the four participating Nordic countries. It is a regional meeting of the International Association for Pattern Recognition (IAPR). We would like to thank all authors who submitted works to this year’s SCIA, the invited speakers, and our Program Committee. In total 67 papers were submitted....... The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  14. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images that often suffer from warping and perspective distortions, which deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure based on a semi-automatic procedure.

  15. RECOVERY OF DOCUMENT TEXT FROM TORN FRAGMENTS USING IMAGE PROCESSING

    OpenAIRE

    C.Prasad; Dr.Mahesh; Dr.S.A.K. Jilani

    2016-01-01

    Recovery of a document from its torn or damaged fragments plays an important role in the fields of forensics and archival study. Reconstructing torn papers manually with the help of glue, tape, etc., is tedious, time consuming, and unsatisfactory. For torn-image reconstruction we use image mosaicing, where we reconstruct the image using features (corners) and RANSAC with homography. But for torn fragments there is no such similarity portion between fragments. Hence we propose a ...

  16. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields, and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious process; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with details of the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  17. Using Addenda in Documented Safety Analysis Reports

    International Nuclear Information System (INIS)

    Swanson, D.S.; Thieme, M.A.

    2003-01-01

    This paper discusses the use of addenda to the Radioactive Waste Management Complex (RWMC) Documented Safety Analysis (DSA) located at the Idaho National Engineering and Environmental Laboratory (INEEL). Addenda were prepared for several systems and processes at the facility that lacked adequate descriptive information and hazard analysis in the DSA. They were also prepared for several new activities involving unreviewed safety questions (USQs). Ten addenda to the RWMC DSA have been prepared since the last annual update

  18. Technical document characterization by data analysis

    International Nuclear Information System (INIS)

    Mauget, A.

    1993-05-01

    Nuclear power plants possess documents analyzing all the plant systems, which represents a vast quantity of paper. Analysis of textual data can enable documents to be classified by grouping the texts containing the same words. These methods are applied to system manuals for feasibility studies. A system manual is analyzed by LEXTER and the terms it selects are examined. We first classify according to style (sentences containing general words, technical sentences, etc.), and then according to terms. However, it will not be possible to continue in this fashion for the 100 existing system manuals because of insufficient storage capacity. Another solution is being developed. (author)

  19. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital; Bildgebung bei wirbelsaeulenverletzten Kindern und jungen Erwachsenen. Eine Analyse von Umfeld, Standards und Wiederholungsuntersuchungen bei Patientenverlegungen

    Energy Technology Data Exchange (ETDEWEB)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M. [Berufsgenossenschaftliches Universitaetsklinikum Bergmannshell, Bochum (Germany). Inst. fuer Diagnostische Radiologie, Interventionelle Radiologie und Nuklearmedizin

    2012-09-15

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  20. Nonlinear filtering for character recognition in low quality document images

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where the capture conditions such as sheet position, illumination, contrast, and resolution are controlled. Nowadays, it is often more practical to use mobile devices for document capture than a scanner. As a consequence, the quality of document images is often poor owing to the presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  1. Document image binarization using "multi-scale" predefined filters

    Science.gov (United States)

    Saabni, Raid M.

    2018-04-01

    Reading text or searching for key words within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground such as text, figures, and drawings from the background. Successful results for this important step can in many cases determine whether the subsequent steps succeed or fail; it is therefore vital to the success of the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which mostly cause varying contrast, stains, dirt, and ink seeping through from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Exploiting the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks having various scales, weights, and orientations to seek a compact set of filters and weights that generates different layers of foreground and background. The results of convolving these filters with the gray-level image locally are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Different layers of foreground, which may be caused by seeping ink, degradation, or other factors, are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013 and H-DIBCO2016 data sets and on a collection of images taken from real historical documents.
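    A bank of predefined anisotropic filters at several scales and orientations can be sketched with Gabor kernels standing in for the paper's filters; the scales, kernel parameters, and equal weighting below are assumptions, and the layer-separation and learning stages are not reproduced.

    ```python
    # Accumulate responses of a predefined multi-scale, multi-orientation
    # filter bank; Gabor kernels stand in for the paper's anisotropic
    # filters, and equal weights replace its learned weighting.
    import cv2
    import numpy as np

    def filter_bank_response(gray):
        acc = np.zeros(gray.shape, dtype=np.float64)
        src = gray.astype(np.float64)
        for ksize in (9, 17, 31):                        # scales (assumed)
            for theta in np.linspace(0, np.pi, 8, endpoint=False):
                kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 6.0,
                                          theta=theta, lambd=ksize / 2.0,
                                          gamma=0.5, psi=0)
                acc += np.abs(cv2.filter2D(src, cv2.CV_64F, kern))
        acc = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX)
        return acc.astype(np.uint8)
    ```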

  2. Whole mount nuclear fluorescent imaging: convenient documentation of embryo morphology.

    Science.gov (United States)

    Sandell, Lisa L; Kurosaka, Hiroshi; Trainor, Paul A

    2012-11-01

    Here, we describe a relatively inexpensive and easy method to produce high quality images that reveal fine topological details of vertebrate embryonic structures. The method relies on nuclear staining of whole mount embryos in combination with confocal microscopy or conventional wide field fluorescent microscopy. In cases where confocal microscopy is used in combination with whole mount nuclear staining, the resulting embryo images can rival the clarity and resolution of images produced by scanning electron microscopy (SEM). The fluorescent nuclear staining may be performed with a variety of cell permeable nuclear dyes, enabling the technique to be performed with multiple standard microscope/illumination or confocal/laser systems. The method may be used to document morphology of embryos of a variety of organisms, as well as individual organs and tissues. Nuclear stain imaging imposes minimal impact on embryonic specimens, enabling imaged specimens to be utilized for additional assays. Copyright © 2012 Wiley Periodicals, Inc.

  3. Lattice algebra approach to multispectral analysis of ancient documents.

    Science.gov (United States)

    Valdiviezo-N, Juan C; Urcid, Gonzalo

    2013-02-01

    This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
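    The min- and max-lattice associative memories at the heart of the method have compact closed forms. A minimal numpy rendering, with one common scaling choice, is shown below; endmember (pure-pigment) selection from the scaled columns is left schematic.

    ```python
    # For spectra x^1..x^k stored as columns of X, the lattice memories are
    #   W[i,j] = min_k (X[i,k] - X[j,k]),   M[i,j] = max_k (X[i,k] - X[j,k]).
    # Shifting column j of W by u_j = max_k X[j,k] (and of M by the minimum)
    # is one common scaling; the scaled columns serve as candidate pure
    # spectra for the unmixing step.
    import numpy as np

    def lattice_memories(X):
        """X: bands x pixels matrix of spectral pixels."""
        D = X[:, None, :] - X[None, :, :]       # D[i,j,k] = X[i,k] - X[j,k]
        W = D.min(axis=2)                       # min-memory
        M = D.max(axis=2)                       # max-memory
        u = X.max(axis=1)                       # per-band maxima
        v = X.min(axis=1)                       # per-band minima
        return W + u[None, :], M + v[None, :]   # scaled candidate columns
    ```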

  4. Machine printed text and handwriting identification in noisy document images.

    Science.gov (United States)

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of the identification of text in noisy document images. We are especially focused on segmenting and distinguishing between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field-based (MRF) approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.

  5. Method of forming latent image to protect documents based on the effect moire

    OpenAIRE

    Troyan, О.

    2015-01-01

    An analysis of modern methods of information protection based on printed documents is presented. It is shown that moiré-effect protection methods provide reliable and effective protection through a new protection technology that manifests as optical acceleration of moving layers and produces a moiré pattern in the event of forgery. Latent images can securely protect paper documents. A system of equations is introduced to calculate curvilinear patterns, where the optical acceleration formula and the moiré periods are stored in i...

  6. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  7. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  8. AVIS: analysis method for document coherence

    International Nuclear Information System (INIS)

    Henry, J.Y.; Elsensohn, O.

    1994-06-01

    The present document gives a short insight into AVIS, a method for verifying the quality of technical documents. The paper presents the applied approach, based on the K.O.D. method, the definition of the quality criteria of a technical document, and a description of the means of evaluating these criteria. (authors). 9 refs., 2 figs

  9. Forensic document analysis using scanning microscopy

    Science.gov (United States)

    Shaffer, Douglas K.

    2009-05-01

    The authentication and identification of the source of a printed document(s) can be important in forensic investigations involving a wide range of fraudulent materials, including counterfeit currency, travel and identity documents, business and personal checks, money orders, prescription labels, travelers checks, medical records, financial documents and threatening correspondence. The physical and chemical characterization of document materials - including paper, writing inks and printed media - is becoming increasingly relevant for law enforcement agencies, with the availability of a wide variety of sophisticated commercial printers and copiers which are capable of producing fraudulent documents of extremely high print quality, rendering these difficult to distinguish from genuine documents. This paper describes various applications and analytical methodologies using scanning electron microscopy/energy dispersive (x-ray) spectroscopy (SEM/EDS) and related technologies for the characterization of fraudulent documents, and illustrates how their morphological and chemical profiles can be compared to (1) authenticate and (2) link forensic documents with a common source(s) in their production history.

  10. Script Identification from Printed Indian Document Images and Performance Evaluation Using Different Classifiers

    Directory of Open Access Journals (Sweden)

    Sk Md Obaidullah

    2014-01-01

    …multiscript country like India. In this paper the real-life problem of printed script identification from official Indian document images is considered, and the performances of different well-known classifiers are evaluated. Two important evaluation parameters, namely AAR (average accuracy rate) and MBT (model building time), are computed for this performance analysis. The experiment was carried out on 459 printed document images with 5-fold cross-validation. The Simple Logistic model shows the highest AAR of 98.9% among all. The BayesNet and Random Forest models have average accuracy rates of 96.7% and 98.2% respectively, with the lowest MBT of 0.09 s.
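
    The evaluation protocol described above is straightforward to reproduce. A minimal sketch of computing AAR and MBT with 5-fold cross-validation using scikit-learn; the feature array, labels and classifier settings here are illustrative stand-ins, not the paper's actual setup:

    ```python
    import time
    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    def evaluate(clf, X, y, folds=5):
        """Return (average accuracy rate, mean model building time)."""
        accs, times = [], []
        skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
        for train, test in skf.split(X, y):
            t0 = time.perf_counter()
            clf.fit(X[train], y[train])               # MBT: time to build the model
            times.append(time.perf_counter() - t0)
            accs.append(clf.score(X[test], y[test]))  # per-fold accuracy
        return np.mean(accs), np.mean(times)

    # Stand-in data: 459 images, 40 features, 5 script classes.
    X = np.random.rand(459, 40)
    y = np.random.randint(0, 5, 459)
    for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        aar, mbt = evaluate(clf, X, y)
        print(type(clf).__name__, f"AAR={aar:.3f}", f"MBT={mbt:.3f}s")
    ```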

  11. Rapid Exploitation and Analysis of Documents

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, D J; Andrzejewski, D; Stevens, K D; Anastasiu, D; Gao, B

    2011-11-28

    Analysts are overwhelmed with information. They have large archives of historical data, both structured and unstructured, and continuous streams of relevant messages and documents that they need to match to current tasks, digest, and incorporate into their analysis. The purpose of the READ project is to develop technologies to make it easier to catalog, classify, and locate relevant information. We approached this task from multiple angles. First, we tackle the issue of processing large quantities of information in reasonable time. Second, we provide mechanisms that allow users to customize their queries based on latent topics exposed from corpus statistics. Third, we assist users in organizing query results, adding localized expert structure over results. Fourth, we use word sense disambiguation techniques to increase the precision of matching user-generated keyword lists with terms and concepts in the corpus. Fifth, we enhance co-occurrence statistics with latent topic attribution, to aid entity relationship discovery. Finally, we quantitatively analyze the quality of three popular latent modeling techniques to examine under which circumstances each is useful.

  12. Document authentication at molecular levels using desorption atmospheric pressure chemical ionization mass spectrometry imaging.

    Science.gov (United States)

    Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang

    2013-09-01

    Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), operated in either a gasless, solvent-free or methanol-vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of ink on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-minute-old sample from two 4-hour-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of the DAPCI images, which were attributed to the writing strength personal to different writers. Distinction of the order of writing/stamping on documents and detection of illegal printings were accomplished with a spatial resolution of about 140 µm. A Matlab® program was developed to facilitate visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.

  13. Image-Based 3d Reconstruction Data as AN Analysis and Documentation Tool for Architects: the Case of Plaka Bridge in Greece

    Science.gov (United States)

    Kouimtzoglou, T.; Stathopoulou, E. K.; Agrafiotis, P.; Georgopoulos, A.

    2017-02-01

    Modern advances in the field of image-based 3D reconstruction of complex architectures are valuable tools that offer researchers great possibilities for integrating such procedures into their studies. In the same way that photogrammetry has been a well-known and useful tool in the cultural heritage community for years, state-of-the-art reconstruction techniques generate complete and easy-to-use 3D data, thus enabling engineers, architects and other cultural heritage experts to approach their case studies in an exhaustive and efficient way. The generated data can be a valuable and accurate basis upon which further plans and studies can be drafted. These and other aspects of the use of image-based 3D data for architectural studies are presented and analysed in this paper, based on the experience gained from a specific case study, the Plaka Bridge. This historic structure is of particular interest, as it was recently lost due to extreme weather conditions and serves as strong proof that preventive actions are of utmost importance in order to preserve our common past.

  14. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world, including age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications.

  15. Canister storage building design basis accident analysis documentation

    International Nuclear Information System (INIS)

    KOPELIC, S.D.

    1999-01-01

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  16. Canister storage building design basis accident analysis documentation

    Energy Technology Data Exchange (ETDEWEB)

    KOPELIC, S.D.

    1999-02-25

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  17. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    CROWE, R.D.

    1999-01-01

    This document provides the detailed accident analysis to support ''HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  18. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    CROWE, R.D.; PIEPHO, M.G.

    2000-01-01

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  19. Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation

    Energy Technology Data Exchange (ETDEWEB)

    PIEPHO, M.G.

    1999-10-20

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.

  20. Cold Vacuum Drying Facility Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    PIEPHO, M.G.

    1999-01-01

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.

  1. Cultural diversity: blind spot in medical curriculum documents, a document analysis.

    Science.gov (United States)

    Paternotte, Emma; Fokkema, Joanne P I; van Loon, Karsten A; van Dulmen, Sandra; Scheele, Fedde

    2014-08-22

    Cultural diversity among patients presents specific challenges to physicians. Therefore, cultural diversity training is needed in medical education. In cases where strategic curriculum documents form the basis of medical training, it is expected that the topic of cultural diversity is included in these documents, especially if they have been recently updated. The aim of this study was to assess the current formal status of cultural diversity training in the Netherlands, which is a multi-ethnic country with recently updated medical curriculum documents. In February and March 2013, a document analysis was performed of strategic curriculum documents for undergraduate and postgraduate medical education in the Netherlands. All text phrases that referred to cultural diversity were extracted from these documents. Subsequently, these phrases were sorted into objectives, training methods or evaluation tools to assess how they contributed to adequate curriculum design. Of a total of 52 documents, 33 contained phrases with information about cultural diversity training. Cultural diversity aspects were more prominently described in the curriculum documents for undergraduate education than in those for postgraduate education. The most specific information about cultural diversity was found in the blueprint for undergraduate medical education. In the postgraduate curriculum documents, attention to cultural diversity differed among specialties and was mainly superficial. Cultural diversity is an underrepresented topic in the Dutch documents that form the basis for actual medical training, even though the documents have been updated recently. Attention to the topic is thus not guaranteed. This situation does not fit the demand of a multi-ethnic society for doctors with cultural diversity competences. Multi-ethnic countries should be critical of the content of the bases for their medical educational curricula.

  2. A SURVEY ON DOCUMENT CLUSTERING APPROACH FOR COMPUTER FORENSIC ANALYSIS

    OpenAIRE

    Monika Raghuvanshi*, Rahul Patel

    2016-01-01

    In a forensic analysis, large numbers of files are examined. Much of the information is in unstructured format, so it is quite a difficult task for a computer forensic examiner to perform such an analysis. Performing the forensic analysis of documents within a limited period of time therefore requires a special approach such as document clustering. This paper reviews different document clustering algorithms and methodologies, for example K-means, K-medoid, single link, complete link and average link, in accordance…
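
    As a rough illustration of the clustering approaches named above, here is a minimal sketch using scikit-learn, where `docs` stands in for texts extracted from seized files; all documents and parameters are illustrative:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans, AgglomerativeClustering

    docs = ["invoice for shipment of parts",
            "meeting notes from project review",
            "invoice overdue payment reminder"]
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)

    # Partitional clustering (K-means) on the sparse TF-IDF matrix.
    kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Hierarchical variants (single/complete/average link) need dense input.
    average_link = AgglomerativeClustering(n_clusters=2, linkage="average")
    agglo_labels = average_link.fit_predict(X.toarray())

    print(kmeans_labels, agglo_labels)  # the two invoices should share a cluster
    ```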

  3. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CROWE, R.D.

    1999-09-09

    This document provides the detailed accident analysis to support ''HNF-3553, Spent Nuclear Fuel Project Final Safety, Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  4. Planning, Conducting, and Documenting Data Analysis for Program Improvement

    Science.gov (United States)

    Winer, Abby; Taylor, Cornelia; Derrington, Taletha; Lucas, Anne

    2015-01-01

    This 2015 document was developed to help technical assistance (TA) providers and state staff define and limit the scope of data analysis for program improvement efforts, including the State Systemic Improvement Plan (SSIP); develop a plan for data analysis; document alternative hypotheses and additional analyses as they are generated; and…

  5. Spectrum analysis on quality requirements consideration in software design documents.

    Science.gov (United States)

    Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji

    2013-12-01

    Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.
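
    The "spectrum" idea lends itself to a compact illustration. A hedged sketch, assuming one keyword list per quality characteristic (the term lists below are illustrative, not the authors' actual dictionaries): it derives a document's spectrum and compares a requirements document against its design document:

    ```python
    import numpy as np

    QUALITY_TERMS = {
        "reliability": ["failure", "recover", "fault", "availability"],
        "usability":   ["user", "learn", "operable", "interface"],
        "efficiency":  ["response time", "throughput", "resource"],
        "security":    ["authenticate", "encrypt", "access control"],
    }

    def spectrum(text):
        """Numeric profile: one term-frequency bin per quality characteristic."""
        text = text.lower()
        return np.array([sum(text.count(t) for t in terms)
                         for terms in QUALITY_TERMS.values()], float)

    def similarity(spec_a, spec_b):
        """Cosine similarity between two document spectra."""
        na, nb = np.linalg.norm(spec_a), np.linalg.norm(spec_b)
        return float(spec_a @ spec_b / (na * nb)) if na and nb else 0.0

    # req_spec = spectrum(open("requirements.txt").read())
    # des_spec = spectrum(open("design.txt").read())
    # A low similarity hints that quality requirements were lost in the design.
    ```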

  6. The role and design of screen images in software documentation.

    NARCIS (Netherlands)

    van der Meij, Hans

    2000-01-01

    Software documentation for the novice user typically must try to achieve at least three goals: to support basic knowledge and skills development; to prevent or support the handling of mistakes; and to support the joint handling of manual, input device and screen. This paper concentrates on the third goal: how screen images can support the joint handling of manual, input device and screen.

  7. A Conceptual Model for Multidimensional Analysis of Documents

    Science.gov (United States)

    Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles

    Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of the Internet and the development of semi-structured data exchange formats (such as XML), it is possible to consider entire fragments of data, such as documents, as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.

  8. Oblique aerial images and their use in cultural heritage documentation

    DEFF Research Database (Denmark)

    Höhle, Joachim

    2013-01-01

    …on automatically derived point clouds of high density. Each point will be supplemented with colour and other attributes. The problems experienced in these processes and the solutions to these problems are presented. The applied tools are a combination of professional tools, free software, and own software developments. Special attention is given to the quality of input images. Investigations are carried out on edges in the images. The combination of oblique and nadir images enables new possibilities in the processing. The use of the near-infrared channel besides the red, green, and blue channels of the applied…

  9. Technical requirements document for the waste flow analysis

    International Nuclear Information System (INIS)

    Shropshire, D.E.

    1996-05-01

    The purpose of this Technical Requirements Document is to define the top-level customer requirements for the Waste Flow Analysis task. These requirements, once agreed upon with DOE, will be used to flow down subsequent development requirements to the model specifications. This document is intended to be a ''living document'' which will be modified over the course of the execution of this work element. Initial concurrence with the technical functional requirements from Environmental Management (EM)-50 is needed before the work plan can be developed.

  10. SNF fuel retrieval sub project safety analysis document

    International Nuclear Information System (INIS)

    BERGMANN, D.W.

    1999-01-01

    This safety analysis is for the SNF Fuel Retrieval (FRS) Sub Project. The FRS equipment will be added to K West and K East Basins to facilitate retrieval, cleaning and repackaging of the spent nuclear fuel into Multi-Canister Overpack baskets. The document includes a hazard evaluation, identifies bounding accidents, documents analyses of the accidents and establishes safety class or safety significant equipment to mitigate accidents as needed.

  11. SNF fuel retrieval sub project safety analysis document

    Energy Technology Data Exchange (ETDEWEB)

    BERGMANN, D.W.

    1999-02-24

    This safety analysis is for the SNF Fuel Retrieval (FRS) Sub Project. The FRS equipment will be added to K West and K East Basins to facilitate retrieval, cleaning and repackaging the spent nuclear fuel into Multi-Canister Overpack baskets. The document includes a hazard evaluation, identifies bounding accidents, documents analyses of the accidents and establishes safety class or safety significant equipment to mitigate accidents as needed.

  12. Adaptive removal of background and white space from document images using seam categorization

    Science.gov (United States)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
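
    A greatly simplified sketch of the underlying idea (not the authors' seam categorization): rows whose pixels are nearly uniform are background candidates, and long runs of such rows can be trimmed down to a kept margin. All thresholds are illustrative:

    ```python
    import numpy as np
    from PIL import Image

    def shrink_vertical(path, keep=10, tol=4.0):
        """Remove uniform background rows, keeping `keep` rows per run."""
        img = np.asarray(Image.open(path).convert("L"))
        uniform = img.std(axis=1) < tol          # near-constant rows = background
        keep_mask = np.ones(len(uniform), bool)
        run_start = None
        for i, u in enumerate(np.append(uniform, False)):
            if u and run_start is None:
                run_start = i                    # a background run begins
            elif not u and run_start is not None:
                if i - run_start > keep:         # trim the run past the margin
                    keep_mask[run_start + keep:i] = False
                run_start = None
        return Image.fromarray(img[keep_mask])

    # shrink_vertical("page.png").save("page_shrunk.png")
    ```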

  13. The Role and Design of Screen Images in Software Documentation.

    Science.gov (United States)

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.

  14. Efficient document-image super-resolution using convolutional ...

    Indian Academy of Sciences (India)

    Ram Krishna Pandey

    2018-03-06

    Mar 6, 2018 ... of almost 43%, 45% and 57% on 75 dpi Tamil, English and Kannada images, respectively. Keywords. ... In our work, we have used a basic CNN with rectified linear unit (ReLU) and .... 4.3 Dataset used for the study. Since the ...

  15. Oncological image analysis.

    Science.gov (United States)

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, but concentrating those in the authors' laboratories, and then outline opportunities and challenges for the next decade. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Delve: A Data Set Retrieval and Document Analysis System

    KAUST Repository

    Akujuobi, Uchenna Thankgod

    2017-12-29

    Academic search engines (e.g., Google Scholar or Microsoft Academic) provide a medium for retrieving various information on scholarly documents. However, most of these popular scholarly search engines overlook the area of data set retrieval, which should provide information on relevant data sets used for academic research. Due to the increasing volume of publications, it has become a challenging task to locate suitable data sets in a particular research area for benchmarking or evaluation. We propose Delve, a web-based system for data set retrieval and document analysis. This system is different from other scholarly search engines in that it provides a medium for both data set retrieval and real-time visual exploration and analysis of data sets and documents.

  17. Improved document image segmentation algorithm using multiresolution morphology

    Science.gov (United States)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg, which is also available in his open-source Leptonica library. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.
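
    A much-simplified sketch of the morphology idea (not Bloomberg's or the authors' exact algorithm): thin text strokes vanish under a strong opening at reduced resolution, so the surviving blobs mark non-text regions. Kernel size and scale factor are illustrative:

    ```python
    import cv2
    import numpy as np

    def nontext_mask(path):
        """Return a full-size mask of likely non-text (picture/graphic) regions."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        binary = cv2.threshold(img, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
        small = cv2.resize(binary, None, fx=0.25, fy=0.25,
                           interpolation=cv2.INTER_AREA)       # multiresolution step
        kernel = np.ones((9, 9), np.uint8)
        blobs = cv2.morphologyEx(small, cv2.MORPH_OPEN, kernel)  # erase thin strokes
        return cv2.resize(blobs, (img.shape[1], img.shape[0]),
                          interpolation=cv2.INTER_NEAREST)

    # mask = nontext_mask("page.png"); text lies outside `mask`
    ```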

  18. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks and fish schools. Unlike partitional clustering algorithms such as K-means, the flocking-based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy retrieval and visualization of the clustering result. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.
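
    A toy sketch of the flocking principle (illustrative, not the ORNL implementation): each document is a boid on a 2-D grid, similar documents attract each other, dissimilar ones repel, and clusters emerge from purely local rules:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    feats = rng.random((n, 20))                    # document feature vectors
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    pos = rng.random((n, 2)) * 100.0               # boid positions on the grid

    for _ in range(200):                           # iterate the local rules
        for i in range(n):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nb = (d > 0) & (d < 10.0)              # boids in the neighbourhood
            if not nb.any():
                continue
            sim = feats[nb] @ feats[i]             # cosine similarity in (0, 1]
            drift = ((pos[nb] - pos[i]) * (sim - 0.5)[:, None]).mean(axis=0)
            pos[i] += 0.5 * drift                  # attract if similar, repel if not

    # nearby positions in `pos` now indicate documents likely to share a cluster
    ```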

  19. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    …it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data… The chapter presents an overview of the generalities relevant for an understanding of Gabor analysis of functions on R^d. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows: Section 2 presents central tools from functional analysis… The application of Gabor expansions to image representation is considered in Sect. 6.
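
    For the image-processing case d = 2, Gabor analysis in practice often starts from a bank of orientation-tuned Gabor filters. A minimal sketch using OpenCV, where the kernel parameters and the random stand-in image are illustrative:

    ```python
    import cv2
    import numpy as np

    img = np.random.rand(128, 128).astype(np.float32)  # stand-in for a real image

    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):       # 4 orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kern))

    # Per-pixel energy of the strongest orientation response.
    energy = np.stack(responses).max(axis=0)
    ```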

  20. Investigating scientific literacy documents with linguistic network analysis

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry; Dolin, Jens

    2009-01-01

    International discussions of scientific literacy (SL) are extensive and numerous sizeable documents on SL exist. Thus, comparing different conceptions of SL is methodologically challenging. We developed an analytical tool which couples the theory of complex networks with text analysis in order...

  1. Cold Vacuum Drying facility design basis accident analysis documentation

    International Nuclear Information System (INIS)

    CROWE, R.D.

    2000-01-01

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report (FSAR), ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR. The calculations in this document address the design basis accidents (DBAs) selected for analysis in HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report'', Annex B, ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' The objective is to determine the quantity of radioactive particulate available for release at any point during processing at the Cold Vacuum Drying Facility (CVDF) and to use that quantity to determine the amount of radioactive material released during the DBAs. The radioactive material released is used to determine dose consequences to receptors at four locations, and the dose consequences are compared with the appropriate evaluation guidelines and release limits to ascertain the need for preventive and mitigative controls.

  2. Cold Vacuum Drying facility design basis accident analysis documentation

    Energy Technology Data Exchange (ETDEWEB)

    CROWE, R.D.

    2000-08-08

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report (FSAR), ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR. The calculations in this document address the design basis accidents (DBAs) selected for analysis in HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report'', Annex B, ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' The objective is to determine the quantity of radioactive particulate available for release at any point during processing at the Cold Vacuum Drying Facility (CVDF) and to use that quantity to determine the amount of radioactive material released during the DBAs. The radioactive material released is used to determine dose consequences to receptors at four locations, and the dose consequences are compared with the appropriate evaluation guidelines and release limits to ascertain the need for preventive and mitigative controls.

  3. Documentation and analysis for packaging limited quantity ice chests

    International Nuclear Information System (INIS)

    Nguyen, P.M.

    1995-01-01

    The purpose of this Documentation and Analysis for Packaging (DAP) is to document that ice chests meet the intent of the International Air Transport Association (IATA) and the U.S. Department of Transportation (DOT) Code of Federal Regulations as strong, tight containers for the packaging of limited quantities for transport. This DAP also outlines the packaging method used to protect the sample bottles from breakage. Because the ice chests meet the DOT requirements, they can be used to ship LTD QTY on the Hanford Site

  4. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared in order to support the theoretical understanding.

  5. Building a Digital Library for Multibeam Data, Images and Documents

    Science.gov (United States)

    Miller, S. P.; Staudigel, H.; Koppers, A.; Johnson, C.; Cande, S.; Sandwell, D.; Peckman, U.; Becker, J. J.; Helly, J.; Zaslavsky, I.; Schottlaender, B. E.; Starr, S.; Montoya, G.

    2001-12-01

    The Scripps Institution of Oceanography, the UCSD Libraries and the San Diego Supercomputing Center have joined forces to establish a digital library for accessing a wide range of multibeam and marine geophysical data, to a community that ranges from the MGG researcher to K-12 outreach clients. This digital library collection will include 233 multibeam cruises with grids, plots, photographs, station data, technical reports, planning documents and publications, drawn from the holdings of the Geological Data Center and the SIO Archives. Inquiries will be made through an Ocean Exploration Console, reminiscent of a cockpit display where a multitude of data may be displayed individually or in two or three-dimensional projections. These displays will provide access to cruise data as well as global databases such as Global Topography, crustal age, and sediment thickness, thus meeting the day-to-day needs of researchers as well as educators, students, and the public. The prototype contains a few selected expeditions, and a review of the initial approach will be solicited from the user community during the poster session. The search process can be focused by a variety of constraints: geospatial (lat-lon box), temporal (e.g., since 1996), keyword (e.g., cruise, place name, PI, etc.), or expert-level (e.g., K-6 or researcher). The Storage Resource Broker (SRB) software from the SDSC manages the evolving collection as a series of distributed but related archives in various media, from shipboard data through processing and final archiving. The latest version of MB-System provides for the systematic creation of standard metadata, and for the harvesting of metadata from multibeam files. Automated scripts will be used to load the metadata catalog to enable queries with an Oracle database management system. These new efforts to bridge the gap between libraries and data archives are supported by the NSF Information Technology and National Science Digital Library (NSDL) programs

  6. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides… The review covers reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques.

  7. Text extraction method for historical Tibetan document images based on block projections

    Science.gov (United States)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is considered as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using information on connected-component categories and corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
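
    A simplified sketch of the block-projection idea, where block size and density thresholds are illustrative: divide the binarized page into equal blocks, keep blocks whose ink density looks text-like, then locate text rows from the projection profile of the kept blocks:

    ```python
    import numpy as np

    def text_rows(binary, block=32, lo=0.02, hi=0.5):
        """binary: 2-D array with 1 = ink, 0 = background."""
        h, w = binary.shape
        mask = np.zeros_like(binary)
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                blk = binary[y:y + block, x:x + block]
                if lo < blk.mean() < hi:             # filter non-text blocks
                    mask[y:y + block, x:x + block] = blk
        profile = mask.sum(axis=1)                   # horizontal block projection
        return profile > 0.5 * profile.mean()        # rows likely holding text
    ```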

  8. [Psychoanalysis and Psychiatrie-Enquete: expert interviews and document analysis].

    Science.gov (United States)

    Söhner, Felicitas Petra; Fangerau, Heiner; Becker, Thomas

    2017-12-01

    Background The purpose of this paper is to analyse the perception of the role of psychoanalysis and psychoanalysts in the coming about of the Psychiatrie-Enquete in the Federal Republic of Germany (West Germany). Methods We performed a qualitative content analysis of expert interviews with persons involved in the Enquete (or witnessing the events as mental health professionals active at the time), a selective literature review and an analysis of documents on the Enquete process. Results Expert interviews, relevant literature and documents point to a role of psychoanalysis in the Enquete process. Psychoanalysts were considered to have been effective in the run-up to the Enquete and the work of the commission. Conclusion Psychoanalysis and a small number of psychoanalysts were perceived as being relevant in the overall process of the Psychiatrie-Enquete in West Germany. Georg Thieme Verlag KG Stuttgart · New York.

  9. Advances in oriental document analysis and recognition techniques

    CERN Document Server

    Lee, Seong-Whan

    1999-01-01

    In recent years, rapid progress has been made in computer processing of oriental languages, and the research developments in this area have resulted in tremendous changes in handwriting processing, printed oriental character recognition, document analysis and recognition, automatic input methodologies for oriental languages, etc. Advances in computer processing of oriental languages can also be seen in multimedia computing and the World Wide Web. Many of the results in those domains are presented in this book.

  10. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    Science.gov (United States)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a Video Spectral Comparator (VSC), which can examine fluorescence intensity images only, TFLI can detect questioned documents involving falsification or alteration. The TFLI system can enhance weak signals by an accumulation method. Two fluorescence intensity images separated by the delay time tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of fluorescent substances are represented by different colors, which makes it easy to detect the fluorescent substances and the sequence of handwritings. This demonstrates that TFLI is a powerful tool for forensic document examination. Further advantages of the TFLI system are ns-scaled precision preservation and powerful capture capability.
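
    With two gated intensity images separated by a known delay, a mono-exponential lifetime can be estimated pixel-wise via the standard rapid-lifetime-determination formula tau = dt / ln(I1/I2). A minimal sketch, as a generic reconstruction rather than the authors' exact fitting procedure:

    ```python
    import numpy as np

    def lifetime_image(I1, I2, dt_ns):
        """I1, I2: gated intensity images separated by delay dt_ns."""
        with np.errstate(divide="ignore", invalid="ignore"):
            tau = dt_ns / np.log(I1 / I2)    # rapid lifetime determination
        tau[~np.isfinite(tau)] = 0.0         # mask pixels with no usable signal
        return tau

    I1 = np.array([[200.0, 50.0]])
    I2 = np.array([[100.0, 25.0]])
    print(lifetime_image(I1, I2, dt_ns=5.0))  # ~7.21 ns for both pixels
    ```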

  11. LOCAL BINARIZATION FOR DOCUMENT IMAGES CAPTURED BY CAMERAS WITH DECISION TREE

    Directory of Open Access Journals (Sweden)

    Naser Jawas

    2012-07-01

    Character recognition in a document image captured by a digital camera requires a good binary image as the input for separating the text from the background. Global binarization methods do not provide such a good separation because of the uneven lighting in images captured by cameras. Local binarization methods overcome this problem but require a way to partition the large image into local windows properly. In this paper, we propose a local binarization method with dynamic image partitioning using an integral image, and a decision tree for the binarization decision. The integral image is used to estimate the number of text lines in the document image, and this number is then used to divide the document into local windows. The decision tree chooses a threshold for every local window. The results show that the proposed method separates the text from the background better than global thresholding, with a best OCR accuracy of 99.4% on the binarized images.
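
    A minimal sketch of integral-image-based local thresholding in the spirit of this paper; note that this is the well-known Bradley-style adaptive method with fixed windows, not the authors' decision-tree variant:

    ```python
    import numpy as np

    def adaptive_binarize(gray, window=25, t=0.15):
        """gray: 2-D uint8 image; returns 0 for text, 255 for background."""
        h, w = gray.shape
        ii = gray.astype(np.float64).cumsum(0).cumsum(1)  # integral image
        ii = np.pad(ii, ((1, 0), (1, 0)))
        r = window // 2
        out = np.zeros((h, w), np.uint8)
        for y in range(h):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            for x in range(w):
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                area = (y1 - y0) * (x1 - x0)
                # window sum in O(1) from the integral image
                s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                out[y, x] = 255 if gray[y, x] * area > s * (1 - t) else 0
        return out
    ```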

  12. Fast words boundaries localization in text fields for low quality document images

    Science.gov (United States)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions or glares may occur. Further document processing is complicated by document specifics: layout elements, complex backgrounds, static text, document security elements and a variety of text fonts. However, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than finding text in natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on the ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3
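
    A toy sketch of the sliding-window idea, where the simple ink-density score below is an illustrative stand-in for the paper's lightweight neural network: a window slides across the field image and positions with little ink are flagged as likely inter-word gaps:

    ```python
    import numpy as np

    def boundary_scores(field_img, win=8):
        """field_img: 2-D grayscale array of a single text field."""
        col_ink = (field_img < 128).mean(axis=0)          # ink fraction per column
        scores = []
        for x in range(field_img.shape[1] - win):
            # low ink inside the window suggests an inter-word gap
            scores.append(1.0 - col_ink[x:x + win].mean())
        return np.array(scores)

    # boundaries = np.where(boundary_scores(img) > 0.95)[0]
    ```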

  13. Document co-citation analysis to enhance transdisciplinary research

    Science.gov (United States)

    Trujillo, Caleb M.; Long, Tammy M.

    2018-01-01

    Specialized and emerging fields of research infrequently cross disciplinary boundaries and would benefit from frameworks, methods, and materials informed by other fields. Document co-citation analysis, a method developed by bibliometric research, is demonstrated as a way to help identify key literature for cross-disciplinary ideas. To illustrate the method in a useful context, we mapped peer-recognized scholarship related to systems thinking. In addition, three procedures for validation of co-citation networks are proposed and implemented. This method may be useful for strategically selecting information that can build consilience about ideas and constructs that are relevant across a range of disciplines. PMID:29308433
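
    The core counting step of document co-citation analysis is simple to sketch with networkx; the input lists below are illustrative. Two references are linked each time they are cited together, and the strongest links point to key literature:

    ```python
    from itertools import combinations
    import networkx as nx

    # Each citing paper is reduced to the list of references it cites.
    papers = [
        ["Forrester1961", "Meadows1972", "Senge1990"],
        ["Meadows1972", "Senge1990"],
        ["Senge1990", "Sterman2000"],
    ]

    G = nx.Graph()
    for refs in papers:
        for a, b in combinations(sorted(set(refs)), 2):
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)        # co-citation count

    strongest = max(G.edges(data=True), key=lambda e: e[2]["weight"])
    print(strongest)  # ('Meadows1972', 'Senge1990', {'weight': 2})
    ```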

  14. Developing Methods of praxeology to Perform Document-analysis

    DEFF Research Database (Denmark)

    Frederiksen, Jesper

    2016-01-01

    This paper provides a contribution to the methodological development of praxeological document analysis of neoliberal welfare state policies. Different institutions related to the Danish healthcare area transform international health policies, and these institutions produce a range of strategies… The different works are unique but at the same time part of a common neoliberal welfare state practice. They have a structural similarity as homologous strategies related to an institutional production field of health and social care services. From the construction of these strategies, it is thus possible to discuss more overall consequences of the neoliberal policies and their impact on nurses and their position as a health profession.

  15. Every document and picture tells a story: using internal corporate document reviews, semiotics, and content analysis to assess tobacco advertising.

    Science.gov (United States)

    Anderson, S J; Dewhirst, T; Ling, P M

    2006-06-01

    In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches.

  16. Planning Document for an NBSR Conversion Safety Analysis Report

    Energy Technology Data Exchange (ETDEWEB)

    Diamond D. J.; Baek J.; Hanson, A.L.; Cheng, L-Y.; Brown, N.; Cuadra, A.

    2013-09-25

    The NIST Center for Neutron Research (NCNR) is a reactor-laboratory complex providing the National Institute of Standards and Technology (NIST) and the nation with a world-class facility for the performance of neutron-based research. The heart of this facility is the National Bureau of Standards Reactor (NBSR). The NBSR is a heavy water moderated and cooled reactor operating at 20 MW. It is fueled with high-enriched uranium (HEU) fuel elements. A Global Threat Reduction Initiative (GTRI) program is underway to convert the reactor to low-enriched uranium (LEU) fuel. This program includes the qualification of the proposed fuel, a uranium-molybdenum alloy foil clad in an aluminum alloy, and the development of the fabrication techniques. This report is a planning document for the conversion Safety Analysis Report (SAR) that would be submitted to, and approved by, the Nuclear Regulatory Commission (NRC) before the reactor could be converted. This report follows the recommended format and content from the NRC codified in NUREG-1537, “Guidelines for Preparing and Reviewing Applications for the Licensing of Non-power Reactors,” Chapter 18, “Highly Enriched to Low-Enriched Uranium Conversions.” The emphasis herein is on the SAR chapters that require significant changes as a result of conversion, primarily Chapter 4, Reactor Description, and Chapter 13, Safety Analysis. The document provides information on the proposed design for the LEU fuel elements and identifies what information is still missing. This document is intended to assist ongoing fuel development efforts and to provide a platform for the development of the final conversion SAR. This report contributes directly to the reactor conversion pillar of the GTRI program, but also acts as a boundary condition for the fuel development and fuel fabrication pillars.

  17. Stress analysis of piping systems and piping supports. Documentation

    International Nuclear Information System (INIS)

    Rusitschka, Erwin

    1999-01-01

    The presentation is focused on the computer-aided tools and methods used by Siemens/KWU in the engineering activities for nuclear power plant design and service. In this multi-disciplinary environment, KWU has developed specific tools to support as-built documentation as well as service activities. A special application based on close-range photogrammetry (PHOCAS) has been developed to support revamp planning even in a high-level radiation environment. It comprises three completely inter-compatible expansion modules - Photo Catalog, Photo Database and 3D-Model - to generate objects which offer progressively more utilization and analysis options. CAD-based tools have also been developed to support the outage planning of NPPs. The presentation also gives an overview of the broad range of skills and references in: Plant Layout and Design using 3D-CAD Tools; Evaluation of Earthquake Safety (Seismic Screening); Revamps in Existing Plants; Inter-disciplinary coordination of project engineering and execution fields; Consulting and Assistance; Conceptual Studies; Stress Analysis of Piping Systems and Piping Supports; Documentation; Training and Support in CAD Design, etc. All activities are performed to the greatest extent possible using proven data-processing tools. (author)

  18. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  19. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    Science.gov (United States)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects are for example spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. The current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on the automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling in specific interior spaces and for specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  20. Non-Local Sparse Image Inpainting for Document Bleed-Through Removal

    Directory of Open Access Journals (Sweden)

    Muhammad Hanif

    2018-05-01

    Bleed-through is a frequent, pervasive degradation in ancient manuscripts, caused by ink that has seeped from the opposite side of the sheet. Bleed-through, appearing as extra interfering text, hinders document readability and makes it difficult to decipher the information content. Digital image restoration techniques have been successfully employed to remove or significantly reduce this distortion. This paper proposes a two-step restoration method for documents affected by bleed-through, exploiting information from the recto and verso images. First, the bleed-through pixels are identified, based on a non-stationary, linear model of the two texts overlapped in the recto-verso pair. In the second step, a dictionary learning-based sparse image inpainting technique, with non-local patch grouping, is used to reconstruct the bleed-through-contaminated image information. An overcomplete sparse dictionary is learned from the bleed-through-free image patches, which is then used to estimate a befitting fill-in for the identified bleed-through pixels. Non-local patch similarity is employed in the sparse reconstruction of each patch, to enforce local similarity. Thanks to the intrinsic image sparsity and non-local patch similarity, the natural texture of the background is well reproduced in the bleed-through areas, and even a possible overestimation of the bleed-through pixels is effectively corrected, so that the original appearance of the document is preserved. We evaluate the performance of the proposed method on the images of a popular database of ancient documents, and the results validate its performance compared to the state of the art.
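
    A much-simplified sketch of the second (inpainting) step, assuming the bleed-through mask is already known, using scikit-learn dictionary learning; the full method additionally uses non-local patch grouping, which is omitted here:

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def inpaint(img, mask, patch=8, atoms=64):
        """img: grayscale in [0, 1]; mask: True where bleed-through was found."""
        # Learn the dictionary only from patches free of bleed-through.
        clean = extract_patches_2d(np.where(mask, np.nan, img), (patch, patch))
        clean = clean[~np.isnan(clean).any(axis=(1, 2))]
        dico = MiniBatchDictionaryLearning(n_components=atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=4,
                                           random_state=0)
        dico.fit(clean.reshape(len(clean), -1))
        # Sparse-code every patch and rebuild the image from the codes.
        all_patches = extract_patches_2d(img, (patch, patch))
        codes = dico.transform(all_patches.reshape(len(all_patches), -1))
        recon = (codes @ dico.components_).reshape(all_patches.shape)
        filled = reconstruct_from_patches_2d(recon, img.shape)
        return np.where(mask, filled, img)   # replace only masked pixels
    ```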

  1. [Digitalization, archival storage and use of image documentation in the GastroBase-II system].

    Science.gov (United States)

    Kocna, P

    1997-05-14

    "GastroBase-II" is a module of the clinical information system "KIS-ComSyD"; The main part is represented by structured data-text with an expert system including on-line image digitalization in gastroenterology (incl. endoscopic, X-ray and endosonography pictures). The hardware and software of the GastroBase are described as well as six-years experiences with application of digitalized image data. An integration of a picture into text, reports, slides for a lecture or an electronic atlas is documented with examples. Briefly are reported out experiences with graphic editors (PhotoStyler), text editor (WordPerfect) and slide preparation for lecturing with the presentation software PowerPoint. The multimedia applications on the CD-ROM illustrate a modern trend using digitalized image documentation for pregradual and postgradual education.

  2. Pretest analysis document for Semiscale Test S-FS-1

    International Nuclear Information System (INIS)

    Chen, T.H.

    1985-02-01

    This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY21 code for Semiscale Test S-FS-1. The test will simulate the double-ended offset shear of the main steam line at the exit of the broken loop steam generator (downstream of the flow restrictor) and the subsequent plant recovery. The recovery portion of the test consists of a plant stabilization phase and a plant cooldown phase. The recovery procedures involve normal charging/letdown operation, pressurizer heater operation, secondary steam and feed of the unaffected steam generator, and pressurizer auxiliary spray. The test will be terminated after the unaffected steam generator and pressurizer pressures and liquid levels are stable, and the average primary fluid temperature is stable at about 480 K (405 °F) for at least 10 minutes.

  3. Document boundary determination using structural and lexical analysis

    Science.gov (United States)

    Taghva, Kazem; Cartright, Marc-Allen

    2009-01-01

    The document boundary determination problem is the process of identifying individual documents in a stack of papers. In this paper, we report on a classification system for automation of this process. The system employs features based on document structure and lexical content. We also report on experimental results to support the effectiveness of this system.

  4. Security analysis for biometric data in ID documents

    NARCIS (Netherlands)

    Schimke, S.; Kiltz, S.; Vielhauer, C.; Kalker, A.A.C.M.

    2005-01-01

    In this paper we analyze chances and challenges with respect to the security of using biometrics in ID documents. We identify goals for ID documents set by national and international authorities, and discuss the degree of security obtainable with the inclusion of biometrics into documents.

  5. Simplifying documentation while approaching site closure: integrated health and safety plans as documented safety analysis

    International Nuclear Information System (INIS)

    Brown, Tulanda

    2003-01-01

    At the Fernald Closure Project (FCP) near Cincinnati, Ohio, environmental restoration activities are supported by Documented Safety Analyses (DSAs) that combine the required project-specific Health and Safety Plans, Safety Basis Requirements (SBRs), and Process Requirements (PRs) into single Integrated Health and Safety Plans (I-HASPs). By isolating any remediation activities that deal with Enriched Restricted Materials, the SBRs and PRs assure that the hazard categories of former nuclear facilities undergoing remediation remain less than Nuclear. These integrated DSAs employ Integrated Safety Management methodology in support of simplified restoration and remediation activities that, so far, have resulted in the decontamination and demolition (D and D) of over 150 structures, including six major nuclear production plants. This paper presents the FCP method for maintaining safety basis documentation, using the D and D I-HASP as an example

  6. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
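
    A minimal sketch of the reranking idea, where all scores and names are illustrative assumptions: candidates from a text engine are re-scored by blending the text score with a visual relevance score derived from the images embedded in each page:

    ```python
    def rerank(candidates, visual_score, alpha=0.7):
        """candidates: list of (doc_id, text_score); visual_score: doc_id -> float."""
        blended = [(doc, alpha * ts + (1 - alpha) * visual_score(doc))
                   for doc, ts in candidates]
        return sorted(blended, key=lambda x: x[1], reverse=True)

    # Made-up scores: page B wins once its pictures are taken into account.
    cands = [("A", 0.80), ("B", 0.78)]
    print(rerank(cands, {"A": 0.2, "B": 0.9}.get))
    ```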

  7. Critical discourse analysis of social justice in nursing's foundational documents.

    Science.gov (United States)

    Valderama-Wallace, Claire P

    2017-07-01

    Social inequities threaten the health of the global population. A superficial acknowledgement of social justice by nursing's foundational documents may limit the degree to which nurses view injustice as relevant to nursing practice and education. The purpose was to examine conceptualizations of social justice and connections to broader contexts in the most recent editions. Critical discourse analysis examines and uncovers dynamics related to power, language, and inequality within the American Nurses Association's Code of Ethics, Scope and Standards of Practice, and Social Policy Statement. This analysis found ongoing inconsistencies in conceptualizations of social justice. Although the Code of Ethics integrates concepts related to social justice far more than the other two, tension between professionalism and social change emerges. The discourse of professionalism renders interrelated cultural, social, economic, historical, and political contexts nearly invisible. Greater consistency would provide a clearer path for nurses to mobilize and engage in the courageous work necessary to address social injustice. These findings also call for an examination of how nurses can critique and use the power and privilege of professionalism to amplify the connection between social institutions and health equity in nursing education, practice, and policy development. © 2017 Wiley Periodicals, Inc.

  8. GCtool for fuel cell systems design and analysis : user documentation.

    Energy Technology Data Exchange (ETDEWEB)

    Ahluwalia, R.K.; Geyer, H.K.

    1999-01-15

    GCtool is a comprehensive system design and analysis tool for fuel cell and other power systems. A user can analyze any configuration of component modules and flows under steady-state or dynamic conditions. Component models can be arbitrarily complex in modeling sophistication and new models can be added easily by the user. GCtool also treats arbitrary system constraints over part or all of the system, including the specification of nonlinear objective functions to be minimized subject to nonlinear equality or inequality constraints. This document describes the essential features of the interpreted language and the window-based GCtool environment. The system components incorporated into GCtool include a gas flow mixer, splitter, heater, compressor, gas turbine, heat exchanger, pump, pipe, diffuser, nozzle, steam drum, feed water heater, combustor, chemical reactor, condenser, fuel cells (proton exchange membrane, solid oxide, phosphoric acid, and molten carbonate), shaft, generator, motor, and methanol steam reformer. Several examples of system analysis at various levels of complexity are presented. Also given are instructions for generating two- and three-dimensional plots of data and the details of interfacing new models to GCtool.

  9. "Cyt/Nuc," a Customizable and Documenting ImageJ Macro for Evaluation of Protein Distributions Between Cytosol and Nucleus.

    Science.gov (United States)

    Grune, Tilman; Kehm, Richard; Höhn, Annika; Jung, Tobias

    2018-05-01

    Large amounts of data from multi-channel, high resolution, fluorescence microscopic images require tools that provide easy, customizable, and reproducible high-throughput analysis. The freeware "ImageJ" has become one of the standard tools for scientific image analysis. Since ImageJ offers recording of "macros," even a complex multi-step process can easily be applied fully automatically to large numbers of images, both saving time and reducing subjective human evaluation. In this work, we present "Cyt/Nuc," an ImageJ macro able to recognize and to compare the nuclear and cytosolic areas of tissue samples, in order to investigate distributions of immunostained proteins between both compartments, while it documents in detail the whole process of evaluation and pattern recognition. As a practical example, the redistribution of the 20S proteasome, the main intracellular protease in mammalian cells, is investigated in NZO-mouse liver after feeding the animals different diets. A significant shift in proteasomal distribution between cytosol and nucleus in response to metabolic stress was revealed using "Cyt/Nuc" via automated quantification of thousands of nuclei within minutes. "Cyt/Nuc" is easy to use and highly customizable, matches the precision of careful manual evaluation and bears the potential for quick detection of any shift in intracellular protein distribution. © 2018 The Authors. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
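
    ImageJ macros are written in ImageJ's own macro language, so purely as an illustration of the measurement the abstract describes, here is a rough Python analogue using scikit-image: nuclei are segmented from a nuclear stain, and the mean protein intensity inside the nuclear mask is compared with the mean outside it. The Otsu threshold, the 50-pixel object filter, and the treatment of every non-nuclear pixel as cytosol are simplifying assumptions of this sketch, not details of the "Cyt/Nuc" macro.

      import numpy as np
      from skimage import filters, morphology

      def nuc_cyt_ratio(nuclear_stain, protein_channel):
          # Segment nuclei by Otsu thresholding, then drop tiny specks.
          nuclei = nuclear_stain > filters.threshold_otsu(nuclear_stain)
          nuclei = morphology.remove_small_objects(nuclei, min_size=50)
          # Everything else is treated as cytosol (ignores true background).
          cytosol = ~nuclei
          return protein_channel[nuclei].mean() / protein_channel[cytosol].mean()

      rng = np.random.default_rng(0)
      dapi = filters.gaussian(rng.random((256, 256)), sigma=8)  # blob-like fake nuclei
      signal = rng.random((256, 256))
      print(f"nuclear/cytosolic intensity ratio: {nuc_cyt_ratio(dapi, signal):.2f}")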

  10. Documented Safety Analysis for the B695 Segment

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., {sup 90}Sr, {sup 137}Cs, or {sup 3}H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building

  11. Documented Safety Analysis for the B695 Segment

    International Nuclear Information System (INIS)

    Laycak, D.

    2008-01-01

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping

  12. An alternate way for image documentation in gamma camera processing units

    International Nuclear Information System (INIS)

    Schneider, P.

    1980-01-01

    For documentation of images and curves generated by a gamma camera processing system, a film exposure tool from a CT system was linked to the video monitor by use of a resistance bridge. The machine has a stock capacity of 100 plane films. An advantage is that no interface is needed: the complete information on the monitor is transferred to the plane film, and compared with software-controlled data output on a printer or plotter the device saves a tremendous amount of time. (orig.) [de

  13. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis, and this paper presents a number of advanced techniques and demonstrates some of the advanced medical image analysis techniques they make possible. A number of rigid and non-rigid medical image alignment algorithms, for equivalent and for merely consistent anatomical structures respectively, are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented, demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments, with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why
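
    As a self-contained taste of what rigid alignment involves, the sketch below estimates a pure translation between two images by FFT phase correlation. This is one of the simplest registration techniques and is only an illustration; pipelines of the kind discussed above add rotation, scaling, multi-resolution search and non-rigid deformation models on top of such a step.

      import numpy as np

      def phase_correlation_shift(fixed, moving):
          # Cross-power spectrum, normalized so only phase information remains.
          F = np.fft.fft2(fixed)
          M = np.fft.fft2(moving)
          cross = np.conj(F) * M
          cross /= np.abs(cross) + 1e-12
          corr = np.fft.ifft2(cross).real
          # The correlation peak sits at the translation of `moving` relative
          # to `fixed`; wrap peaks past the midpoint around to negative shifts.
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return tuple(int(p) if p <= s // 2 else int(p) - s
                       for p, s in zip(peak, corr.shape))

      img = np.zeros((64, 64))
      img[20:30, 20:30] = 1.0
      moved = np.roll(img, (5, -3), axis=(0, 1))
      print(phase_correlation_shift(img, moved))  # -> (5, -3)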

  14. Integrated computer-aided forensic case analysis, presentation, and documentation based on multimodal 3D data.

    Science.gov (United States)

    Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin

    2018-06-01

    Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) are increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory. Their interpretation and understanding requires radiological knowledge. Findings in tomography data must not only be revealed, but should also be jointly studied with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts to reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings like jurists and laymen. The capability of the presented approach in the context of case analysis, its potential to speed up legal procedures and to ultimately enhance legal certainty is demonstrated by introducing a number of

  15. Documented Safety Analysis for the Waste Storage Facilities March 2010

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D T

    2010-03-05

    This Documented Safety Analysis (DSA) for the Waste Storage Facilities was developed in accordance with 10 CFR 830, Subpart B, 'Safety Basis Requirements,' and utilizes the methodology outlined in DOE-STD-3009-94, Change Notice 3. The Waste Storage Facilities consist of Area 625 (A625) and the Decontamination and Waste Treatment Facility (DWTF) Storage Area portion of the DWTF complex. These two areas are combined into a single DSA, as their functions as storage for radioactive and hazardous waste are essentially identical. The B695 Segment of DWTF is addressed under a separate DSA. This DSA provides a description of the Waste Storage Facilities and the operations conducted therein; identification of hazards; analyses of the hazards, including inventories, bounding releases, consequences, and conclusions; and programmatic elements that describe the current capacity for safe operations. The mission of the Waste Storage Facilities is to safely handle, store, and treat hazardous waste, transuranic (TRU) waste, low-level waste (LLW), mixed waste, combined waste, nonhazardous industrial waste, and conditionally accepted waste generated at LLNL (as well as small amounts from other DOE facilities).

  16. Documented Safety Analysis for the Waste Storage Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-06-16

    This documented safety analysis (DSA) for the Waste Storage Facilities was developed in accordance with 10 CFR 830, Subpart B, 'Safety Basis Requirements', and utilizes the methodology outlined in DOE-STD-3009-94, Change Notice 3. The Waste Storage Facilities consist of Area 625 (A625) and the Decontamination and Waste Treatment Facility (DWTF) Storage Area portion of the DWTF complex. These two areas are combined into a single DSA, as their functions as storage for radioactive and hazardous waste are essentially identical. The B695 Segment of DWTF is addressed under a separate DSA. This DSA provides a description of the Waste Storage Facilities and the operations conducted therein; identification of hazards; analyses of the hazards, including inventories, bounding releases, consequences, and conclusions; and programmatic elements that describe the current capacity for safe operations. The mission of the Waste Storage Facilities is to safely handle, store, and treat hazardous waste, transuranic (TRU) waste, low-level waste (LLW), mixed waste, combined waste, nonhazardous industrial waste, and conditionally accepted waste generated at LLNL (as well as small amounts from other DOE facilities).

  17. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image or the image itself in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  18. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
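
    For the geometric-correction step, a minimal OpenCV sketch is shown below. It assumes the four page corners are already known and hard-coded, and the file names are hypothetical; the paper instead derives the rectification from horizontal and vertical vanishing points estimated on a downsampled image.

      import cv2
      import numpy as np

      def correct_perspective(image, corners, out_w=800, out_h=1100):
          # Map the page's four corners (clockwise from top-left) onto an
          # upright rectangle, removing skew and perspective distortion.
          src = np.float32(corners)
          dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
          homography = cv2.getPerspectiveTransform(src, dst)
          return cv2.warpPerspective(image, homography, (out_w, out_h))

      page = cv2.imread("document_photo.jpg")  # hypothetical input image
      flat = correct_perspective(page, [(120, 80), (980, 140), (940, 1320), (60, 1250)])
      cv2.imwrite("document_flat.png", flat)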

  19. Molecular imaging of banknote and questioned document using solvent-free gold nanoparticle-assisted laser desorption/ionization imaging mass spectrometry.

    Science.gov (United States)

    Tang, Ho-Wai; Wong, Melody Yee-Man; Chan, Sharon Lai-Fung; Che, Chi-Ming; Ng, Kwan-Ming

    2011-01-01

    Direct chemical analysis and molecular imaging of questioned documents in a non- or minimally destructive manner is important in forensic science. Here, we demonstrate that solvent-free gold-nanoparticle-assisted laser desorption/ionization mass spectrometry is a sensitive and minimally destructive method for direct detection and imaging of ink and visible and/or fluorescent dyes printed on banknotes or written on questioned documents. Argon ion sputtering of a gold foil allows homogeneous coating of a thin layer of gold nanoparticles on banknotes and checks in a dry state without delocalizing spatial distributions of the analytes. Upon N₂ laser irradiation of the gold nanoparticle-coated banknotes or checks, abundant ions are desorbed and detected. Recording the spatial distributions of the ions can reveal the molecular images of visible and fluorescent ink printed on banknotes and determine the printing order of different inks, which may be useful in differentiating real banknotes from fakes. The method can also be applied to identify forged parts in questioned documents, such as number/writing alteration on a check, by tracing the different writing patterns that come from different pens.

  20. Delve: A Data Set Retrieval and Document Analysis System

    KAUST Repository

    Akujuobi, Uchenna Thankgod; Zhang, Xiangliang

    2017-01-01

    Academic search engines (e.g., Google scholar or Microsoft academic) provide a medium for retrieving various information on scholarly documents. However, most of these popular scholarly search engines overlook the area of data set retrieval, which

  1. USE OF IMAGE BASED MODELLING FOR DOCUMENTATION OF INTRICATELY SHAPED OBJECTS

    Directory of Open Access Journals (Sweden)

    M. Marčiš

    2016-06-01

    Full Text Available In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to use traditional surveying or terrestrial laser scanning effectively due to the shape of the object, its dimensions and the crowded environment. The current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on the automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  2. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer and the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly
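
    As a concrete example of the graphical methods the review covers, below is a minimal Logan analysis in Python. The synthetic one-tissue-compartment curves, the t* cut-off and the trapezoidal integration are illustrative assumptions, not details taken from the text.

      import numpy as np

      def logan_vt(t, ct, cp, t_star=20.0):
          # Cumulative trapezoidal integrals of the tissue and plasma curves.
          int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
          int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
          late = t >= t_star  # Logan plots become linear only after some time t*
          x = int_cp[late] / ct[late]
          y = int_ct[late] / ct[late]
          slope, _ = np.polyfit(x, y, 1)
          return slope  # slope estimates the total distribution volume V_T

      # Synthetic one-tissue-compartment data with V_T = K1/k2 = 0.5/0.1 = 5.
      t = np.arange(0.0, 90.0, 1.0)                     # minutes, dt = 1
      cp = np.exp(-0.05 * t)                            # plasma input curve
      ct = np.convolve(cp, 0.5 * np.exp(-0.1 * t))[:t.size] * 1.0
      print(f"estimated V_T: {logan_vt(t, ct, cp):.2f} (true value for this model: 5.0)")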

  3. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code

  4. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  5. Feasibility Study of Low-Cost Image-Based Heritage Documentation in Nepal

    Science.gov (United States)

    Dhonju, H. K.; Xiao, W.; Sarhosis, V.; Mills, J. P.; Wilkinson, S.; Wang, Z.; Thapa, L.; Panday, U. S.

    2017-02-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g. earthquakes, flooding, rainfall), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling of cultural heritage sites, and explores the practicality of using photogrammetry in Nepal. The full pipeline of 3D modelling for heritage documentation and conservation, including visualisation, reconstruction, and structural analysis, is proposed. In addition, crowdsourcing is discussed as a method of data collection of growing prominence.

  6. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  7. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  8. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
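
    To make the PLS-DA step concrete, here is a minimal scikit-learn sketch in Python. The random cube and labels merely stand in for real near-infrared data, and the choice of five latent variables is arbitrary; the tutorial itself selects and validates components far more carefully.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Hypothetical data: a hyperspectral cube (rows x cols x wavelengths)
      # is unfolded into a pixel-by-wavelength matrix before modelling.
      rng = np.random.default_rng(1)
      cube = rng.random((50, 50, 100))
      X = cube.reshape(-1, cube.shape[-1])

      # Dummy binary labels for a two-class problem (e.g., plastic A vs. B).
      y = rng.integers(0, 2, size=X.shape[0])

      # PLS-DA = PLS regression on a dummy-coded response, thresholded at 0.5.
      pls = PLSRegression(n_components=5)
      pls.fit(X, y)
      pred = (pls.predict(X).ravel() > 0.5).astype(int)
      print("training accuracy:", (pred == y).mean())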

  9. Documenting Bronze Age Akrotiri on Thera Using Laser Scanning, Image-Based Modelling and Geophysical Prospection

    Science.gov (United States)

    Trinks, I.; Wallner, M.; Kucera, M.; Verhoeven, G.; Torrejón Valdelomar, J.; Löcker, K.; Nau, E.; Sevara, C.; Aldrian, L.; Neubauer, E.; Klein, M.

    2017-02-01

    The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, being located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project has been conducted with support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modeling. Additionally, non-invasive geophysical prospection has been tested in order to investigate its potential to explore and map yet buried archaeological remains. This article describes the project and the generated results.

  10. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed. Numerous applications, covering remote sensing images, biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  11. TOF-SIMS Analysis of Red Color Inks of Writing and Printing Tools on Questioned Documents.

    Science.gov (United States)

    Lee, Jihye; Nam, Yun Sik; Min, Jisook; Lee, Kang-Bong; Lee, Yeonhee

    2016-05-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a well-established surface technique that provides both elemental and molecular information from several monolayers of a sample surface while also allowing depth profiling or image mapping to be performed. Static TOF-SIMS with improved performance has expanded the application of TOF-SIMS to the study of a variety of organic, polymeric, biological, archaeological, and forensic materials. In forensic investigation, the use of a minimal sample for the analysis is preferable. Although the TOF-SIMS technique is destructive, the probing beams have microsized diameters, so only a small portion of the questioned sample is necessary for the analysis, leaving the rest available for other analyses. In this study, TOF-SIMS and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy were applied to the analysis of several different pen inks, red sealing inks, and printed patterns on paper. The overlapping areas of ballpoint pen writing, red seal stamping, and laser printing in a document were investigated to identify the sequence of recording. The sequence relations for various cases were determined from the TOF-SIMS mapping image and the depth profile. TOF-SIMS images were also used to investigate numbers or characters altered with two different red pens. TOF-SIMS was successfully used to determine the sequence of intersecting lines and the forged numbers on the paper. © 2016 American Academy of Forensic Sciences.

  12. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand-alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans

  13. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  14. The cigarette pack as image: new evidence from tobacco industry documents.

    Science.gov (United States)

    Wakefield, M; Morley, C; Horan, J K; Cummings, K M

    2002-03-01

    To gain an understanding of the role of pack design in tobacco marketing. A search of tobacco company document sites using a list of specified search terms was undertaken during November 2000 to July 2001. Documents show that, especially in the context of tighter restrictions on conventional avenues for tobacco marketing, tobacco companies view cigarette packaging as an integral component of marketing strategy and a vehicle for (a) creating significant in-store presence at the point of purchase, and (b) communicating brand image. Market testing results indicate that such imagery is so strong as to influence smokers' taste ratings of the same cigarettes when packaged differently. Documents also reveal the careful balancing act that companies have employed in using pack design and colour to communicate the impression of lower tar or milder cigarettes, while preserving perceived taste and "satisfaction". Systematic and extensive research is carried out by tobacco companies to ensure that cigarette packaging appeals to selected target groups, including young adults and women. Cigarette pack design is an important communication device for cigarette brands and acts as an advertising medium. Many smokers are misled by pack design into thinking that cigarettes may be "safer". There is a need to consider regulation of cigarette packaging.

  15. A Data Analysis of Naval Air Systems Command Funding Documents

    Science.gov (United States)

    2017-06-01

    influence trend lines over time. 4. What should be the benchmarks of performance related to the purchase request process within NAVAIR program offices... able to determine that the number of line items on a purchase request grows by 39% for intergovernmental transactions when requiring amendments

  16. Army Materiel Requirements Documents: Qualitative Analysis of Efficiency and Effectiveness

    Science.gov (United States)

    2013-06-01

    focuses on the program’s time to execute their mission based off the MRDs/MCDs. We measure efficiency based on two BPP initiatives: (1) Build...definitions of each BPP initiative during our interviews. A poor rating indicates the requirement documents did not sufficiently support the program...in terms of efficiency and effectiveness according to BPP initiatives. For efficiency, we assign a qualitative measure based on SME responses across

  17. Multispectral analysis of multimodal images

    Energy Technology Data Exchange (ETDEWEB)

    Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))

    2009-02-15

    An increasing number of multimodal images represents a valuable increase in available image information, but at the same time complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM-algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN-algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images comprises the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that seem sensible. Discussion. The MSA software appears to be a valuable tool for image analysis with multimodal images at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through, for example, histological sections.
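
    A minimal sketch of the k-means option described above, in Python: each voxel of a set of co-registered images becomes one multispectral feature vector, and the voxels are clustered into tissue classes. The class count and the random test images are assumptions of this illustration, and the authors' EM-based multinormal classifier is not reproduced here.

      import numpy as np
      from sklearn.cluster import KMeans

      def segment_multimodal(volumes, n_classes=4):
          # Stack co-registered images so each voxel is one feature vector.
          features = np.stack([v.ravel() for v in volumes], axis=1)
          km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
          return km.fit_predict(features).reshape(volumes[0].shape)

      rng = np.random.default_rng(2)
      t1, t2 = rng.random((64, 64)), rng.random((64, 64))  # fake T1/T2 maps
      print(np.bincount(segment_multimodal([t1, t2]).ravel()))  # voxels per class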

  18. 3D Documentation of Archaeological Excavations Using Image-Based Point Cloud

    Directory of Open Access Journals (Sweden)

    Umut Ovalı

    2017-03-01

    Full Text Available Rapid progress in digital technology enables us to create three-dimensional models using digital images. The low cost, time efficiency and accurate results of this method raise the question of whether this technique can be an alternative to conventional documentation techniques, which generally are 2D orthogonal drawings. Accurate and detailed 3D models of archaeological features have potential for many other purposes besides geometric documentation. This study presents a recent image-based three-dimensional registration technique employed in 2013 at an ancient city in Turkey, using "Structure from Motion" (SfM) algorithms. Commercial software was applied to investigate whether this method can be used as an alternative to other techniques. Mesh models of some sections of the excavation site were produced from point clouds derived from the digital photographs. Accuracy assessment of the produced model was realized by comparing the directly measured coordinates of the ground control points with those derived from the model. The obtained results show that the accuracy is around 1.3 cm.

  19. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution...

  1. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera; the most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits, eight of which are summarised in one byte; therefore, grey values can range from 0 to 255 (2^8 = 256 levels). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel by pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
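
    A minimal Python sketch of the shading correction and histogram stretch described above; the division variant is shown, and the clipping and 8-bit rescaling are implementation choices of this illustration rather than prescriptions from the text.

      import numpy as np

      def shading_correct(image, background):
          # Divide by the background (white) image to even out illumination...
          corrected = image.astype(float) / np.clip(background.astype(float), 1.0, None)
          # ...then stretch the histogram over the full 8-bit grey-value range.
          lo, hi = corrected.min(), corrected.max()
          stretched = (corrected - lo) / (hi - lo) if hi > lo else np.zeros_like(corrected)
          return (stretched * 255).astype(np.uint8)

      # Synthetic example: a flat scene under a left-to-right illumination ramp.
      ramp = np.linspace(0.5, 1.0, 256)
      background = np.tile(ramp * 200, (256, 1))
      scene = background * 0.7
      print(shading_correct(scene, background).std())  # near 0: gradient removed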

  2. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  3. Image analysis enhancement and interpretation

    International Nuclear Information System (INIS)

    Glauert, A.M.

    1978-01-01

    The necessary practical and mathematical background is provided for the analysis of an electron microscope image in order to extract the maximum amount of structural information. Instrumental methods of image enhancement are described, including the use of the energy-selecting electron microscope and the scanning transmission electron microscope. The problems of image interpretation are considered with particular reference to the limitations imposed by radiation damage and specimen thickness. A brief survey is given of the methods for producing a three-dimensional structure from a series of two-dimensional projections, although the emphasis is on the analysis, processing and interpretation of the two-dimensional projection of a structure. (Auth.)

  4. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    Full Text Available This article deals with image and data analysis of the recorded video-sequences of strabistic infants. It describes a unique noninvasive measuring system for infants based on two measuring methods (position of the I. Purkynje image with relation to the centre of the lens, and eccentric photorefraction). The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (diopters and degrees of movement).

  5. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book ”Image and Video Processing”, second edition by Thomas Moeslund. The aim...... of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  6. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  7. Documentation of SPECTROM-55: A finite element thermohydrogeological analysis program

    International Nuclear Information System (INIS)

    Osnes, J.D.; Ratigan, J.L.; Loken, M.C.; Parrish, D.K.

    1985-12-01

    SPECTROM-55 is a finite element computer program developed by RE/SPEC Inc. for analyses of coupled heat and fluid transfer through fully saturated porous media. The theoretical basis for the mathematical model, the implementation of the mathematical model into the computer code, the verification and validation efforts with the computer code, and the code support and continuing documentation are described in this document. The program is especially suited for analyses of the regional hydrogeology in the vicinity of a heat-generating nuclear waste repository. These applications typically involve forced and free convection in a ground-water flow system. The program provides transient or steady-state temperatures, pressures, and fluid velocities resulting from the application of a variety of initial and boundary conditions to bodies with complicated shapes. The boundary conditions include constant heat and fluid fluxes, convective heat transfer, constant temperature, and constant pressure. Initial temperatures and pressures can be specified. Composite systems of anisotropic materials, such as geologic strata, can be defined in either planar or axisymmetric configurations. Constant or varying volumetric heat generation, such as decaying heat generation from radioactive waste, can be specified

  8. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science

    Science.gov (United States)

    Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu

    2016-01-01

    We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) for deciphering obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired the TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.
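
    The lifetime estimation underlying the method can be illustrated with a few lines of Python: the mean intensity of an ink region measured at several gate delays is fitted with a mono-exponential decay. The numbers below are fabricated for the example (roughly a 10 ns lifetime), not measurements from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def mono_exp(t, amplitude, tau, offset):
          # Simple single-lifetime luminescence decay model.
          return amplitude * np.exp(-t / tau) + offset

      # Hypothetical gated-ICCD readings: mean intensity of one ink region
      # at several delay times after the laser pulse (arbitrary units, ns).
      delays = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
      intensity = np.array([100.0, 61.0, 37.5, 14.0, 2.1, 0.2])

      params, _ = curve_fit(mono_exp, delays, intensity, p0=(100.0, 10.0, 0.0))
      print(f"estimated luminescence lifetime: {params[1]:.1f} ns")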

  9. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of the existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, the system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described [fr

  10. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    Science.gov (United States)

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  11. An analysis of electronic document management in oncology care.

    Science.gov (United States)

    Poulter, Thomas; Gannon, Brian; Bath, Peter A

    2012-06-01

    In this research in progress, a reference model for the use of electronic patient record (EPR) systems in oncology is described. The model, termed CICERO, comprises technical and functional components, and emphasises usability, clinical safety and user acceptance. One of the functional components of the model, an electronic document and records management (EDRM) system, is monitored in the course of its deployment at a leading oncology centre in the UK. Specifically, the user requirements and design of the EDRM solution are described. The study is interpretative and forms part of a wider research programme to define and validate the CICERO model. Preliminary conclusions confirm the importance of a socio-technical perspective in Onco-EPR system design.

  12. Pretest analysis document for Test S-FS-7

    International Nuclear Information System (INIS)

    Hall, D.G.

    1985-06-01

    This report documents the pretest calculations completed for Semiscale Test S-FS-7. This test will simulate a transient initiated by a 14.3% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-7 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or to plant integrity. 3 refs., 15 figs., 5 tabs

  13. Pretest analysis document for Test S-FS-6

    International Nuclear Information System (INIS)

    Shaw, R.A.; Hall, D.G.

    1985-05-01

    This report documents the pretest analyses completed for Semiscale Test S-FS-6. This test will simulate a transient initiated by a 100% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The enclosed analyses include a RELAP5/MOD2/CY21 code calculation and preliminary results from a facility hot, integrated test which was conducted to near S-FS-6 specifications. The results of these analyses indicate that the test objectives for Test S-FS-6 can be achieved. The primary system overpressurization will pose no threat to personnel or plant integrity

  14. Pretest analysis document for Test S-FS-11

    International Nuclear Information System (INIS)

    Hall, D.G.; Shaw, R.A.

    1985-07-01

    This report documents the pretest calculations completed for Semiscale Test S-FS-11. This test will simulate a transient initiated by a 50% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-11 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or plant integrity. 3 refs., 15 figs., 5 tabs

  15. Patients' Care Needs: Documentation Analysis in General Hospitals.

    Science.gov (United States)

    Paans, Wolter; Müller-Staub, Maria

    2015-10-01

    The purpose of the study is (a) to describe care needs derived from records of patients in Dutch hospitals, and (b) to evaluate whether nurses employed the NANDA-I classification to formulate patients' care needs. A stratified cross-sectional random-sampling nursing documentation audit was conducted employing the D-Catch instrument in 10 hospitals comprising 37 wards. The most prevalent nursing diagnoses were acute pain, nausea, fatigue, and risk for impaired skin integrity. Most care needs were determined in physiological health patterns and few in psychosocial patterns. To perform effective interventions leading to high-quality nursing-sensitive outcomes, nurses should also diagnose patients' care needs in the health management, value-belief, and coping stress patterns. © 2014 NANDA International, Inc.

  16. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  17. K West integrated water treatment system subproject safety analysis document

    International Nuclear Information System (INIS)

    SEMMENS, L.S.

    1999-01-01

    This Accident Analysis evaluates unmitigated accident scenarios, and identifies Safety Significant and Safety Class structures, systems, and components for the K West Integrated Water Treatment System

  18. K West integrated water treatment system subproject safety analysis document

    Energy Technology Data Exchange (ETDEWEB)

    SEMMENS, L.S.

    1999-02-24

    This Accident Analysis evaluates unmitigated accident scenarios, and identifies Safety Significant and Safety Class structures, systems, and components for the K West Integrated Water Treatment System.

  19. Pocket pumped image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kotov, I.V., E-mail: kotov@bnl.gov [Brookhaven National Laboratory, Upton, NY 11973 (United States); O' Connor, P. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Murray, N. [Centre for Electronic Imaging, Open University, Milton Keynes, MK7 6AA (United Kingdom)

    2015-07-01

    The pocket pumping technique is used to detect small electron trap sites. These traps, if present, degrade CCD charge transfer efficiency. To reveal traps in the active area, a CCD is illuminated with a flat field and, before the image is read out, the accumulated charges are moved back and forth a number of times in the parallel direction. As charges are moved over a trap, an electron is removed from the original pocket and re-emitted in the following pocket. As the process repeats, one pocket becomes depleted and the neighboring pocket accumulates an excess of charge. As a result, a "dipole" signal appears on the otherwise flat background level. The amplitude of the dipole signal depends on the trap pumping efficiency. This paper is focused on the trap identification technique and particularly on new methods developed for this purpose. A sensor with bad segments was deliberately chosen for algorithm development and to demonstrate the sensitivity and power of the new methods in uncovering sensor defects.
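
    A minimal sketch of how such dipoles can be located in a pumped flat-field image, in Python: a trap produces a depleted pixel next to an enhanced one along the parallel transfer direction, so adjacent row pairs with large deviations of opposite sign are flagged. The fixed threshold is an assumption of this illustration, not the authors' algorithm, which real analyses would scale with the shot noise.

      import numpy as np

      def find_dipoles(pumped, flat_level, threshold=5.0):
          dev = pumped.astype(float) - flat_level
          upper, lower = dev[:-1, :], dev[1:, :]
          # A dipole: two vertically adjacent pixels, both far from the flat
          # level and deviating in opposite directions.
          mask = (np.abs(upper) > threshold) & (np.abs(lower) > threshold) \
                 & (np.sign(upper) != np.sign(lower))
          return [(int(r), int(c)) for r, c in zip(*np.nonzero(mask))]

      image = np.full((100, 100), 1000.0)
      image[40, 17] -= 30.0
      image[41, 17] += 30.0  # synthetic trap dipole at rows 40/41, column 17
      print(find_dipoles(image, flat_level=1000.0))  # -> [(40, 17)]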

  20. EXPOSURE ANALYSIS MODELING SYSTEM (EXAMS): USER MANUAL AND SYSTEM DOCUMENTATION

    Science.gov (United States)

    The Exposure Analysis Modeling System, first published in 1982 (EPA-600/3-82-023), provides interactive computer software for formulating aquatic ecosystem models and rapidly evaluating the fate, transport, and exposure concentrations of synthetic organic chemicals - pesticides, ...

  1. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
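
    The record gives no equations, but the core of a linear attitude error analysis can be sketched as covariance propagation followed by a measurement update. The matrices below are illustrative placeholders, not ADEAS's actual dynamics or sensor models:

```python
import numpy as np

def covariance_step(P, F, Q, H, R):
    """One cycle of linear covariance analysis: propagate the attitude
    error covariance P through state transition F with process noise Q,
    then apply a measurement update with sensitivity H and noise R."""
    P_pred = F @ P @ F.T + Q                       # propagation
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # optimal gain
    return (np.eye(P.shape[0]) - K @ H) @ P_pred   # updated covariance

P = np.diag([1e-4, 1e-4, 1e-4])        # initial 3-axis attitude error (rad^2)
F = np.eye(3); Q = 1e-8 * np.eye(3)    # placeholder dynamics / process noise
H = np.eye(3); R = 1e-6 * np.eye(3)    # placeholder sensor model
print(covariance_step(P, F, Q, H, R))
```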

  2. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

    Background: With the deployment of Electronic Health Records (EHRs), interoperability testing in healthcare is becoming crucial. An EHR enables access to prior diagnostic information in order to assist in health decisions; it is a virtual system that results from the cooperation of several heterogeneous distributed systems, so interoperability between peers is essential. Achieving interoperability requires various types of testing: implementations need to be tested using software that simulates communication partners and that provides test data and test plans. Results: In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHRs are being deployed in several countries, and the EHR infrastructure will keep evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations, by site integrators to verify and test the interoperability of systems, and by developers to understand specification ambiguities or to resolve implementation difficulties.

  3. Communications data delivery system analysis : public workshop read-ahead document.

    Science.gov (United States)

    2012-04-09

    This document presents an overview of work conducted to date around development and analysis of communications data delivery systems for supporting transactions in the connected vehicle environment. It presents the results of technical analysis of ...

  4. Cost Analysis Sources and Documents Data Base Reference Manual (Update)

    Science.gov (United States)

    1989-06-01

    M: Reference Manual PRICE H: Training Course Workbook 11. Use in Cost Analysis. Important source of cost estimates for electronic and mechanical...Nature of Data. Contains many microeconomic time series by month or quarter. 5. Level of Detail. Very detailed. 6. Normalization Processes Required...Reference Manual. Moorestown, N.J.: GE Corporation, September 1986. 64. PRICE Training Course Workbook. Moorestown, N.J.: GE Corporation, February 1986

  5. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    We propose a novel approach for video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use the representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
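
    As a rough illustration of the idea (the authors' exact relation set and matrix layout are not reproduced here), a TRM-style description can be built by counting which temporal relations hold between pairs of segmented events:

```python
from itertools import product

# A simplified subset of Allen-style relations between (start, end) intervals.
def relation(a, b):
    if a[1] <= b[0]: return "before"
    if b[1] <= a[0]: return "after"
    if a[0] >= b[0] and a[1] <= b[1]: return "during"
    if b[0] >= a[0] and b[1] <= a[1]: return "contains"
    return "overlaps"

def trm(events_a, events_b):
    """Count how often each temporal relation holds between two event tracks."""
    counts = {r: 0 for r in ("before", "after", "during", "contains", "overlaps")}
    for a, b in product(events_a, events_b):
        counts[relation(a, b)] += 1
    return counts

# e.g. speech vs. applause segments, as (start, end) pairs in seconds
speech   = [(0, 10), (12, 30)]
applause = [(10, 12), (30, 33)]
print(trm(speech, applause))   # {'before': 3, 'after': 1, ...}
```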

  6. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    Science.gov (United States)

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  7. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years by researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing
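
    For readers who want to experiment, a decomposition of the kind the book formalizes can be computed with the PyWavelets library (unaffiliated with the book; the wavelet and level choices below are arbitrary):

```python
import numpy as np
import pywt

# Three-level 2-D multiresolution decomposition: each level splits the
# approximation into a coarser approximation plus horizontal/vertical/
# diagonal detail subbands.
image = np.random.rand(256, 256)
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

print("coarsest approximation:", coeffs[0].shape)
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"detail level {lvl}: subband shape {cH.shape}")

# perfect reconstruction (up to floating-point error)
recon = pywt.waverec2(coeffs, wavelet="db2")
assert np.allclose(image, recon)
```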

  8. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at Department of Computer Science at University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course, and a robot lab class are outlined....

  9. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  10. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  11. A Document Analysis of Teacher Evaluation Systems Specific to Physical Education

    Science.gov (United States)

    Norris, Jason M.; van der Mars, Hans; Kulinna, Pamela; Kwon, Jayoun; Amrein-Beardsley, Audrey

    2017-01-01

    Purpose: The purpose of this document analysis study was to examine current teacher evaluation systems, understand current practices, and determine whether the instrumentation is a valid measure of teaching quality as reflected in teacher behavior and effectiveness specific to physical education (PE). Method: An interpretive document analysis…

  12. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelet transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  13. Image analysis for material characterisation

    Science.gov (United States)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range; a light microscope is used to image them. A novel segmentation method is proposed, which allows agglomerated crystals to be separated. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.
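
    A minimal sketch of the multiscale texture-feature idea, assuming PyWavelets; averaging the horizontal and vertical detail energies is one simple rotation-tolerant heuristic and may differ from the thesis's actual feature transformation:

```python
import numpy as np
import pywt

def texture_features(image, wavelet="db2", level=3):
    """Multiscale texture descriptor: per-level wavelet subband energies.

    Averaging the horizontal and vertical detail energies reduces
    sensitivity to 90-degree rotations; such feature vectors could then
    feed a Learning Vector Quantisation classifier.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    energy = lambda c: float(np.mean(np.asarray(c) ** 2))
    feats = []
    for cH, cV, cD in coeffs[1:]:
        feats += [(energy(cH) + energy(cV)) / 2.0, energy(cD)]
    return np.array(feats)

print(texture_features(np.random.rand(128, 128)))
```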

  14. Analysis of Documentation Speed Using Web-Based Medical Speech Recognition Technology: Randomized Controlled Trial.

    Science.gov (United States)

    Vogel, Markus; Kaisers, Wolfgang; Wassmuth, Ralf; Mayatepek, Ertan

    2015-11-03

    Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area, and the time to complete the documentation including all necessary corrections, the correction effort, the number of characters, and the mood of the participant were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. The number of clinical reports eligible for further analysis was 1455. Of these, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted by ASR.
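
    The abstract states that the statistics were computed with permutation tests; a generic two-sample permutation test on the difference of means is sketched below (the sample sizes, means, and SDs echo the abstract, but the generated data are synthetic):

```python
import numpy as np

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference of sample means."""
    rng = np.random.default_rng(seed)
    observed = np.mean(x) - np.mean(y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)   # p-value with add-one correction

rng = np.random.default_rng(1)
asr    = rng.normal(217, 120, 718)   # chars/min with ASR (synthetic)
no_asr = rng.normal(173, 101, 737)   # chars/min without ASR (synthetic)
print(permutation_test(asr, no_asr))
```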

  15. Organ donation in the ICU: A document analysis of institutional policies, protocols, and order sets.

    Science.gov (United States)

    Oczkowski, Simon J W; Centofanti, John E; Durepos, Pamela; Arseneau, Erika; Kelecevic, Julija; Cook, Deborah J; Meade, Maureen O

    2018-04-01

    To better understand how local policies influence organ donation rates, we conducted a document analysis of our ICU organ donation policies, protocols, and order sets. We used a systematic search of our institution's policy library to identify documents related to organ donation, MindNode software to create a publication timeline, basic statistics to describe document characteristics, and qualitative content analysis to extract document themes. Documents were retrieved from Hamilton Health Sciences, an academic hospital system with a high volume of organ donation, from database inception to October 2015. We retrieved 12 active organ donation documents, including six protocols, two policies, two order sets, and two unclassified documents, the majority (75%) published after the introduction of donation after circulatory death in 2006. Four major themes emerged: organ donation process, quality of care, patient and family-centred care, and the role of the institution. These themes indicate areas where documented institutional standards may be beneficial. Further research is necessary to determine the relationship of local policies, protocols, and order sets to actual organ donation practices, and to identify barriers and facilitators to improving donation rates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Electronic Document Imaging and Optical Storage Systems for Local Governments: An Introduction. Local Government Records Technical Information Series. Number 21.

    Science.gov (United States)

    Schwartz, Stanley F.

    This publication introduces electronic document imaging systems and provides guidance for local governments in New York in deciding whether such systems should be adopted for their own records and information management purposes. It advises local governments on how to develop plans for using such technology by discussing its advantages and…

  17. Standardized cine-loop documentation in abdominal ultrasound facilitates offline image interpretation.

    Science.gov (United States)

    Dormagen, Johann Baptist; Gaarder, Mario; Drolsum, Anders

    2015-01-01

    One of the main disadvantages of conventional ultrasound is its operator dependency, which might impede the reproducibility of sonographic findings. A new approach with cine-loops and standardized scan protocols can overcome this drawback. The aim was to compare abdominal ultrasound findings from immediate bedside reading by the performing radiologist with offline reading by a non-performing radiologist, using standardized cine-loop sequences. Over a 6-month period, three radiologists performed 140 dynamic organ-based ultrasound examinations in 43 consecutive outpatients. Examination protocols were standardized and included predefined probe positions and sequences of short cine-loops of the liver, gallbladder, pancreas, kidneys, and urinary bladder, covering the organs completely in two planes. After the bedside examinations, the studies were reviewed and read out immediately by the performing radiologist. Image quality was rated from 1 (no diagnostic value) to 5 (excellent cine-loop quality). Offline reading was performed blinded by a radiologist who had not performed the examination. Bedside and offline readings were compared with each other and with consensus results. In the 140 examinations, consensus reading revealed 21 cases of renal disorders, 17 cases of liver and bile pathology, and four cases of bladder pathology. Overall inter-observer agreement was 0.73 (95% CI 0.61-0.91), with the lowest agreement for findings of the urinary bladder (0.36) and the highest agreement in liver examinations (0.90). Disagreements between the two readings were seen in nine kidney examinations, three bladder examinations, one pancreas and one biliary system examination, and one liver examination, giving an overall mismatch rate of 11%. Nearly all mismatches were of minor clinical significance. The median image quality was 3 (range, 2-5), with most examinations rated 3. Compared to consensus reading, overall accuracy was 96% for bedside reading and 94% for offline reading. Standardized cine-loop documentation thus facilitates offline image interpretation.

  18. Public health human resources: a comparative analysis of policy documents in two Canadian provinces.

    Science.gov (United States)

    Regan, Sandra; MacDonald, Marjorie; Allan, Diane E; Martin, Cheryl; Peroff-Johnston, Nancy

    2014-02-24

    Amidst concerns regarding the capacity of the public health system to respond rapidly and appropriately to threats such as pandemics and terrorism, along with changing population health needs, governments have focused on strengthening public health systems. A key factor in a robust public health system is its workforce. As part of a nationally funded study of public health renewal in Canada, a policy analysis was conducted to compare public health human resources-relevant documents in two Canadian provinces, British Columbia (BC) and Ontario (ON), as they each implement public health renewal activities. A content analysis of policy and planning documents from government and public health-related organizations was conducted by a research team comprised of academics and government decision-makers. Documents published between 2003 and 2011 were accessed (BC = 27; ON = 20); documents were either publicly available or internal to government and excerpted with permission. Documentary texts were deductively coded using a coding template developed by the researchers based on key health human resources concepts derived from two national policy documents. Documents in both provinces highlighted the importance of public health human resources planning and policies; this was particularly evident in early post-SARS documents. Key thematic areas of public health human resources identified were: education, training, and competencies; capacity; supply; intersectoral collaboration; leadership; public health planning context; and priority populations. Policy documents in both provinces discussed the importance of an educated, competent public health workforce with the appropriate skills and competencies for the effective and efficient delivery of public health services. This policy analysis identified progressive work on public health human resources policy and planning with early documents providing an inventory of issues to be addressed and later documents providing

  19. Public health human resources: a comparative analysis of policy documents in two Canadian provinces

    Science.gov (United States)

    2014-01-01

    Background Amidst concerns regarding the capacity of the public health system to respond rapidly and appropriately to threats such as pandemics and terrorism, along with changing population health needs, governments have focused on strengthening public health systems. A key factor in a robust public health system is its workforce. As part of a nationally funded study of public health renewal in Canada, a policy analysis was conducted to compare public health human resources-relevant documents in two Canadian provinces, British Columbia (BC) and Ontario (ON), as they each implement public health renewal activities. Methods A content analysis of policy and planning documents from government and public health-related organizations was conducted by a research team comprised of academics and government decision-makers. Documents published between 2003 and 2011 were accessed (BC = 27; ON = 20); documents were either publicly available or internal to government and excerpted with permission. Documentary texts were deductively coded using a coding template developed by the researchers based on key health human resources concepts derived from two national policy documents. Results Documents in both provinces highlighted the importance of public health human resources planning and policies; this was particularly evident in early post-SARS documents. Key thematic areas of public health human resources identified were: education, training, and competencies; capacity; supply; intersectoral collaboration; leadership; public health planning context; and priority populations. Policy documents in both provinces discussed the importance of an educated, competent public health workforce with the appropriate skills and competencies for the effective and efficient delivery of public health services. Conclusion This policy analysis identified progressive work on public health human resources policy and planning with early documents providing an inventory of issues to be

  20. Language Ideology or Language Practice? An Analysis of Language Policy Documents at Swedish Universities

    Science.gov (United States)

    Björkman, Beyza

    2014-01-01

    This article presents an analysis and interpretation of language policy documents from eight Swedish universities with regard to intertextuality, authorship and content analysis of the notions of language practices and English as a lingua franca (ELF). The analysis is then linked to Spolsky's framework of language policy, namely language…

  1. Planning applications in image analysis

    Science.gov (United States)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  2. Analysis of a risk prevention document using dependability techniques: a first step towards an effectiveness model

    Science.gov (United States)

    Ferrer, Laetitia; Curt, Corinne; Tacnet, Jean-Marc

    2018-04-01

    Major hazard prevention is a central challenge, given that it rests on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law imposes only a few requirements concerning their content, so one can question the impact on the general population of the way such a document is actually created. The purpose of our work is therefore to propose an analytical methodology for evaluating the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). A DICRIM must be produced by mayors and addressed to the public to provide information on the major hazards affecting their municipalities. An analysis of the document's compliance with the law is carried out by identifying regulatory detection elements, which are applied to a database of 30 DICRIMs. This analysis leads to a discussion of points such as the usefulness of the missing elements. External and internal function analysis permits identification of the form and content requirements, as well as the service and technical functions, of the document and its components (here, its sections). These results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define failures and to identify detection elements, permitting evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used to build, in future work, a decision support model for the municipality (or the specialised consulting firms) in charge of drawing up such documents.

  3. Analysis of a risk prevention document using dependability techniques: a first step towards an effectiveness model

    Directory of Open Access Journals (Sweden)

    L. Ferrer

    2018-04-01

    Major hazard prevention is a central challenge, given that it rests on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law imposes only a few requirements concerning their content, so one can question the impact on the general population of the way such a document is actually created. The purpose of our work is therefore to propose an analytical methodology for evaluating the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). A DICRIM must be produced by mayors and addressed to the public to provide information on the major hazards affecting their municipalities. An analysis of the document's compliance with the law is carried out by identifying regulatory detection elements, which are applied to a database of 30 DICRIMs. This analysis leads to a discussion of points such as the usefulness of the missing elements. External and internal function analysis permits identification of the form and content requirements, as well as the service and technical functions, of the document and its components (here, its sections). These results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define failures and to identify detection elements, permitting evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used to build, in future work, a decision support model for the municipality (or the specialised consulting firms) in charge of drawing up such documents.

  4. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  5. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  6. Automated image analysis of uterine cervical images

    Science.gov (United States)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality among women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and, most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of the cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.
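
    The published system involves device colour calibration, anatomical segmentation, and multi-level acetowhite grading; the toy sketch below illustrates only the core cue, flagging pixels that brighten and desaturate between registered pre- and post-acetic-acid images (the threshold is an arbitrary assumption):

```python
import numpy as np

def acetowhite_mask(pre, post, thresh=0.15):
    """Toy acetowhite detector for registered RGB images scaled to [0, 1]:
    flag pixels that get brighter and less saturated after acetic acid."""
    def lightness(img):                 # crude luma approximation
        return img @ np.array([0.299, 0.587, 0.114])

    def saturation(img):                # max-min channel spread
        return img.max(axis=2) - img.min(axis=2)

    brighter = lightness(post) - lightness(pre) > thresh
    whiter = saturation(post) < saturation(pre)
    return brighter & whiter

pre  = np.random.rand(64, 64, 3) * 0.5
post = pre.copy()
post[20:30, 20:30] = 0.9                 # a patch turns acetowhite
print(acetowhite_mask(pre, post).sum())  # ~100 flagged pixels
```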

  7. The practical implementation of integrated safety management for nuclear safety analysis and fire hazards analysis documentation

    International Nuclear Information System (INIS)

    COLLOPY, M.T.

    1999-01-01

    The integrated safety management system approach provides a uniform and consistent process: a method has been suggested by the U.S. Department of Energy at Richland and the Project Hanford Procedures for cases when fire hazard analyses and safety analyses are required. This process provides a common-basis approach to the development of the fire hazard analysis and the safety analysis, and permits the preparers of both documents to participate jointly in the hazard analysis process. This paper presents this method of implementing the integrated safety management approach in the development of the fire hazard analysis and the safety analysis, providing consistency of assumptions, consequences, design considerations, and other controls necessary to protect workers, the public, and the environment.

  8. Essential issues in the design of shared document/image libraries

    Science.gov (United States)

    Gladney, Henry M.; Mantey, Patrick E.

    1990-08-01

    We consider what is needed to create electronic document libraries which mimic physical collections of books, papers, and other media. The quantitative measures of merit for personal workstations (cost, speed, size of volatile and persistent storage) will improve by at least an order of magnitude in the next decade. Every professional worker will be able to afford a very powerful machine, but databases and libraries are not really economical and useful unless they are shared. We therefore see a two-tier world emerging, in which custodians of information make it available to network-attached workstations. A client-server model is the natural description of this world. In collaboration with several state governments, we have considered what would be needed to replace paper-based record management for a dozen different applications. We find that a professional worker can anticipate most data needs and that (s)he is interested in each clump of data for a period of days to months. We further find that only a small fraction of any collection will be used in any period. Given expected bandwidths, data sizes, search times and costs, and other such parameters, an effective strategy to support user interaction is to bring large clumps from their sources, to transform them into convenient representations, and only then start whatever investigation is intended. A system-managed hierarchy of caches and archives is indicated. Each library is a combination of a catalog and a collection, and each stored item has a primary instance which is the standard by which the correctness of any copy is judged. Catalog records mostly refer to 1 to 3 stored items. Weighted by the number of bytes to be stored, immutable data dominate collections. These characteristics affect how consistency, currency, and access control of replicas distributed in the network should be managed. We present the main features of a design for network document/image library services. A prototype is being built for

  9. A secret-sharing-based method for authentication of grayscale document images via the use of the PNG image with a data repair capability.

    Science.gov (United States)

    Lee, Che-Wei; Tsai, Wen-Hsiang

    2012-01-01

    A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
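
    The repair capability rests on Shamir's scheme: any k shares reconstruct a block's secret by Lagrange interpolation. A minimal (k = 2) sketch over a small prime field follows; the paper's field size, share count, and alpha-channel mapping are not reproduced here:

```python
import random

PRIME = 257  # small prime field for illustration only

def make_shares(secret, n, k=2, rng=random.Random(42)):
    """Shamir (k, n) sharing of one block's authentication value."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; the k = 2 case needs any 2 shares."""
    (x1, y1), (x2, y2) = shares[:2]
    inv = pow((x2 - x1) % PRIME, PRIME - 2, PRIME)   # modular inverse
    slope = ((y2 - y1) * inv) % PRIME
    return (y1 - slope * x1) % PRIME

shares = make_shares(123, n=4)           # embed these in other blocks' alpha
assert reconstruct(shares[2:4]) == 123   # any two shares repair the block
```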

  10. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined, to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign ...

  11. Energy analysis handbook. CAC document 214. [Combining process analysis with input-output analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bullard, C. W.; Penner, P. S.; Pilati, D. A.

    1976-10-01

    Methods are presented for calculating the energy required, directly and indirectly, to produce all types of goods and services. Procedures for combining process analysis with input-output analysis are described. This enables the analyst to focus data acquisition cost-effectively, and to achieve a specified degree of accuracy in the results. The report presents sample calculations and provides the tables and charts needed to perform most energy cost calculations, including the cost of systems for producing or conserving energy.
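
    The input-output half of such a hybrid analysis reduces to computing total (direct plus indirect) energy intensities via the Leontief inverse, eps = E(I - A)^(-1). A toy two-sector example with made-up numbers:

```python
import numpy as np

A = np.array([[0.1, 0.2],     # inter-industry requirements per unit output
              [0.3, 0.1]])
E = np.array([5.0, 1.0])      # direct energy per unit output of each sector

eps = E @ np.linalg.inv(np.eye(2) - A)   # total embodied energy intensities
print(eps)   # energy per unit of final demand delivered by each sector
```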

  12. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and describing some probable future trends in this important area of ultrasonic imaging research.

  13. Vaccine Images on Twitter: Analysis of What Images are Shared.

    Science.gov (United States)

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
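
    A sketch of the modelling step the abstract describes, fitting a logistic regression from image-derived features to the retweet outcome; the feature names and synthetic data are illustrative stand-ins for the paper's automated image analytics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: [sentiment score, contains embedded text, number of faces]
X = np.column_stack([rng.uniform(-1, 1, 1000),
                     rng.integers(0, 2, 1000),
                     rng.poisson(0.5, 1000)])
# synthetic outcome loosely tied to image sentiment
y = (X[:, 0] + 0.3 * rng.standard_normal(1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_)   # which characteristics correlate with being retweeted
```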

  14. Vaccine Images on Twitter: Analysis of What Images are Shared

    Science.gov (United States)

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  15. Violência contra os idosos: análise documental Violencia contra los ancianos: análisis documental Violence against the aged: documental analysis

    Directory of Open Access Journals (Sweden)

    Jacy Aurélia Vieira de Souza

    2007-06-01

    The objective of this study was to analyze data on violence and mistreatment of the elderly, drawing on official documents in Fortaleza-CE. It is a retrospective, documentary study carried out at a reference institution in Ceará that officially registers reports of violence against the elderly. Data were collected in the first half of 2005. Of the 424 documents analyzed, 284 (67%) were identified as abandonment of the elderly person. As for the aggressor, in 207 cases (49%) it was the victim's own child. Among the types of violence, 161 (38%) involved negligence, followed by misappropriation of retirement pensions, 114 (27%); verbal aggression, 79 (19%); and physical aggression, 68 (16%). These events were registered through complaints, mainly to the Alô-Idoso service, 306 (77%). The findings confirm the importance of services dedicated to this issue; nevertheless, it is fundamental that public policies address the social role of the elderly and give priority to the care and protection of this segment of the population within their families and institutions.

  16. Issues and Images: New Yorkers during the Thirties. A Teaching Packet of Historical Documents.

    Science.gov (United States)

    New York State Education Dept., Albany. Cultural Education Center.

    Derived from an exhibit produced cooperatively by the New York State Archives and the New York State Museum for the Franklin D. Roosevelt Centennial, and designed to provide secondary students with first-hand exposure to New York during the Great Depression, this packet contains a teacher's guide and 22 facsimile documents, including historic…

  17. Introduction to the Multifractal Analysis of Images

    OpenAIRE

    Lévy Véhel , Jacques

    1998-01-01

    After a brief review of some classical approaches in image segmentation, the basics of multifractal theory and its application to image analysis are presented. Practical methods for multifractal spectrum estimation are discussed and some experimental results are given.
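
    To make the box-counting flavour of the estimation concrete, here is a minimal sketch of the generalized dimensions D_q of a normalized grayscale image; the practical spectrum estimators discussed in the paper are considerably more refined:

```python
import numpy as np

def generalized_dimensions(image, qs=(0.0, 1.0001, 2.0), sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of D_q = tau(q) / (q - 1), where tau(q) is
    the slope of log sum(mu_i^q) versus log(box size). q = 1 is replaced
    by a nearby value to avoid the 0/0 limit."""
    mu = image / image.sum()
    dims = {}
    for q in qs:
        log_z, log_eps = [], []
        for s in sizes:
            h, w = (mu.shape[0] // s) * s, (mu.shape[1] // s) * s
            boxes = mu[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            boxes = boxes[boxes > 0]
            log_z.append(np.log((boxes ** q).sum()))
            log_eps.append(np.log(s / mu.shape[0]))
        tau = np.polyfit(log_eps, log_z, 1)[0]
        dims[q] = tau / (q - 1)
    return dims

print(generalized_dimensions(np.random.rand(256, 256)))  # ~2.0 everywhere
```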

  18. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  19. Documentation and Analysis of Children's Experience: An Ongoing Collegial Activity for Early Childhood Professionals

    Science.gov (United States)

    Picchio, Mariacristina; Giovannini, Donatella; Mayer, Susanna; Musatti, Tullia

    2012-01-01

    Systematic documentation and analysis of educational practice can be a powerful tool for continuous support to the professionalism of early childhood education practitioners. This paper discusses data from a three-year action-research initiative carried out by a research agency in collaboration with a network of Italian municipal "nido"…

  20. The Role of Business Agreements in Defining Textbook Affordability and Digital Materials: A Document Analysis

    Science.gov (United States)

    Raible, John; deNoyelles, Aimee

    2015-01-01

    Adopting digital materials such as eTextbooks and e-coursepacks is a potential strategy to address textbook affordability in the United States. However, university business relationships with bookstore vendors implicitly structure which instructional resources are available and in what manner. In this study, a document analysis was conducted on…

  1. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Wüller, Dietmar; Kejser, Ulla Bøgvad

    2016-01-01

    There are a variety of image quality analysis tools available for the archiving world, which are based on different test charts and analysis algorithms. ISO formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing the image quality of archiving systems. This has resulted in three documents that have been or will soon be published. ISO 19262 defines the terms used in the area of image capture to unify the language. ISO 19263 describes the workflow issues and provides detailed information on how the measurements are done. Last but not least, ISO 19264 describes the measurements in detail and provides aims and tolerance levels for the different aspects. This paper will present the new ISO 19264 technical specification to analyze image quality based on a single capture of a multi-pattern test chart, and discuss the reasoning behind its...

  2. Comparisons of images simultaneously documented by digital subtraction coronary arteriography and cine coronary arteriography

    International Nuclear Information System (INIS)

    Kimura, Koji; Takamiya, Makoto; Yamamoto, Kazuo; Ohta, Mitsushige; Naito, Hiroaki

    1988-01-01

    Using an angiography apparatus capable of simultaneously processing digital subtraction angiograms and cine angiograms, the diagnostic capabilities of the two methods for the coronary arteries (DSCAG and Cine-CAG) were compared. Twenty stenotic lesions of the coronary arteries of 11 patients were evaluated using both modalities. The severity of stenosis using DSCAG with a 512x512x8 bit matrix was semiautomatically measured on the cathode ray tube (CRT), based on enlarged images on the screen of a Vanguard cine projector which were of the same size as, or 10 times larger than, the images of Cine-CAG. The negative and positive hard copies of DSCAG images were also compared with those of Cine-CAG. The correlation coefficients of the severity of stenosis by DSCAG and Cine-CAG were as follows: (1) same-size DSCAG images on the CRT versus Cine-CAG, 0.95; (2) 10-times-enlarged DSCAG images on the CRT versus Cine-CAG, 0.96; and (3) same-size DSCAG images on negative and positive hard copies versus Cine-CAG, 0.97. The semiautomatically measured values of the 10-times-enlarged DSCAG images on the CRT and the manually measured values of the same-size negative and positive DSCAG images in hard copy closely correlated with the values measured using Cine-CAG. The diagnostic capabilities of DSCAG and Cine-CAG were also compared when the liver was superimposed in the long-axis projection; the materials included 10 left coronary arteriograms and 11 right coronary arteriograms. Diagnostically, DSCAG was more useful than Cine-CAG in the long-axis projection. (author)

  3. HWNet v2: An Efficient Word Image Representation for Handwritten Documents

    OpenAIRE

    Krishnan, Praveen; Jawahar, C. V.

    2018-01-01

    We present a framework for learning efficient holistic representation for handwritten word images. The proposed method uses a deep convolutional neural network with traditional classification loss. The major strengths of our work lie in: (i) the efficient usage of synthetic data to pre-train a deep network, (ii) an adapted version of ResNet-34 architecture with region of interest pooling (referred as HWNet v2) which learns discriminative features with variable sized word images, and (iii) rea...

  4. Similarity analysis between quantum images

    Science.gov (United States)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is so essential in quantum image processing that it provides a foundation for other fields, such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and the measurement result has a high probability of being 1 when two images are different.
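
    The measurement statistics described match those of a swap test, which can be simulated classically; the sketch below mirrors only those statistics, not the paper's quantum image representation or amplitude amplification circuit:

```python
import numpy as np

def prob_measure_one(img_a, img_b):
    """Swap-test outcome statistics for two images encoded as normalized
    amplitude vectors: P(1) = (1 - |<a|b>|^2) / 2, i.e. 0 for identical
    images and approaching 1/2 for orthogonal ones."""
    a = img_a.ravel() / np.linalg.norm(img_a)
    b = img_b.ravel() / np.linalg.norm(img_b)
    return 0.5 * (1.0 - abs(np.dot(a, b)) ** 2)

x = np.random.rand(8, 8)
y = np.random.rand(8, 8)
print(prob_measure_one(x, x))   # 0.0: measurement is always 0 for same images
print(prob_measure_one(x, y))   # > 0: '1' outcomes signal different images
```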

  5. An Implementation of Document Image Reconstruction System on a Smart Device Using a 1D Histogram Calibration Algorithm

    Directory of Open Access Journals (Sweden)

    Lifeng Zhang

    2014-01-01

    In recent years, smart devices equipped with imaging functions have spread widely among consumers, and it is very convenient to record information with them. For example, people can photograph one page of a book in a library or capture an interesting piece of news on a bulletin board while walking down the street. Sometimes, however, a single shot of the full area cannot provide sufficient resolution for OCR software or for human visual recognition, so people would prefer to take several partial character images of a readable size and then stitch them together efficiently. In this study, we propose a print document acquisition method using a device with a video camera. A one-dimensional histogram-based self-calibration algorithm is developed for calibration; because its calculation cost is low, it can be installed on a smartphone. Simulation results show that the calibration and stitching perform well.
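
    A crude stand-in for the one-dimensional histogram idea: estimate the vertical offset between two overlapping page snapshots by correlating their 1D row-projection profiles. The actual algorithm also handles exposure calibration and full two-dimensional stitching:

```python
import numpy as np

def vertical_offset(img_a, img_b):
    """Shift of img_b relative to img_a, from 1-D row-projection profiles."""
    pa = img_a.mean(axis=1)                  # mean intensity per row
    pb = img_b.mean(axis=1)
    pa, pb = pa - pa.mean(), pb - pb.mean()  # remove global brightness
    corr = np.correlate(pa, pb, mode="full")
    return corr.argmax() - (len(pb) - 1)

# synthetic check: b is the same page captured 30 rows lower
page = np.random.rand(300, 200)
a, b = page[0:200], page[30:230]
print(vertical_offset(a, b))   # expect 30
```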

  6. Content analysis to detect high stress in oral interviews and text documents

    Science.gov (United States)

    Thirumalainambi, Rajkumar (Inventor); Jorgensen, Charles C. (Inventor)

    2012-01-01

    A system of interrogation to estimate whether a subject of interrogation is likely experiencing high stress, emotional volatility and/or internal conflict in the subject's responses to an interviewer's questions. The system applies one or more of four procedures, a first statistical analysis, a second statistical analysis, a third analysis and a heat map analysis, to identify one or more documents containing the subject's responses for which further examination is recommended. Words in the documents are characterized in terms of dimensions representing different classes of emotions and states of mind, in which the subject's responses that manifest high stress, emotional volatility and/or internal conflict are identified. A heat map visually displays the dimensions manifested by the subject's responses in different colors, textures, geometric shapes or other visually distinguishable indicia.
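
    A toy version of the dimension-scoring step: count lexicon hits per emotional dimension and report their densities for a document. The word lists below are tiny illustrative stand-ins, not the patent's actual dimensions:

```python
# Illustrative emotional-dimension lexicons (stand-ins only).
LEXICON = {
    "stress":   {"pressure", "afraid", "worried", "can't", "must"},
    "conflict": {"but", "however", "deny", "never"},
}

def dimension_scores(text):
    """Fraction of a document's words falling in each dimension's lexicon."""
    words = [w.strip(".,;:!?\"()") for w in text.lower().split()]
    return {dim: sum(w in vocab for w in words) / max(len(words), 1)
            for dim, vocab in LEXICON.items()}

# densities per dimension could then be rendered as one row of a heat map
print(dimension_scores("I was worried, but I never felt pressure"))
```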

  7. FEASIBILITY STUDY OF LOW-COST IMAGE-BASED HERITAGE DOCUMENTATION IN NEPAL

    OpenAIRE

    Dhonju, H. K.; Xiao, W.; Sarhosis, V.; Mills, J. P.; Wilkinson, S.; Wang, Z.; Thapa, L.; Panday, U. S.

    2017-01-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g. earthquakes, flooding, rainfall etc), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling cultural heritage sites, and explores the practicality o...

  8. Analysis of laser and inkjet prints using spectroscopic methods for forensic identification of questioned documents

    OpenAIRE

    Gál, Lukáš; Belovičová, Michaela; Oravec, Michal; Palková, Miroslava; Čeppan, Michal

    2013-01-01

    The spectral properties in UV-VIS-NIR and IR regions of laser and inkjet prints were studied for the purposes of forensic analysis of documents. The procedures of measurements and processing of spectra of printed documents using fibre optics reflectance spectroscopy in UV-VIS and NIR region, FTIR-ATR with diamond/ZnSe and germanium crystals were optimized. It was found that the shapes of spectra of various black laser jet prints and inkjet prints generally differ in the spectral regions...

  9. The Analysis of Heterogeneous Text Documents with the Help of the Computer Program NUD*IST

    Directory of Open Access Journals (Sweden)

    Christine Plaß

    2000-12-01

    On the basis of a current research project we discuss the use of the computer program NUD*IST for the analysis and archiving of qualitative documents. Our project examines the social evaluation of spectacular criminal offenses and we identify, digitize and analyze documents from the entire 20th century. Since public and scientific discourses are examined, the data of the project are extraordinarily heterogeneous: scientific publications, court records, newspaper reports, and administrative documents. We want to show how to transfer general questions into a systematic categorization with the assistance of NUD*IST. Apart from the functions, possibilities and limitations of the application of NUD*IST, concrete work procedures and difficulties encountered are described. URN: urn:nbn:de:0114-fqs0003211

  10. Image analysis and microscopy: a useful combination

    Directory of Open Access Journals (Sweden)

    Pinotti L.

    2009-01-01

    Full Text Available The TSE Roadmap published in 2005 (DG for Health and Consumer Protection, 2005) suggests that short and medium term (2005-2009) amendments to control BSE policy should include “a relaxation of certain measures of the current total feed ban when certain conditions are met”. The same document noted “the starting point when revising the current feed ban provisions should be risk-based but at the same time taking into account the control tools in place to evaluate and ensure the proper implementation of this feed ban”. The clear implication is that adequate analytical methods to detect constituents of animal origin in feedstuffs are required. The official analytical method for the detection of constituents of animal origin in feedstuffs is the microscopic examination technique as described in Commission Directive 2003/126/EC of 23 December 2003 [OJ L 339, 24.12.2003, 78]. Although the microscopic method is usually able to distinguish fish from land animal material, it is often unable to distinguish between different terrestrial animals. Fulfillment of the requirements of Regulation 1774/2002/EC, laying down health rules concerning animal by-products not intended for human consumption, clearly implies that it must be possible to identify the origin of animal materials at higher taxonomic levels than in the past. Thus improvements in all methods of detecting constituents of animal origin are required, including the microscopic method. This article will examine the problem of meat and bone meal in animal feeds, and the use of microscopic methods in association with computer image analysis to identify the source species of these feedstuff contaminants. Image processing, integrated with morphometric measurements, can provide accurate and reliable results and can be a very useful aid to the analyst in the characterization, analysis and control of feedstuffs.

  11. [A new concept for integration of image databanks into a comprehensive patient documentation].

    Science.gov (United States)

    Schöll, E; Holm, J; Eggli, S

    2001-05-01

    Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.

  12. Image registration with uncertainty analysis

    Science.gov (United States)

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
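
A minimal sketch of the edge-matching search described above, assuming 2-D grayscale inputs; the Sobel-based edge detector, the threshold, and the exhaustive search window are illustrative choices rather than the patented method's specifics.

```python
# Minimal sketch of edge-based translational registration: detect edges in
# both images, then search a small window of shifts for the translation that
# maximizes the percentage of matched edge pixels. Thresholds and the search
# window are illustrative; np.roll wrap-around at borders is ignored here.
import numpy as np
from scipy import ndimage

def edge_map(img):
    """Binary edge map from a simple gradient-magnitude threshold."""
    g = img.astype(float)
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    return mag > mag.mean() + 2 * mag.std()

def best_translation(img1, img2, search=10):
    """Shift (dy, dx) of img1 maximizing the fraction of img2 edge pixels
    that coincide with img1 edge pixels."""
    e1, e2 = edge_map(img1), edge_map(img2)
    best, best_pct = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(e1, dy, axis=0), dx, axis=1)
            pct = (shifted & e2).sum() / max(e2.sum(), 1)
            if pct > best_pct:
                best, best_pct = (dy, dx), pct
    return best, best_pct
```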

  13. Forensic intelligence applied to questioned document analysis: A model and its application against organized crime.

    Science.gov (United States)

    De Alcaraz-Fossoul, Josep; Roberts, Katherine A

    2017-07-01

    The capability of forensic sciences to fight crime, especially against organized criminal groups, becomes relevant in the recent economic downturn and the war on terrorism. In view of these societal challenges, the methods of combating crime should experience critical changes in order to improve the effectiveness and efficiency of the current resources available. It is obvious that authorities have serious difficulties combating criminal groups of transnational nature. These are characterized as well structured organizations with international connections, abundant financial resources and comprised of members with significant and diverse expertise. One common practice among organized criminal groups is the use of forged documents that allow for the commission of illegal cross-border activities. Law enforcement can target these movements to identify counterfeits and establish links between these groups. Information on document falsification can become relevant to generate forensic intelligence and to design new strategies against criminal activities of this nature and magnitude. This article discusses a methodology for improving the development of forensic intelligence in the discipline of questioned document analysis. More specifically, it focuses on document forgeries and falsification types used by criminal groups. It also describes the structure of international criminal organizations that use document counterfeits as means to conduct unlawful activities. The model presented is partially based on practical applications of the system that have resulted in satisfactory outcomes in our laboratory. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  14. Analysis of Documents Published in Scopus Database on Foreign Language Learning through Mobile Learning: A Content Analysis

    Science.gov (United States)

    Uzunboylu, Huseyin; Genc, Zeynep

    2017-01-01

    The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and related content analysis among the qualitative research methodology. Through the search conducted on Scopus database with the key words "mobile learning and foreign language…

  15. Promotion of physical activity in the European region: content analysis of 27 national policy documents

    DEFF Research Database (Denmark)

    Daugbjerg, Signe B; Kahlmeier, Sonja; Racioppi, Francesca

    2009-01-01

    . Population groups most in need such as people with low levels of physical activity were rarely specifically targeted. Most policies emphasized the importance of an evaluation. However, only about half of them indicated a related intention or requirement. CONCLUSION: In recent years there has been......BACKGROUND: Over the past years there has been increasing interest in physical activity promotion and the development of appropriate policy. So far, there has been no comprehensive overview of the activities taking place in Europe in this area of public health policy. METHODS: Using different...... search methods, 49 national policy documents on physical activity promotion were identified. An analysis grid covering key features was developed for the analysis of the 27 documents published in English. RESULTS: Analysis showed that many general recommendations for policy developments are being...

  16. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
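
For reference, the linear, shift-invariant relations the record alludes to take the standard textbook form below: the image is the object convolved with the system's point spread function, and the transfer function multiplies the object spectrum in the frequency domain (notation assumed, not taken from the paper).

```latex
% Image formation for a linear, shift-invariant system: convolution with the
% point spread function h, and its frequency-domain equivalent.
g(x, y) = \iint f(x', y')\, h(x - x', y - y')\, dx'\, dy'
\quad\Longleftrightarrow\quad
G(u, v) = H(u, v)\, F(u, v),
\qquad \mathrm{MTF}(u, v) = \lvert H(u, v) \rvert
```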

  17. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  18. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  19. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy
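
MIST itself ships with its own interface; as a generic illustration of the segmentation task these three records describe (threshold, label connected components, collect per-ROI statistics), a minimal sketch using scipy.ndimage might look as follows. The threshold rule and minimum-size filter are arbitrary assumptions, not MIST's algorithm.

```python
# Generic ROI segmentation sketch (not MIST's actual interface): threshold a
# grayscale microscopy image, label connected components, and collect
# per-ROI sizes and centroids. Threshold and size filter are arbitrary.
import numpy as np
from scipy import ndimage

def segment_rois(img, min_size=20):
    """Return the label image and (label, size, centroid) per kept ROI."""
    mask = img > img.mean() + img.std()            # crude global threshold
    labels, n = ndimage.label(mask)                # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.where(sizes >= min_size)[0] + 1      # discard tiny specks
    centroids = ndimage.center_of_mass(mask, labels, keep)
    return labels, list(zip(keep, sizes[keep - 1], centroids))
```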

  20. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with the contents describing them best. In most cases accurate textual descriptions of the ROIs can be found in figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm, based on dynamic time warping (DTW), clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). Then a rule-based matching algorithm finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground-truth textual ROI data is used.
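
The clustering step above rests on a DTW distance between pointer feature sequences. A bare-bones version of that primitive, for 1-D feature sequences, is sketched below (illustrative only; the authors' feature extraction and grouping logic are not reproduced).

```python
# Bare-bones dynamic time warping (DTW) distance between two 1-D feature
# sequences -- the core primitive behind the pointer-clustering step.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```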

  1. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    International Nuclear Information System (INIS)

    Smith, W. Spencer; Koothoor, Mimitha

    2016-01-01

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification

  2. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W. Spencer; Koothoor, Mimitha [Computing and Software Department, McMaster University, Hamilton (Canada)

    2016-04-15

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  3. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
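
As a rough illustration of equalization restricted to a selected intensity range controlled by two parameters, here is a sketch for integer-valued (e.g. 8-bit) images; it shows the general idea only and is not the paper's granular-computing method.

```python
# Histogram equalization restricted to a chosen intensity range [lo, hi],
# the two controlling parameters; integer-valued (e.g. 8-bit) images assumed.
# Illustrates the general idea only, not the granular-computing method.
import numpy as np

def range_equalize(img, lo, hi):
    """Equalize pixels with intensities in [lo, hi]; leave the rest as-is."""
    out = img.copy()
    sel = (img >= lo) & (img <= hi)
    vals = img[sel]
    hist, _ = np.histogram(vals, bins=np.arange(lo, hi + 2))  # one bin/level
    cdf = hist.cumsum() / max(hist.sum(), 1)
    out[sel] = (lo + cdf[vals - lo] * (hi - lo)).astype(img.dtype)
    return out
```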

  4. Analysis of 3-D images

    Science.gov (United States)

    Wani, M. Arif; Batchelor, Bruce G.

    1992-03-01

    Deriving a generalized representation of 3-D objects for analysis and recognition is a very difficult task. Three types of representation, chosen according to the type of object, are used in this paper. Objects which have well-defined geometrical shapes are segmented using a fast edge-region-based segmentation technique. If the object parts are symmetrical about their central axis, the segmented image is represented by the plan and elevation of each part of the object. The plan-and-elevation concept enables such objects to be represented and analyzed quickly and efficiently. The second type of representation is used for objects having parts which are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects which do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed through features derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing, and a new merging technique yields the current best estimate of break points at each scale. This technique eliminates the loss of localization accuracy at coarser scales without resorting to a scale-space tracking approach.
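
The multiscale smoothing step mentioned above can be illustrated with scipy's Gaussian filter using different sigmas per axis; the scale list and axis ratio below are arbitrary assumptions, not the paper's parameters.

```python
# Anisotropic Gaussian smoothing at several scales: a different sigma per
# image axis, applied over a list of scales. Scale list and axis ratio are
# arbitrary choices for illustration, not the paper's parameters.
import numpy as np
from scipy import ndimage

def multiscale_smooth(img, scales=(1, 2, 4), ratio=0.5):
    """Return the image smoothed at each scale, with sigma_y = ratio*sigma_x."""
    g = img.astype(float)
    return [ndimage.gaussian_filter(g, sigma=(ratio * s, s)) for s in scales]
```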

  5. In situ analysis of historical documents through a portable system of X RF

    International Nuclear Information System (INIS)

    Ruvalcaba S, J.L.; Gonzalez T, C.

    2005-01-01

    From the analysis of documents and ancient books, the chronology of documents, the use of materials (paper, parchment, inks, pigments) and deterioration, among other aspects, may be determined. Usually it is difficult to bring the object to the laboratory for analysis, and it is not possible to take samples (even small portions). Given the importance of document characterization, it is necessary to carry out a diagnostic analysis at the library in order to establish the general nature of the materials (organic or inorganic), the main composition of inks and pigments, and actual and possible deterioration. From this point of view, X-ray fluorescence analysis (X RF) with a portable system may be used for quick, non-destructive elemental composition determinations. An X RF system was specially developed at the Physics Institute (UNAM) for these purposes, and it may be used outside the laboratory in libraries and museums. In this work, our X RF methodology is described and a study of the inks of manuscripts from the 15th and 16th centuries belonging to the National Anthropology and History Library is presented. (Author)

  6. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
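
Two of the listed DSP stages, white balance and auto gain control, have simple classical baselines. The sketch below shows a gray-world balance and a single global gain on (H, W, 3) uint8 frames; it is a generic illustration, not the authors' hardware pipeline.

```python
# Classical baselines for two of the listed DSP stages, on (H, W, 3) uint8
# frames: gray-world white balance and a single global auto gain.
# Generic illustration, not the authors' hardware pipeline.
import numpy as np

def gray_world_balance(rgb):
    """Scale each channel so the channel means match their common mean."""
    img = rgb.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def auto_gain(img, target_mean=128.0):
    """One global gain driving the frame mean toward a target level."""
    gain = target_mean / max(img.mean(), 1e-6)
    return np.clip(img.astype(float) * gain, 0, 255).astype(np.uint8)
```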

  7. DOCUMENTATION OF HISTORICAL UNDERGROUND OBJECT IN SKORKOV VILLAGE WITH SELECTED MEASURING METHODS, DATA ANALYSIS AND VISUALIZATION

    Directory of Open Access Journals (Sweden)

    A. Dlesk

    2016-06-01

    Full Text Available The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km from Prague. The area is known as a former military camp from the Thirty Years’ War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m under the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. A 3D model of the object was then created and combined with cadastral data, an orthophoto, historical maps and a digital surface model produced photogrammetrically using a remotely piloted aircraft system. Measurements were then taken with ground penetrating radar, analyzed, and compared with the real status. All the data have been combined and visualized in one 3D model. Finally, the advantages and disadvantages of the measuring methods used are discussed. The tested methodology has also been used for documentation of other historical objects in this area. This project was created as part of research at the EuroGV s.r.o. company led by Ing. Karel Vach CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from Czech Technical University in Prague and Miloš Gavenda, the renovator.

  8. Documentation of Historical Underground Object in Skorkov Village with Selected Measuring Methods, Data Analysis and Visualization

    Science.gov (United States)

    Dlesk, A.

    2016-06-01

    The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km from Prague. The area is known as a former military camp from the Thirty Years' War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m under the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. A 3D model of the object was then created and combined with cadastral data, an orthophoto, historical maps and a digital surface model produced photogrammetrically using a remotely piloted aircraft system. Measurements were then taken with ground penetrating radar, analyzed, and compared with the real status. All the data have been combined and visualized in one 3D model. Finally, the advantages and disadvantages of the measuring methods used are discussed. The tested methodology has also been used for documentation of other historical objects in this area. This project was created as part of research at the EuroGV s.r.o. company led by Ing. Karel Vach CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from Czech Technical University in Prague and Miloš Gavenda, the renovator.

  9. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Directory of Open Access Journals (Sweden)

    D. Abate

    2014-06-01

    Full Text Available The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an ongoing project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings – at the moment in a poor state of conservation – and the provision of metrics to quantify the deformations and damage.

  10. The role of records management professionals in optical disk-based document imaging systems in the petroleum industry

    International Nuclear Information System (INIS)

    Cisco, S.L.

    1992-01-01

    Analyses of the data indicated that nearly one third of the 83 companies in this study had implemented one or more document imaging systems. Companies with imaging systems mostly were large (more than 1,001 employees), and mostly were international in scope. Although records management professionals traditionally were delegated responsibility for acquiring, designing, implementing, and maintaining paper-based information systems and the records therein, when records were converted to optical disks, responsibility for acquiring, designing, implementing, and maintaining optical disk-based information systems and the records therein, was delegated more frequently to end user departments and IS/MIS/DP professionals than to records professionals. Records management professionals assert that the need of an organization for a comprehensive records management program is not served best when individuals who are not professional records managers are responsible for the records stored in optical disk-based information systems

  11. Qualitative analysis of national documents on health care services and pharmaceuticals' purchasing challenges: evidence from Iran.

    Science.gov (United States)

    Bastani, Peivand; Samadbeik, Mahnaz; Dinarvand, Rassoul; Kashefian-Naeeini, Sara; Vatankhah, Soudabeh

    2018-06-05

    The Iranian health sector has encountered many challenges in resource allocation and health service purchasing during the past decades; the aim of this study was to determine the main challenges of the present process of health service purchasing for national policymakers and for other developing countries with a similar setting. It was a qualitative study carried out via complete content analysis of all relevant national documents from 2007 to 2014. In order to retrieve the related documents, we searched the official websites of the Ministry of Health and Medical Education, the four main Iranian insurance organizations, the Health Committee of the Parliament, the strategic vice president's site and the Supreme Insurance Council. After identification of the documents, their credibility and authenticity were evaluated in terms of their publication or adjustment. For the analysis of the documents, the four-step Scott method was applied using MAXQDA version 10. Findings illustrated that health service purchasing challenges in the country can be classified into 6 main themes (policy-making, executive, intersectional, natural, legal and informational challenges) with 26 subthemes. Furthermore, 5 themes (Basic Benefit Package, Reimbursement, Decision making, Technology and Contract) are considered the main challenges in the pharmaceutical purchasing area, containing 13 relevant subthemes. According to the documents, Iran has faced many structural and procedural problems with purchasing the best health interventions. It is therefore highly recommended that policymakers consider the consequences of the present challenges and use this evidence in the policy-making process to reduce existing problems and move toward better procurement of health interventions.

  12. Applications of stochastic geometry in image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Kendall, W.S.; Molchanov, I.S.

    2009-01-01

    A discussion is given of various stochastic geometry models (random fields, sequential object processes, polygonal field models) which can be used in intermediate and high-level image analysis. Two examples are presented of actual image analysis problems (motion tracking in video,

  13. Global Nursing Issues and Development: Analysis of World Health Organization Documents.

    Science.gov (United States)

    Wong, Frances Kam Yuet; Liu, Huaping; Wang, Hui; Anderson, Debra; Seib, Charrlotte; Molasiotis, Alex

    2015-11-01

    To analyze World Health Organization (WHO) documents to identify global nursing issues and development. Qualitative content analysis. Documents published by the six WHO regions between 2007 and 2012 and with key words related to nurse/midwife or nursing/midwifery were included. Themes, categories, and subcategories were derived. The final coding reached 80% agreement among three independent coders, and final coding for discrepant items was reached by consensus. Thirty-two documents from the regions of Europe (n = 19), the Americas (n = 6), the Western Pacific (n = 4), Africa (n = 1), the Eastern Mediterranean (n = 1), and Southeast Asia (n = 1) were examined. A total of 385 units of analysis dispersed in 31 subcategories under four themes were derived. The four themes derived (number of units of analysis, %) were Management & Leadership (206, 53.5), Practice (75, 19.5), Education (70, 18.2), and Research (34, 8.8). The key nursing issues of concern at the global level are workforce, the impacts of nursing in health care, professional status, and education of nurses. International alliances can help advance nursing, but the visibility of nursing in the WHO needs to be strengthened. Organizational leadership is important in order to optimize the use of nursing competence in practice and inform policy makers regarding the value of nursing to promote people's health. © 2015 Sigma Theta Tau International.

  14. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    Science.gov (United States)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they became a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. As such, the

  15. Data management, documentation and analysis systems in radiation oncology: a multi-institutional survey

    International Nuclear Information System (INIS)

    Kessel, Kerstin A.; Combs, Stephanie E.

    2015-01-01

    Recently, information availability has become more elaborate and widespread, and treatment decisions are based on a multitude of factors. Gathering relevant data, also referred to as Big Data, is therefore critical for reaching the best patient care, and enhancing interdisciplinary and clinical research. Combining patient data from all involved systems is essential to prepare unstructured data for analyses. This demands special coordination in data management. Our study aims to characterize current developments in German-speaking hospital departments and practices. We successfully conducted the survey with the members of the Deutsche Gesellschaft für Radioonkologie (DEGRO). A questionnaire was developed consisting of 17 questions related to data management, documentation and clinical trial analyses, reflecting clinical topics such as basic patient information, imaging and follow-up information, as well as the connection of documentation tools with radiooncological treatment planning machines. A total of 44 institutions completed the online survey (University hospitals n = 17, hospitals n = 13, practices/institutes n = 14). University hospitals, community hospitals and private practices are equally equipped concerning IT infrastructure for clinical use. However, private practices have a low interest in research work. All respondents stated that the biggest obstacles to introducing a documentation system in their unit lie in funding and support from central IT departments. Only 27 % (12/44) of responsible persons are specialists for documentation and data management. Our study gives an understanding of the challenges and solutions we need to be looking at for medical data storage. In the future, inter-departmental cross-links will enable the radiation oncology community to generate large-scale analyses. The online version of this article (doi:10.1186/s13014-015-0543-0) contains supplementary material, which is available to authorized users.

  16. ON CURRICULAR PROPOSALS OF THE PORTUGUESE LANGUAGE: A DOCUMENT ANALYSIS IN JUIZ DE FORA (MG

    Directory of Open Access Journals (Sweden)

    Tânia Guedes MAGALHÃES

    2014-12-01

    Full Text Available This paper, whose objective is to analyze two curricular proposals of Portuguese from the Juiz de Fora City Hall (2001 and 2012), is an extract from research entitled “On text genres and teaching: a collaborative research with teachers of Portuguese” (2011/2013). Text genres have been suggested by curricular proposals as a central object for teachers who work with Portuguese language teaching; for this, it is relevant to analyze the documents in the realm of the ongoing research. As theoretical references, we used authors who propose a didactic model based on the development of language skills and linguistic reasoning (MENDONÇA, 2006), which in turn are based on an interactional conception of language (BRONCKART, 1999; SCHNEUWLY; DOLZ, 2004). Document analysis was used as methodology, which envisions assessment of pieces of information in documents as well as their outcomes. The data show that the 2012 curricular proposal is more adequate to Portuguese language teaching than the first one, mainly for its theoretical and methodological grounding, which emphasizes the development of students’ linguistic and discursive skills. Guided by an interactionist notion – unlike the norm-centered 2001 proposal – the 2012 document fosters the development of linguistic reasoning and usage skills.

  17. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  18. SIMA: Python software for analysis of dynamic fluorescence imaging data

    Directory of Open Access Journals (Sweden)

    Patrick eKaifosh

    2014-09-01

    Full Text Available Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
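
Conceptually, the extraction stage reduces to averaging fluorescence inside each ROI mask for every frame. The sketch below shows that step with plain numpy on a synthetic stack; it deliberately does not use SIMA's actual API, which should be consulted at the URL above.

```python
# Conceptual version of the signal-extraction step with plain numpy: mean
# fluorescence inside each ROI mask, per frame. Not SIMA's actual API.
import numpy as np

def extract_signals(stack, roi_masks):
    """stack: (frames, h, w); roi_masks: boolean (h, w) arrays.
    Returns an (n_rois, frames) array of per-ROI mean intensities."""
    return np.array([stack[:, m].mean(axis=1) for m in roi_masks])

# Synthetic example: a 100-frame movie with one circular ROI.
t, h, w = 100, 64, 64
stack = np.random.rand(t, h, w)
yy, xx = np.mgrid[:h, :w]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
traces = extract_signals(stack, [roi])     # shape (1, 100)
```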

  19. Multi-Source Image Analysis.

    Science.gov (United States)

    1979-12-01

    These collections were taken to show the advantages made available to the interpreter. In a military operation, however, often little or no in-situ ... The large body of water labeled "W" on each image represents the Agua Hedionda lagoon. East of the lagoon the area is primarily agricultural with a ... power plant located in the southeast corner of the image. West of the Agua Hedionda lagoon is Carlsbad, California. Damp ground is labelled "Dg" on the

  20. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  1. [Photography as analysis document, body and medicine: theory, method and criticism--the experience of Museo Nacional de Medicina Enrique Laval].

    Science.gov (United States)

    Robinson, César Leyton; Caballero, Andrés Díaz

    2007-01-01

    This article is an experimental methodological reflection on the use of medical images as useful documents for constructing the history of medicine. A method is used that is based on guidelines or analysis topics that include different ways of viewing documents, from aesthetic, technical, social and political theories to historical and medical thinking. Some exercises are also included that enhance the proposal for the reader: rediscovering the worlds in society that harbor these medical photographic archives to obtain a new theoretical approach to the construction of the history of medical science.

  2. INTEGRATED IMAGING APPROACHES SUPPORTING THE EXCAVATION ACTIVITIES. MULTI-SCALE GEOSPATIAL DOCUMENTATION IN HIERAPOLIS (TK

    Directory of Open Access Journals (Sweden)

    A. Spanò

    2018-05-01

    Full Text Available The paper explores the suitability and applicability of advanced integrated surveying techniques, mainly image-based approaches compared with and integrated into range-based ones, developed using cutting-edge solutions tested in the field. The investigated techniques combine technological devices for 3D data acquisition with editing and management systems that handle metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during archaeological excavation activities. These approaches were tested at the outstanding site of the ancient city of Hierapolis of Phrygia (Turkey), following the 2017 surveying missions, to produce large-scale metric deliverables in the form of highly detailed Digital Surface Models (DSMs), 3D continuous surface models and high-resolution orthoimage products. In particular, the potential of UAV platforms for low-altitude acquisitions in an aerial photogrammetric approach, together with terrestrial panoramic acquisitions (Trimble V10 imaging rover), was investigated in comparison with consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by the technologies used independently and in integrated approaches. A section of the study is specifically dedicated to experimenting with the fusion of dense clouds from different sensors: dense clouds derived from UAV imagery were integrated with terrestrial Lidar clouds to evaluate their fusion. Different test cases were considered, representing typical situations encountered in archaeological sites.

  3. Analysis and classification of oncology activities on the way to workflow based single source documentation in clinical information systems.

    Science.gov (United States)

    Wagner, Stefan; Beckmann, Matthias W; Wullich, Bernd; Seggewies, Christof; Ries, Markus; Bürkle, Thomas; Prokosch, Hans-Ulrich

    2015-12-22

    Today, cancer documentation is still a tedious task involving many different information systems, even within a single institution, and it is rarely supported by appropriate documentation workflows. In a comprehensive 14-step analysis we compiled diagnostic and therapeutic pathways for 13 cancer entities using a mixed approach of document analysis, workflow analysis, expert interviews, workflow modelling and feedback loops. These pathways were classified and categorized stepwise to create a final set of grouped pathways and workflows, including electronic documentation forms. A total of 73 workflows for the 13 entities, based on 82 paper documentation forms in addition to computer-based documentation systems, were compiled in a 724-page document comprising 130 figures, 94 tables, 23 tumour classifications and 12 follow-up tables. Stepwise classification made it possible to derive grouped diagnostic and therapeutic pathways for three major classes: solid entities with surgical therapy, solid entities with surgical and additional therapeutic activities, and non-solid entities. For these classes it was possible to deduce common documentation workflows to support workflow-guided single-source documentation. Clinical documentation activities within a Comprehensive Cancer Center can likely be realized in a set of three documentation workflows with conditional branching in a modern workflow-supporting clinical information system.

  4. In-service documentation tools and statements on palliative sedation in Germany--do they meet the EAPC framework recommendations? A qualitative document analysis.

    Science.gov (United States)

    Stiel, Stephanie; Heckel, Maria; Christensen, Britta; Ostgathe, Christoph; Klein, Carsten

    2016-01-01

    Numerous (inter-)national guidelines and frameworks have been developed to provide recommendations for the application of palliative sedation (PS). However, they are still not widely known, and large variations in PS clinical practice can be found. This study aims to collect and describe contents from documents used in clinical practice and to compare to what extent they match the European Association for Palliative Care (EAPC) framework recommendations. In a national survey on PS in Germany in 2012, participants were asked to upload their in-service templates, assessment tools, specific protocols, and in-service statements for the application and documentation of PS. These documents were analyzed using systematic structured content analysis. Three hundred and seven content units from 52 provided documents were coded. The analyzed templates are very heterogeneous and also contain items not mentioned in the EAPC framework. Among 11 scales for the evaluation of sedation level, the Ramsay Sedation Score (n = 5) and the Richmond Agitation-Sedation Scale (n = 2) were found most often. For symptom assessment, three different scales were each provided once. In all six PS statements, the common core elements were possible indications for PS, instructions on dose titration, patient monitoring, and care. Wide congruency exists for physical and psychological indications. Most documents agree on midazolam as a preferred drug and on basic monitoring at regular intervals. Aspects such as pre-emptive discussion of the potential role of sedation, informational needs of relatives, and care for the medical professionals are mentioned rarely. The analyzed templates neglect some points of the EAPC recommendations; however, they expand the ten-point scheme of the framework in some details. The findings may facilitate the development of a standardized consensus documentation and monitoring draft as an operational statement.

  5. Guidance document for preparing water sampling and analysis plans for UMTRA Project sites. Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    A water sampling and analysis plan (WSAP) is prepared for each Uranium Mill Tailings Remedial Action (UMTRA) Project site to provide the rationale for routine ground water sampling at disposal sites and former processing sites. The WSAP identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the routine ground water monitoring stations at each site. This guidance document has been prepared by the Technical Assistance Contractor (TAC) for the US Department of Energy (DOE). Its purpose is to provide a consistent technical approach for sampling and monitoring activities performed under the WSAP and to provide a consistent format for the WSAP documents. It is designed for use by the TAC in preparing WSAPs and by the DOE, US Nuclear Regulatory Commission, state and tribal agencies, other regulatory agencies, and the public in evaluating the content of WSAPs.

  6. Forensic Analysis of Digital Image Tampering

    Science.gov (United States)

    2004-12-01

    analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is ... The software used to embed the hidden watermark is Steganography Software F5 version 11+, discussed in Section 2.2. [Recoverable figure captions from the source: Figure 2.2 – Example of invisible watermark using Steganography Software F5; Figure 2.3 – Example of copy-move image forgery; Original JPEG Image – 580 x 435 – 17.4]
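
The LSB watermarking named in this record is simple enough to sketch directly: hide bits in the least-significant bits of pixel values and read them back. The toy version below is illustrative only; the thesis itself used the F5 tool, whose embedding operates on JPEG DCT coefficients rather than spatial LSBs.

```python
# Toy least-significant-bit (LSB) embedding and extraction on a grayscale
# uint8 image. Illustrative only: the thesis used the F5 tool, whose
# embedding works on JPEG DCT coefficients, not spatial LSBs.
import numpy as np

def embed_lsb(img, bits):
    """Hide a 0/1 bit array in the LSBs of the first len(bits) pixels."""
    flat = img.flatten()                       # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(flat.dtype)
    return flat.reshape(img.shape)

def extract_lsb(img, n):
    """Recover the first n hidden bits."""
    return img.flatten()[:n] & 1
```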

  7. Corporate social responsibility and access to policy élites: an analysis of tobacco industry documents.

    Science.gov (United States)

    Fooks, Gary J; Gilmore, Anna B; Smith, Katherine E; Collin, Jeff; Holden, Chris; Lee, Kelley

    2011-08-01

    Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised. Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents, released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific; depending on factors such as policy élites' understanding of the credibility of companies as a reliable source of information. The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policy makers to be made more aware of what companies hope to achieve through CSR.

  8. Corporate social responsibility and access to policy élites: an analysis of tobacco industry documents.

    Directory of Open Access Journals (Sweden)

    Gary J Fooks

    2011-08-01

    Full Text Available Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised. Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents, released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific, depending on factors such as policy élites' understanding of the credibility of companies as a reliable source of information. The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policy makers to be made more aware of what companies hope to achieve through CSR.

  9. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analysis methods based on geometric optics and physical optics is also shown in simulations. (paper)
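
The scalar diffraction machinery the record refers to is, in its standard Fresnel form, the propagation integral below (textbook notation, assumed rather than taken from the paper): the field at distance z is a convolution of the input field with a quadratic-phase kernel.

```latex
% Fresnel (scalar) diffraction: propagation of a field U over distance z.
U(x, y; z) = \frac{e^{ikz}}{i\lambda z}
  \iint U(x', y'; 0)\,
  \exp\!\left[\frac{ik}{2z}\left((x - x')^2 + (y - y')^2\right)\right] dx'\, dy',
\qquad k = \frac{2\pi}{\lambda}
```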

  10. Application programming interface document for the modernized Transient Reactor Analysis Code (TRAC-M)

    International Nuclear Information System (INIS)

    Mahaffy, J.; Boyack, B.E.; Steinke, R.G.

    1998-05-01

    The objective of this document is to ease the task of adding new system components to the Transient Reactor Analysis Code (TRAC) or altering old ones. Sufficient information is provided to permit replacement or modification of physical models and correlations. Within TRAC, information is passed at two levels. At the upper level, information is passed by system-wide and component-specific data modules at and above the level of component subroutines. At the lower level, information is passed through a combination of module-based data structures and argument lists. This document describes the basic mechanics involved in the flow of information within the code. The discussion of interfaces in the body of this document has been kept to a general level to highlight key considerations. The appendices cover instructions for obtaining a detailed list of variables used to communicate in each subprogram, definitions and locations of key variables, and proposed improvements to intercomponent interfaces that are not available in the first level of code modernization
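
    TRAC-M itself is Fortran, but the two-level information flow described above can be caricatured in a short Python sketch; every name here is invented for illustration:

        from dataclasses import dataclass, field

        @dataclass
        class SystemData:
            """Upper level: system-wide data shared at and above component subroutines."""
            time: float = 0.0
            dt: float = 1e-3

        @dataclass
        class PipeData:
            """Component-specific data module for one hypothetical component."""
            n_cells: int = 10
            pressure: list = field(default_factory=lambda: [1.0e5] * 10)

        def pipe_step(sys_data: SystemData, pipe: PipeData, inlet_flow: float) -> float:
            """Lower level: physics reached through data structures plus an argument
            list, so a replacement correlation never touches the calling code."""
            for i in range(pipe.n_cells):
                pipe.pressure[i] += inlet_flow * sys_data.dt   # placeholder model
            return pipe.pressure[-1]                           # outlet state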

  11. Internationalization Impact on PhD Training Policy in Russia: Insights from The Comparative Document Analysis

    Directory of Open Access Journals (Sweden)

    Oksana Chigisheva

    2017-09-01

    The relevance of the study is due to the need for an objective picture of the Russian third level tertiary education transformation driven by internationalization issues and global trends in education. The article provides an analytical comparative review of the official documents related to the main phases of education reform in Russia and focuses on the system of PhD training which has undergone significant reorganization in recent years. A series of alterations introduced into the theory and practice of postgraduate education in Russia are traced in regulatory documents and interpreted in terms of growing internationalization demand. Possible implications for further development of the research human potential in Russia are being discussed. The method of comparative document analysis produces the best possible insight into the subject. The findings of the study contribute to the understanding of current challenges facing the system of doctoral studies in Russia and lead to certain conclusions on the transformation of educational policy in relation to PhD training under the influence of internationalization agenda.

  12. Breast cancer histopathology image analysis : a review

    NARCIS (Netherlands)

    Veta, M.; Pluim, J.P.W.; Diest, van P.J.; Viergever, M.A.

    2014-01-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists.

  13. Multiplicative calculus in biomedical image analysis

    NARCIS (Netherlands)

    Florack, L.M.J.; Assen, van H.C.

    2011-01-01

    We advocate the use of an alternative calculus in biomedical image analysis, known as multiplicative (a.k.a. non-Newtonian) calculus. It provides a natural framework in problems in which positive images or positive definite matrix fields and positivity preserving operators are of interest.
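
    One concrete instance of a positivity-preserving, multiplicative operation is smoothing a strictly positive image in the logarithmic domain (a geometric-mean filter). A minimal sketch, not taken from the paper:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def geometric_smooth(image, size=5):
            """Multiplicative (geometric-mean) smoothing of a positive image:
            arithmetic averaging in log space preserves positivity exactly."""
            return np.exp(uniform_filter(np.log(image), size=size))

        rng = np.random.default_rng(0)
        positive_image = np.exp(rng.normal(size=(64, 64)))  # strictly positive
        smoothed = geometric_smooth(positive_image)
        assert (smoothed > 0).all()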

  14. Image analysis in x-ray cinefluorography

    Energy Technology Data Exchange (ETDEWEB)

    Ikuse, J; Yasuhara, H; Sugimoto, H [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1979-02-01

    For the cinefluorographic image in the cardiovascular diagnostic system, the image quality is evaluated by means of the MTF (modulation transfer function), and object contrast is evaluated by introducing the concept of X-ray spectrum analysis. On the basis of these results, the optimum X-ray exposure factors for cinefluorography and the cardiovascular diagnostic system are further investigated.
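
    In practice the MTF is often obtained by Fourier-transforming a measured line spread function; a generic sketch of that calculation (not the authors' procedure), with a synthetic Gaussian blur standing in for measured data:

        import numpy as np

        def mtf_from_lsf(lsf, dx):
            """Normalized MTF as the magnitude of the Fourier transform of a
            line spread function sampled every dx (e.g., mm per pixel)."""
            otf = np.fft.rfft(lsf / lsf.sum())        # unit-area normalization
            freqs = np.fft.rfftfreq(len(lsf), d=dx)   # cycles per mm
            return freqs, np.abs(otf)

        x = np.arange(-128, 128) * 0.01               # 0.01 mm sampling
        lsf = np.exp(-0.5 * (x / 0.1) ** 2)           # synthetic 0.1 mm blur
        freqs, mtf = mtf_from_lsf(lsf, dx=0.01)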

  15. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  16. Analysis of the recent international documents toward inclusive education of children with disabilities

    Directory of Open Access Journals (Sweden)

    Tabatabaie Minou

    2012-01-01

    Analysis of various international documents clearly suggests that international documents have provided significant motivation to efforts undertaken at the national level regarding the education of children with disabilities. The UN Convention on the Rights of the Child imposed a requirement for radical changes to traditional approaches to provision made for children with disabilities. One year later, the 1990 World Conference on Education for All focused attention on a much broader range of children with disabilities who may be excluded from or marginalized within education systems. Its development has involved a series of stages during which education systems have explored different ways of responding to children with disabilities and others who experience difficulties in learning. This conference declared that inclusive education is regarded as the only means to achieve the goal of "Education for All". This trend was reaffirmed by subsequent international documents. Finally, according to article 24 of the Convention on the Rights of Persons with Disabilities, disabled persons should be able to access general tertiary education, vocational training, adult education and lifelong learning without discrimination and on an equal basis with others through reasonable accommodation of their disabilities. All of these documents played an important role in bringing attention to children with disabilities, especially to education as a vehicle for integration and empowerment. This research examines the new international trends regarding the education of children with disabilities and concludes that the new trends show a movement from special education to inclusive education and from seclusion to inclusion, and provide that solutions must focus on prevention, cure and steps to make these children as normal as possible. In this regard, States must ensure the full realization of all human rights and fundamental freedoms for all disabled people, on an equal basis with others.

  17. ITER final design report, cost review and safety analysis (FDR) and relevant documents

    International Nuclear Information System (INIS)

    1999-01-01

    This volume contains the fourth major milestone report and documents associated with its acceptance, review and approval. This ITER Final Design Report, Cost Review and Safety Analysis was presented to the ITER Council at its 13th meeting in February 1998 and was approved at its extraordinary meeting on 25 June 1998. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics and schedule and cost estimates are given

  18. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 49, č. 2 (2011), s. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords: face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  19. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns for measuring the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle patterns. Analysis of the fringe pattern alone is limited to qualitative measurement in engineering applications; further analyses that lead to quantitative data therefore involve a series of image processing steps. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in material structures and locating fatigue zones, all of which require image processing. To optimise the image successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with kernel sizes ranging from 2 × 2 to 7 × 7 pixels and by applying an equalizer in the image processing. (Author)
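
    The abstract names a square filtering kernel (2 × 2 to 7 × 7 pixels) and an equalizer but not the exact filter; a minimal sketch assuming a median filter and histogram equalization as stand-ins:

        import numpy as np
        from scipy.ndimage import median_filter

        def enhance_fringes(fringe, kernel=5):
            """Suppress speckle noise with a square median kernel, then stretch
            fringe contrast with histogram equalization via the empirical CDF."""
            smoothed = median_filter(fringe, size=kernel)
            values, counts = np.unique(smoothed, return_counts=True)
            cdf = np.cumsum(counts) / counts.sum()
            return np.interp(smoothed, values, cdf)

        rng = np.random.default_rng(1)
        x = np.linspace(0, 8 * np.pi, 256)
        fringe = 0.5 + 0.5 * np.cos(x)[None, :] + rng.normal(0, 0.2, (256, 256))
        enhanced = enhance_fringes(fringe, kernel=5)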

  20. Structural analysis in medical imaging

    International Nuclear Information System (INIS)

    Dellepiane, S.; Serpico, S.B.; Venzano, L.; Vernazza, G.

    1987-01-01

    The conventional techniques in Pattern Recognition (PR) have been greatly improved by the introduction of Artificial Intelligence (AI) approaches, in particular for knowledge representation, inference mechanism and control structure. The purpose of this paper is to describe an image understanding system, based on the integrated approach (AI - PR), developed in the author's Department to interpret Nuclear Magnetic Resonance (NMR) images. The system is characterized by a heterarchical control structure and a blackboard model for the global data-base. The major aspects of the system are pointed out, with particular reference to segmentation, knowledge representation and error recovery (backtracking). The eye slices obtained in the case of two patients have been analyzed and the related results are discussed

  1. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.

  2. Malware analysis using visualized image matrices.

    Science.gov (United States)

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
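
    The exact pixel encoding is specific to the paper; the sketch below assumes a simple variant in which each opcode pair is hashed to a pixel coordinate and an RGB increment, with cosine similarity between the resulting matrices:

        import hashlib
        import numpy as np

        def opcode_image(opcodes, side=32):
            """Map an opcode sequence onto an RGB image matrix."""
            img = np.zeros((side, side, 3))
            for a, b in zip(opcodes, opcodes[1:]):
                h = hashlib.md5(f"{a}:{b}".encode()).digest()
                row, col = h[0] % side, h[1] % side
                img[row, col] += np.array(list(h[2:5]), dtype=float)  # RGB increment
            return img / max(img.max(), 1.0)

        def similarity(img_a, img_b):
            """Cosine similarity between flattened image matrices."""
            a, b = img_a.ravel(), img_b.ravel()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        trace_1 = ["push", "mov", "call", "mov", "ret"] * 40
        trace_2 = ["push", "mov", "call", "xor", "ret"] * 40
        print(similarity(opcode_image(trace_1), opcode_image(trace_2)))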

  3. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
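
    As a toy example of the kind of detection problem ANOVA handles, the following sketch (ours, not the book's) tests for a row-aligned feature by treating image rows as groups in a one-way ANOVA:

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(2)
        image = rng.normal(0.0, 1.0, (32, 32))
        image[16, :] += 2.0            # faint horizontal line buried in noise

        # A significant F statistic means at least one row mean differs,
        # i.e., evidence of a horizontal feature somewhere in the image.
        f_stat, p_value = f_oneway(*[image[r, :] for r in range(image.shape[0])])
        print(f"F = {f_stat:.2f}, p = {p_value:.3g}")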

  4. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, dynamic-range compression, neon, diffuse, emboss, etc., have been studied. Segmentation techniques, including point detection, line detection and edge detection, have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
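
    Two of the enhancement operations named above are simple point transforms; a minimal sketch of contrast stretching and negative imaging on a normalized gray-scale image:

        import numpy as np

        def contrast_stretch(img, low_pct=2, high_pct=98):
            """Linearly stretch intensities between two percentiles to [0, 1]."""
            lo, hi = np.percentile(img, [low_pct, high_pct])
            return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

        def negative(img):
            """Negative imaging: invert the intensities of a [0, 1] image."""
            return 1.0 - img

        rng = np.random.default_rng(3)
        dull = 0.4 + 0.1 * rng.random((64, 64))       # low-contrast test image
        stretched = contrast_stretch(dull)
        print(stretched.min(), stretched.max(), negative(stretched).mean())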

  5. FROM DOCUMENTATION IMAGES TO RESTAURATION SUPPORT TOOLS: A PATH FOLLOWING THE NEPTUNE FOUNTAIN IN BOLOGNA DESIGN PROCESS

    Directory of Open Access Journals (Sweden)

    F. I. Apollonio

    2017-05-01

    The sixteenth-century Fountain of Neptune is one of Bologna's most renowned landmarks. During the recent restoration of the monumental sculpture group, consisting of precious marbles and highly refined bronzes with water jets, a photographic campaign was carried out exclusively to document the current state of preservation of the complex. Nevertheless, the high-quality imagery was put to a further use, namely to create a 3D digital model accurate in shape and color by means of automated photogrammetric techniques and a robust customized pipeline. This 3D model was used as a basic tool to support many different activities of the restoration site. The paper describes the 3D model construction technique used and the most important applications in which it served as a support tool for restoration: (i) reliable documentation of the actual state; (ii) surface cleaning analysis; (iii) the new water system and jets; (iv) new lighting design simulation; (v) support for preliminary analysis and design studies related to hardly accessible areas; (vi) structural analysis; (vii) a base for filling gaps or missing elements through 3D printing; (viii) high-quality visualization and rendering; and (ix) support for data modelling and semantic-based diagrams.

  6. Data development technical support document for the aircraft crash risk analysis methodology (ACRAM) standard

    International Nuclear Information System (INIS)

    Kimura, C.Y.; Glaser, R.E.; Mensing, R.W.; Lin, T.; Haley, T.A.; Barto, A.B.; Stutzke, M.A.

    1996-01-01

    The Aircraft Crash Risk Analysis Methodology (ACRAM) Panel has been formed by the US Department of Energy Office of Defense Programs (DOE/DP) for the purpose of developing a standard methodology for determining the risk from aircraft crashes onto DOE ground facilities. In order to accomplish this goal, the ACRAM panel has been divided into four teams: the data development team, the model evaluation team, the structural analysis team, and the consequence team. Each team, consisting of at least one member of the ACRAM Panel plus additional DOE and DOE contractor personnel, specializes in the development of the methodology assigned to that team. This report documents the work performed by the data development team and provides the technical basis for the data used by the ACRAM Standard for determining the aircraft crash frequency. This report should be used to provide the generic data needed to calculate the aircraft crash frequency into the facility under consideration as part of the process for determining the aircraft crash risk to ground facilities as given by the DOE Standard Aircraft Crash Risk Assessment Methodology (ACRAM). Some broad guidance is presented on how to obtain the needed site-specific and facility-specific data, but this data is not provided by this document.

  7. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  8. IMPLEMENTING CHANGES TO AN APPROVED AND IN-USE DOCUMENTED SAFETY ANALYSIS

    International Nuclear Information System (INIS)

    KING JP

    2008-01-01

    The Plutonium Finishing Plant (PFP) has refined a process to ensure comprehensive and complete DSA/TSR change implementation. Successful nuclear facility safety basis implementation is essential to avoid creating a Potential Inadequacy in Safety Analysis (PISA) situation, or placing the facility in a non-compliance that can result in a TSR violation. Once past initial implementation, additional changes to the Documented Safety Analysis (DSA) and Technical Safety Requirements (TSRs) are often needed owing to requirement clarifications, operating experience indicating that Conditions/Required Actions/Surveillance Requirements could be improved, changes in facility conditions, or changes in facility mission. An effective change implementation process is essential to ensuring compliance with 10 CFR 830.202(a): 'The contractor responsible for a hazard category 1, 2, or 3 DOE nuclear facility must establish and maintain the safety basis for the facility.'

  9. Breast cancer histopathology image analysis: a review.

    Science.gov (United States)

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.

  10. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    Multivariate image analysis (MIA), one of the successful chemometric applications, is now used widely in different areas of science and industry. Introduced in the late 80s, it became very popular with hyperspectral imaging, where MIA is one of the most efficient tools for exploratory analysis of images whose pixel counts can be up to several million. The main MIA tool for exploratory analysis is the score density plot: all pixels are projected into principal component space, and the corresponding score plots are colorized according to their density (how many pixels are crowded in the unit area of the plot). Looking for and analyzing patterns on these plots and in the original image allows interactive analysis, extraction of hidden information, building of supervised classification models, and much more. In the present work, several alternative methods to the original principal component analysis (PCA) for building the projection …
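
    A minimal sketch of the score density plot described above, using generic tools (PCA from scikit-learn and a 2-D histogram for pixel density) on a synthetic hyperspectral cube:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        cube = rng.random((100, 120, 8))                  # rows x cols x channels
        pixels = cube.reshape(-1, cube.shape[-1])         # unfold: one row per pixel

        scores = PCA(n_components=2).fit_transform(pixels)

        # Score density plot: bin all pixels in PC1-PC2 space; the bin counts
        # (pixel crowding) provide the colors used for interactive exploration.
        density, xedges, yedges = np.histogram2d(scores[:, 0], scores[:, 1], bins=128)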

  11. Public versus internal conceptions of addiction: An analysis of internal Philip Morris documents.

    Directory of Open Access Journals (Sweden)

    Jesse Elias

    2018-05-01

    Tobacco addiction is a complex, multicomponent phenomenon stemming from nicotine's pharmacology and the user's biology, psychology, sociology, and environment. After decades of public denial, the tobacco industry now agrees with public health authorities that nicotine is addictive. In 2000, Philip Morris became the first major tobacco company to admit nicotine's addictiveness. Evolving definitions of addiction have historically affected subsequent policymaking. This article examines how Philip Morris internally conceptualized addiction immediately before and after this announcement. We analyzed previously secret, internal Philip Morris documents made available as a result of litigation against the tobacco industry. We compared these documents to public company statements and found that Philip Morris's move from public denial to public affirmation of nicotine's addictiveness coincided with pressure on the industry from poor public approval ratings, the Master Settlement Agreement (MSA), the United States government's filing of the Racketeer Influenced and Corrupt Organizations (RICO) suit, and the Institute of Medicine's (IoM's) endorsement of potentially reduced risk products. Philip Morris continued to research the causes of addiction through the 2000s in order to create successful potentially reduced exposure products (PREPs). While Philip Morris's public statements reinforce the idea that nicotine's pharmacology principally drives smoking addiction, company scientists framed addiction as the result of interconnected biological, social, psychological, and environmental determinants, with nicotine as but one component. Due to the fragmentary nature of the industry document database, we may have missed relevant information that could have affected our analysis. Philip Morris's research suggests that tobacco industry activity influences addiction treatment outcomes. Beyond nicotine's pharmacology, the industry's continued aggressive advertising

  12. Evaluation of the Rapid Stain IDentification (RSID™) Reader System for Analysis and Documentation of RSID™ Tests

    Directory of Open Access Journals (Sweden)

    Pravatchai W. Boonlayangoor

    2013-08-01

    The ability to detect the presence of body fluids is a crucial first step in documenting and processing forensic evidence. The Rapid Stain IDentification (RSID™) tests for blood, saliva, semen and urine are lateral flow immunochromatographic strip tests specifically designed for forensic use. Like most lateral flow strips, the membrane components of the test are enclosed in a molded plastic cassette with a sample well and an observation window. No specialized equipment is required to use these tests or to score the results seen in the observation window; however, the utility of these tests can be enhanced if an electronic record of the test results can be obtained, preferably by a small hand-held device that could be used in the field under low light conditions. Such a device should also be able to “read” the lateral flow strips and accurately record the results of the test as either positive, i.e., the body fluid was detected, or negative, i.e., the body fluid was not detected. Here we describe the RSID™ Reader System, a ruggedized strip test reader unit that allows analysis and documentation of RSID™ lateral flow strip tests using pre-configured settings, and show that the RSID™ Reader can accurately and reproducibly report and record correct results from RSID™ blood, saliva, semen, and urine tests.

  13. Upgraded safety analysis document including operations policies, operational safety limits and policy changes. Revision 2

    International Nuclear Information System (INIS)

    Batchelor, K.

    1996-03-01

    The National Synchrotron Light Source Safety Analysis Reports (1), (2), (3) (BNL reports #51584, #52205 and #52205 (addendum)) describe the basic environmental, safety and health issues associated with the department's operations. They include the operating envelope for the storage rings and also for the rest of the facility. These documents contain the operational limits as perceived prior to or during construction of the facility, much of which is still appropriate for current operations. However, as the machine has matured, the experimental program has grown in size, requiring more supervision in that area. Also, machine studies have either verified or modified knowledge of beam loss modes and/or radiation loss patterns around the facility. This document is written to allow for these changes in procedure or standards resulting from the current mode of operation and shall be used in conjunction with the above reports. These changes have been reviewed by the NSLS and BNL ES&H committees and approved by BNL management.

  14. Applying a sociolinguistic model to the analysis of informed consent documents.

    Science.gov (United States)

    Granero-Molina, José; Fernández-Sola, Cayetano; Aguilera-Manrique, Gabriel

    2009-11-01

    Information on the risks and benefits related to surgical procedures is essential for patients in order to obtain their informed consent. Some disciplines, such as sociolinguistics, offer insights that are helpful for patient-professional communication in both written and oral consent. Communication difficulties become more acute when patients make decisions through an informed consent document because they may sign this with a lack of understanding and information, and consequently feel deprived of their freedom to make their choice about different treatments or surgery. This article discusses findings from documentary analysis using the sociolinguistic SPEAKING model, which was applied to the general and specific informed consent documents required for laparoscopic surgery of the bile duct at Torrecárdenas Hospital, Almería, Spain. The objective of this procedure was to identify flaws when information was provided, together with its readability, its voluntary basis, and patients' consent. The results suggest potential linguistic communication difficulties, different languages being used, cultural clashes, asymmetry of communication between professionals and patients, assignment of rights on the part of patients, and overprotection of professionals and institutions.

  15. Reading the Music and Understanding the Therapeutic Process: Documentation, Analysis and Interpretation of Improvisational Music Therapy

    Directory of Open Access Journals (Sweden)

    Deborah Parker

    2011-01-01

    This article is concerned primarily with the challenges of presenting clinical material from improvisational music therapy. My aim is to propose a model for the transcription of music therapy material, or “musicotherapeutic objects” (comparable to Bion’s “psychoanalytic objects”), which preserves the integrated “gestalt” of the musical experience as far as possible, whilst also supporting detailed analysis and interpretation. Unwilling to resort to visual documentation, but aware that many important indicators in music therapy are non-sounding, I propose a richly annotated score, where traditional music notation is integrated with graphic and verbal additions in order to document non-sounding events. This model is illustrated within the context of a clinical case with a high-functioning autistic woman. The four transcriptions, together with the original audio tracks, present significant moments during the course of music therapy, attesting to the development of the dyadic relationship, with reference to John Bowlby’s concept of a “secure base” as the most appropriate dynamic environment for therapy.

  16. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. Captured images of the street are compared with reference images in order to determine the traffic level as a percentage and to set the traffic signal timing accordingly, thereby reducing stoppage at traffic lights. The proposal addresses real street scenarios by augmenting traffic lights with image receivers such as HD cameras and image processors. The input is imported into MATLAB, which serves as the method for calculating the traffic on the roads, and the computed results are used to adjust the traffic light timings on a particular street. Compared with other similar proposals, this work adds the value of addressing a real, large-scale instance.
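
    A minimal sketch of the image-comparison idea, assuming a simple frame-differencing scheme against an empty-street reference (the paper's exact MATLAB pipeline is not specified here):

        import numpy as np

        def traffic_level(reference, current, threshold=0.15):
            """Percentage of pixels that differ from the empty-street reference,
            used as a crude proxy for road occupancy."""
            changed = np.abs(current.astype(float) - reference.astype(float)) > threshold
            return 100.0 * changed.mean()

        def green_duration(level, base=10.0, per_percent=0.5, cap=60.0):
            """Map the occupancy percentage to a green-light duration in seconds."""
            return min(base + per_percent * level, cap)

        rng = np.random.default_rng(5)
        empty = rng.random((120, 160))
        busy = empty.copy()
        busy[40:80, 30:130] += 0.5                    # synthetic "vehicles"
        level = traffic_level(empty, busy)
        print(f"occupancy {level:.1f}% -> green {green_duration(level):.0f} s")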

  17. Guidelines for economic analysis of pharmaceutical products: a draft document for Ontario and Canada.

    Science.gov (United States)

    Detsky, A S

    1993-05-01

    In Canada, provincial formulary review committees consider the effectiveness, safety, and cost of products when they derive advice for each Minister of Health. This article offers a draft set of guidelines for pharmaceutical manufacturers making submissions which include economic information, moving beyond a simple presentation of the unit price of the pharmaceutical product (e.g. price per day or course of therapy) and comparison to similar prices for alternative products. A full economic analysis compares all relevant costs and clinical outcomes of the new product with alternate therapeutic strategies for treating patients with a particular condition. The perspective of the decision maker must be clearly identified. The quality of the evidence supporting estimates of the variables incorporated in the analysis should be evaluated. Sensitivity analyses are used to assess the robustness of the qualitative conclusions. Reviewers will examine the answers to a set of 19 questions. Manufacturers can use these questions as a worksheet for preparation of an economic analysis to be incorporated in a submission. These guidelines are intended to be a starting point for further refinement, and discussion with health economists in industry and academia. Considerable flexibility will be used in reviewing documentation supporting economic analysis. Those preparing submissions should be encouraged to experiment with various approaches as part of the general development of this field and to engage provincial review committees in ongoing discussions.

  18. Development of Image Analysis Software of MAXI

    Science.gov (United States)

    Eguchi, S.; Ueda, Y.; Hiroi, K.; Isobe, N.; Sugizaki, M.; Suzuki, M.; Tomida, H.; Maxi Team

    2010-12-01

    Monitor of All-sky X-ray Image (MAXI) is an X-ray all-sky monitor, attached to the Japanese experiment module Kibo on the International Space Station. The main scientific goals of the MAXI mission include the discovery of X-ray novae followed by prompt alerts to the community (Negoro et al., in this conference), and production of X-ray all-sky maps and new source catalogs with unprecedented sensitivities. To extract the best capabilities of the MAXI mission, we are working on the development of detailed image analysis tools. We utilize maximum likelihood fitting to a projected sky image, where we take account of the complicated detector responses, such as the background and point spread functions (PSFs). The modeling of PSFs, which strongly depend on the orbit and attitude of MAXI, is a key element in the image analysis. In this paper, we present the status of our software development.
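
    The maximum likelihood fitting mentioned above can be illustrated with a one-parameter toy problem: recovering a point-source flux from Poisson counts given a known PSF and background (all parameters invented for illustration):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_log_likelihood(flux, counts, psf, background):
            """Poisson negative log-likelihood for model = flux * PSF + background."""
            model = flux * psf + background
            return float(np.sum(model - counts * np.log(model)))

        rng = np.random.default_rng(6)
        y, x = np.mgrid[-15:16, -15:16]
        psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
        psf /= psf.sum()                               # normalized Gaussian PSF
        true_flux, background = 500.0, 0.2
        counts = rng.poisson(true_flux * psf + background)

        fit = minimize_scalar(neg_log_likelihood, bounds=(1.0, 5000.0),
                              args=(counts, psf, background), method="bounded")
        print(f"ML flux estimate: {fit.x:.1f} (true {true_flux})")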

  19. Digital image analysis of NDT radiographs

    International Nuclear Information System (INIS)

    Graeme, W.A. Jr.; Eizember, A.C.; Douglass, J.

    1989-01-01

    Prior to the introduction of charge-coupled device (CCD) detectors, the majority of image analysis performed on NDT radiographic images was done visually in the analog domain. While some film digitization was being performed, the process was often unable to capture all the usable information on the radiograph or was too time-consuming. CCD technology now provides a method to digitize radiographic film images in a timely process without losing the useful information captured in the original radiograph. Incorporating that technology into a complete digital radiographic workstation allows analog radiographic information to be processed, providing additional information to the radiographer. Once in the digital domain, the data can be stored and fused with radioscopic and other forms of digital data. The result is more productive analysis and management of radiographic inspection data. The principal function of the NDT Scan IV digital radiography system is the digitization, enhancement and storage of radiographic images.

  20. Market Analysis and Consumer Impacts Source Document. Part III. Consumer Behavior and Attitudes Toward Fuel Efficient Vehicles

    Science.gov (United States)

    1980-12-01

    This source document on motor vehicle market analysis and consumer impacts consists of three parts. Part III consists of studies and reviews on: consumer awareness of fuel efficiency issues; consumer acceptance of fuel efficient vehicles; car size ch...

  1. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  2. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
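
    The two-wavelength principle described above reduces, per pixel, to a brightness ratio looked up against a calibration curve. A minimal sketch with an invented, monotonic calibration (not the actual CIAS calibration):

        import numpy as np

        def temperature_map(img_w1, img_w2, calib_ratio, calib_temp):
            """Convert the per-pixel brightness ratio at two fluorescence
            wavelengths into temperature via a monotonic calibration curve."""
            ratio = img_w1 / np.maximum(img_w2, 1e-9)
            return np.interp(ratio, calib_ratio, calib_temp)

        calib_ratio = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # hypothetical
        calib_temp = np.array([300.0, 330.0, 360.0, 390.0, 420.0])  # kelvin

        rng = np.random.default_rng(7)
        i1 = 1.0 + rng.random((48, 48))               # image at wavelength 1
        i2 = 1.0 + rng.random((48, 48))               # image at wavelength 2
        T = temperature_map(i1, i2, calib_ratio, calib_temp)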

  3. A document analysis of drowning prevention education resources in the United States.

    Science.gov (United States)

    Katchmarchi, Adam Bradley; Taliaferro, Andrea R; Kipfer, Hannah Joy

    2018-03-01

    There have been long-standing calls to better educate the public at large on risks of drowning; yet limited evaluation has taken place on current resources in circulation. The purpose of this qualitative research is to develop an understanding of the content in currently circulated drowning prevention resources in the United States. Data points (n = 451) consisting of specific content within 25 different drowning prevention educational resources were analyzed using document analysis methods; a grounded theory approach was employed to allow for categorical development and indexing of the data. Results revealed six emerging categories, including safety precautions (n = 152), supervision (n = 109), preventing access (n = 57), safety equipment (n = 46), emergency procedures (n = 46), and aquatic education (n = 41). Results provide an initial insight into the composition of drowning prevention resources in the United States and provide a foundation for future research.

  4. The role of business agreements in defining textbook affordability and digital materials: A document analysis

    Directory of Open Access Journals (Sweden)

    John Raible

    2015-12-01

    Adopting digital materials such as eTextbooks and e-coursepacks is a potential strategy to address textbook affordability in the United States. However, university business relationships with bookstore vendors implicitly structure which instructional resources are available and in what manner. In this study, a document analysis was conducted on the bookstore contracts for the universities included in the State University System of Florida. Namely, issues of textbook affordability, digital material terminology and seller exclusivity were investigated. It was found that textbook affordability was generally conceived in terms of print rental textbooks and buyback programs, and that eTextbooks were priced higher than print textbooks (25% to 30% markup). Implications and recommendations for change are shared. DOI: 10.18870/hlrc.v5i4.284

  5. Laue image analysis. Pt. 2

    International Nuclear Information System (INIS)

    Greenhough, T.J.; Shrive, A.K.

    1994-01-01

    Many Laue diffraction patterns from crystals of particular biological or chemical interest are of insufficient quality for their analysis to be feasible. In many cases, this is because of pronounced streaking of the spots owing to either large mosaic spread or disorder introduced during reactions in the crystal. Methods for the analysis of exposures exhibiting radial or near-radial streaking are described, along with their application in Laue diffraction studies of form-II crystals of Met-tRNA synthetase and a photosynthetic reaction centre from Rhodobacter sphaeroides. In both cases, variable elliptical radial masking has led to significant improvements in data quality and quantity and exposures that previously were too streaked to process may now be analysed. These masks can also provide circular profiles as a special case for processing high-quality Laue exposures and spatial-overlap deconvolution may be performed using the elliptical or circular masks. (orig.)

  6. Multisource Images Analysis Using Collaborative Clustering

    Directory of Open Access Journals (Sweden)

    Pierre Gançarski

    2008-04-01

    The development of very high-resolution (VHR) satellite imagery has produced a huge amount of data. The multiplication of satellites that embed different types of sensors provides many heterogeneous images. Consequently, the image analyst often has many different images available, representing the same area of the Earth's surface. These images can be from different dates, produced by different sensors, or even at different resolutions. The lack of machine learning tools using all these representations in an overall process constrains the analyst to a sequential analysis of these various images. In order to use all the information available simultaneously, we propose a framework where different algorithms can use different views of the scene. Each one works on a different remotely sensed image and thus produces different and useful information. These algorithms work together in a collaborative way through an automatic and mutual refinement of their results, so that all the results have almost the same number of clusters, which are statistically similar. Finally, a unique result is produced, representing a consensus among the information obtained by each clustering method on its own image. The unified result and the complementarity of the single results (i.e., the agreement between the clustering methods as well as the disagreement) lead to a better understanding of the scene. The experiments carried out on multispectral remote sensing images have shown that this method is efficient at extracting relevant information and improving scene understanding.

  7. Canister storage building (CSB) safety analysis report phase 3: Safety analysis documentation supporting CSB construction

    International Nuclear Information System (INIS)

    Garvin, L.J.

    1997-01-01

    The Canister Storage Building (CSB) will be constructed in the 200 East Area of the U.S. Department of Energy (DOE) Hanford Site. The CSB will be used to stage and store spent nuclear fuel (SNF) removed from the Hanford Site K Basins. The objective of this chapter is to describe the characteristics of the site on which the CSB will be located. This description will support the hazard analysis and accident analyses in Chapter 3.0. The purpose of this report is to provide an evaluation of the CSB design criteria, the design's compliance with the applicable criteria, and the basis for authorization to proceed with construction of the CSB

  8. Risk D&D Rapid Prototype: Scenario Documentation and Analysis Tool

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Seiple, Timothy E.

    2009-01-01

    This report describes the process and methodology associated with a rapid prototype tool for integrating project risk analysis and health and safety (H&S) risk analysis for decontamination and decommissioning (D&D) projects. The objective of the D&D Risk Management Evaluation and Work Sequencing Standardization Project under DOE EM-23 is to recommend or develop practical risk-management tools for decommissioning of nuclear facilities. PNNL has responsibility under this project for recommending or developing computer-based tools that facilitate the evaluation of risks in order to optimize the sequencing of D&D work. PNNL's approach is to adapt, augment, and integrate existing resources rather than to develop a new suite of tools. Methods for the evaluation of H&S risks associated with work in potentially hazardous environments are well established. Several approaches exist which, collectively, are referred to as process hazard analysis (PHA). A PHA generally involves the systematic identification of accidents, exposures, and other adverse events associated with a given process or work flow. This identification process is usually achieved in a brainstorming environment or by other means of eliciting informed opinion. The likelihoods of adverse events (scenarios) and their associated consequence severities are estimated against pre-defined scales, based on which risk indices are then calculated. A similar process is encoded in various project risk software products that facilitate the quantification of schedule and cost risks associated with adverse scenarios. However, risk models do not generally capture both project risk and H&S risk. The intent of the project reported here is to produce a tool that facilitates the elicitation, characterization, and documentation of both project risk and H&S risk based on defined sequences of D&D activities. By considering alternative D&D sequences, comparison of the predicted risks can

  9. The Role of Development Agencies in Touristic Branding of Cities, A Document Analysis on Regional Plans

    Directory of Open Access Journals (Sweden)

    Emrah ÖZKUL

    2012-12-01

    The objective of the present research is to determine the role of development agencies in the branding of cities in their regions. The roles of the development agencies, such as identifying unknown tourist values and determining and remedying deficiencies and opportunities, were investigated in accordance with each agency's goals and objectives. To achieve this goal, document analysis, a qualitative research method, was used, and the Regional Plans published by the Development Agencies were examined. The data obtained were subjected to descriptive analysis and, in the case of some unidentified concepts, to in-depth content analysis. Despite all the advantages Turkey possesses, many regions in Turkey have not been promoted sufficiently at the national and international level, and so the tourism industry has been overshadowed by the industrial and agricultural sectors. For this reason, the development agencies, by determining the values of regional tourism, have undertaken the task of changing the perceptions of tourist consumers through targeted projects in order to achieve city branding. Thus, it was concluded that cities could become centers of attraction and brands for both investors and visitors.

  10. Final Safety Analysis Document for Building 693 Chemical Waste Storage Building at Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Salazar, R.J.; Lane, S.

    1992-02-01

    This Safety Analysis Document (SAD) for the Lawrence Livermore National Laboratory (LLNL) Building 693, Chemical Waste Storage Building (designated as the Building 693 Container Storage Unit in the Laboratory's RCRA Part B permit application), provides the necessary information and analyses to conclude that Building 693 can be operated at low risk without unduly endangering the safety of the building operating personnel or adversely affecting the public or the environment. This Building 693 SAD consists of eight sections and supporting appendices. Section 1 presents a summary of the facility design and operations, and Section 2 summarizes the safety analysis method and results. Section 3 describes the site, the facility design, operations and the management structure. Sections 4 and 5 present the safety analysis and operational safety requirements (OSRs). Section 6 reviews Hazardous Waste Management's (HWM) Quality Assurance (QA) program. Section 7 lists the references and background material used in the preparation of this report. Section 8 lists acronyms, abbreviations and symbols. Appendices contain supporting analyses, definitions, and descriptions that are referenced in the body of this report.

  11. Canister storage building (CSB) safety analysis report phase 3: Safety analysis documentation supporting CSB construction

    Energy Technology Data Exchange (ETDEWEB)

    Garvin, L.J.

    1997-04-28

    The Canister Storage Building (CSB) will be constructed in the 200 East Area of the U.S. Department of Energy (DOE) Hanford Site. The CSB will be used to stage and store spent nuclear fuel (SNF) removed from the Hanford Site K Basins. The objective of this chapter is to describe the characteristics of the site on which the CSB will be located. This description will support the hazard analysis and accident analyses in Chapter 3.0. The purpose of this report is to provide an evaluation of the CSB design criteria, the design's compliance with the applicable criteria, and the basis for authorization to proceed with construction of the CSB.

  12. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
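
    A rough sketch of the storyboard-border idea using stock OpenCV operators (Canny plus a probabilistic Hough transform) in place of the paper's custom edge-point chaining and top-down line detector:

        import cv2
        import numpy as np

        def panel_border_lines(page_gray):
            """Detect long straight segments that are candidate storyboard borders."""
            edges = cv2.Canny(page_gray, 50, 150)
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                    minLineLength=page_gray.shape[1] // 4,
                                    maxLineGap=5)
            return [] if lines is None else [tuple(l[0]) for l in lines]

        # Synthetic page: white background with two black panel rectangles
        page = np.full((400, 300), 255, dtype=np.uint8)
        cv2.rectangle(page, (20, 20), (280, 180), 0, 2)
        cv2.rectangle(page, (20, 220), (280, 380), 0, 2)
        print(f"{len(panel_border_lines(page))} candidate border segments")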

  13. Applications Of Binary Image Analysis Techniques

    Science.gov (United States)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by a robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.

  14. Systematic documentation and analysis of human genetic variation using the microattribution approach

    Science.gov (United States)

    Giardine, Belinda; Borg, Joseph; Higgs, Douglas R.; Peterson, Kenneth R.; Maglott, Donna; Basak, A. Nazli; Clark, Barnaby; Faustino, Paula; Felice, Alex E.; Francina, Alain; Gallivan, Monica V. E.; Georgitsi, Marianthi; Gibbons, Richard J.; Giordano, Piero C.; Harteveld, Cornelis L.; Joly, Philippe; Kanavakis, Emmanuel; Kollia, Panagoula; Menzel, Stephan; Miller, Webb; Moradkhani, Kamran; Old, John; Papachatzopoulou, Adamantia; Papadakis, Manoussos N.; Papadopoulos, Petros; Pavlovic, Sonja; Philipsen, Sjaak; Radmilovic, Milena; Riemer, Cathy; Schrijver, Iris; Stojiljkovic, Maja; Thein, Swee Lay; Traeger-Synodinos, Jan; Tully, Ray; Wada, Takahito; Waye, John; Wiemann, Claudia; Zukic, Branka; Chui, David H. K.; Wajcman, Henri; Hardison, Ross C.; Patrinos, George P.

    2013-01-01

    We developed a series of interrelated locus-specific databases to store all published and unpublished genetic variation related to these disorders, and then implemented microattribution to encourage submission of unpublished observations of genetic variation to these public repositories [1]. A total of 1,941 unique genetic variants in 37 genes, encoding globins (HBA2, HBA1, HBG2, HBG1, HBD, HBB) and other erythroid proteins (ALOX5AP, AQP9, ARG2, ASS1, ATRX, BCL11A, CNTNAP2, CSNK2A1, EPAS1, ERCC2, FLT1, GATA1, GPM6B, HAO2, HBS1L, KDR, KL, KLF1, MAP2K1, MAP3K5, MAP3K7, MYB, NOS1, NOS2, NOS3, NOX3, NUP133, PDE7B, SMAD3, SMAD6, and TOX) are currently documented in these databases with reciprocal attribution of microcitations to data contributors. Our project provides the first example of implementing microattribution to incentivise submission of all known genetic variation in a defined system. It has demonstrably increased the reporting of human variants and now provides a comprehensive online resource for systematically describing human genetic variation in the globin genes and other genes contributing to hemoglobinopathies and thalassemias. The large repository of previously reported data, together with more recent data acquired by microattribution, demonstrates how the comprehensive documentation of human variation will provide key insights into normal biological processes and how these are perturbed in human genetic disease. Using the microattribution process set out here, datasets which took decades to accumulate for the globin genes could be assembled rapidly for other genes and disease systems. The principles established here for the globin gene system will serve as a model for other systems and the analysis of other common and/or complex human genetic diseases. PMID:21423179

  15. Analysis and decision document in support of acquisition of steam supply for the Hanford 200 Area

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D.R.; Daellenbach, K.K.; Hendrickson, P.L.; Kavanaugh, D.C.; Reilly, R.W.; Shankle, D.L.; Smith, S.A.; Weakley, S.A.; Williams, T.A. (Pacific Northwest Lab., Richland, WA (United States)); Grant, T.F. (Battelle Human Affairs Research Center, Seattle, WA (United States))

    1992-02-01

    The US Department of Energy (DOE) is now evaluating its facility requirements in support of its cleanup mission at Hanford. One of the early findings is that the 200-Area steam plants, constructed in 1943, will not meet future space heating and process needs. Because the 200 Area will serve as the primary area for waste treatment and long-term storage, a reliable steam supply is a critical element of Hanford operations. This Analysis and Decision Document (ADD) is a preliminary review of the steam supply options available to the DOE. The ADD contains a comprehensive evaluation of the two major acquisition options: line-item versus privatization. It addresses the life-cycle costs associated with each alternative, as well as factors such as contracting requirements and the impact of market, safety, security, and regulatory issues. Specifically, this ADD documents current and future steam requirements for the 200 Area, describes alternatives available to DOE for meeting these requirements, and compares the alternatives across a number of decision criteria, including life-cycle cost. DOE has currently limited the ADD evaluation alternatives to replacing central steam plants rather than expanding the study to include alternative heat sources, such as a distributed network of boilers or heat pumps. Thirteen project alternatives were analyzed in the ADD. One of the alternatives was the rehabilitation of the existing 200-East coal-fired facility. The other twelve alternatives are combinations of (1) coal- or gas-fueled plants, (2) steam-only or cogeneration facilities, (3) primary or secondary cogeneration of electricity, and (4) public or private ownership.

  16. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that no constitutive materials with extreme properties are required to construct devices that conceal an object, making most, if not all, of the above functions realizable using naturally occurring materials. As examples, we experimentally verify a method for directionally hiding distant objects and creating illusions using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
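
    A minimal sketch of the underlying idea: treating an imaging or cloaking device as a linear system characterized by a transfer function in the Fourier domain. The Gaussian transfer function and all parameters below are illustrative assumptions, not the authors' cloak design.

    ```python
    import numpy as np

    def apply_transfer_function(image, H):
        """Model an optical system as a linear filter: multiply the image
        spectrum by the transfer function H (given with DC at the centre),
        then transform back to the spatial domain."""
        spectrum = np.fft.fft2(image)
        filtered = spectrum * np.fft.ifftshift(H)  # move DC back to the corner
        return np.real(np.fft.ifft2(filtered))

    # Hypothetical example: a Gaussian low-pass transfer function.
    ny, nx = 256, 256
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    H = np.exp(-(kx**2 + ky**2) / (2 * 20.0**2))

    img = np.random.default_rng(0).random((ny, nx))
    out = apply_transfer_function(img, H)
    ```

    In this picture, synthesizing a different H (for instance, one that cancels the scattering of a known layer) corresponds to building a device with a different function.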

  17. Fourier analysis: from cloaking to imaging

    International Nuclear Information System (INIS)

    Wu, Kedi; Ping Wang, Guo; Cheng, Qiluan

    2016-01-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that no constitutive materials with extreme properties are required to construct devices that conceal an object, making most, if not all, of the above functions realizable using naturally occurring materials. As examples, we experimentally verify a method for directionally hiding distant objects and creating illusions using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers. (review)

  18. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    High-resolution transmission electron microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying physical phenomena or structure is not always straightforward. The aim of this thesis is to use image simulation to better understand observations from HRTEM images. Surface strain is known to be important for the performance of nanoparticles. Using simulation, we estimate the precision and accuracy of strain measurements from TEM images, and investigate the stability of these measurements with respect to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential.

  19. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality, presented in two research directions.

  20. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
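
    As a concrete illustration of the hierarchical feature learning the review describes, the sketch below defines a deliberately small convolutional network for classifying 64x64 grayscale patches (for example, lesion versus background). The architecture and sizes are assumptions for illustration, not a model from the review.

    ```python
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        """Two convolution/pooling stages learn low- and mid-level features;
        a final linear layer maps them to two class scores."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 32 -> 16
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(start_dim=1))

    model = PatchClassifier()
    logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 dummy patches
    ```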

  1. Data Analysis Strategies in Medical Imaging.

    Science.gov (United States)

    Parmar, Chintan; Barry, Joseph D; Hosny, Ahmed; Quackenbush, John; Aerts, Hugo Jwl

    2018-03-26

    Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. The growing sophistication of artificial intelligence (AI) has allowed for detailed quantification of the radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, there are critical challenges associated with the analysis of medical imaging data. While some of these challenges are specific to the imaging field, many others, like reproducibility and batch effects, are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies for medical imaging data, including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality but will also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources. Copyright ©2018, American Association for Cancer Research.
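
    One of the normalization steps the authors recommend can be sketched very simply: rescale each scan to zero mean and unit variance (optionally within a foreground mask) before modelling, so that intensities from different scanners become comparable. This is a generic z-score sketch, not the paper's specific pipeline.

    ```python
    import numpy as np

    def zscore_normalize(volume, mask=None):
        """Normalize scan intensities to zero mean / unit variance,
        optionally restricted to a foreground mask."""
        voxels = volume[mask] if mask is not None else volume
        mu, sigma = voxels.mean(), voxels.std()
        return (volume - mu) / (sigma + 1e-8)
    ```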

  2. Local development strategies for inner areas in Italy. A comparative analysis based on plan documents

    Directory of Open Access Journals (Sweden)

    Gabriella Punziano

    2016-12-01

    Within the huge literature on local development policies produced across different disciplines, comparatively little attention has been paid to an element as important as economic, financial and social capital: the cognitive element needed in strategic thinking and complexity management, the "collective brain" guiding the decision-making process. In this paper, we investigate what we consider a direct "proxy" for this variable, which is supposed to incorporate the "usable knowledge" assisting those making policy choices: language. Language shapes the way problems are conceived, fixes priorities and delimits the range of strategic options. More specifically, our research question asks which contextual factors are at stake in local development strategy design. The case studies were chosen among the pilot areas included in the Italian "National Strategy for Inner Areas". Through a multidimensional content analysis of the plan documents available online, we explored the ways in which development strategies are locally interpreted. The techniques we used allowed us to make a comparative analysis, testing three effects that could have influenced local policy design: a geographical effect, a concept/policy transfer effect, and a framing effect. Broader reflections on the locally embedded ability to design consistent and effective development strategies were drawn from the research findings.

  3. Analysis Relationship Among Descriptor, References and Citation to Contruct the Inherent Structure of Document Collection

    International Nuclear Information System (INIS)

    Hasibuan, Zainal A.; Mustangimah

    2001-01-01

    There are many characteristics that can be used to identify a document, covering the document itself, the documents it cites, and the documents citing it. This research explored the inherent structure of a document collection as one of the main components of an information retrieval system. The characteristics examined are: descriptors, references (cited documents), and citations (citing documents). Three independent variables were studied: co-descriptor, bibliographic coupling, and co-citation. A test collection was constructed by searching on the single descriptor "information retrieval" in the CD-ROM version of the Education Resource Information Clearinghouse (ERIC), covering the period 1981 through 1985. Descriptors were extracted from ERIC; cited and citing documents associated with the test collection were derived from the Social Sciences Citation Index (SSCI), covering the period 1981 through 1990. Three hypotheses were tested in this study: (1) the higher the frequency of co-descriptors between documents, the higher the frequencies of their bibliographic coupling and co-citation; (2) the higher the frequency of bibliographic coupling between documents, the higher the frequencies of their co-citation and co-descriptors; and (3) the higher the frequency of co-citation between documents, the higher the frequencies of their co-descriptors and bibliographic coupling. The results showed that all three hypotheses are supported statistically and that there is a significant linear relationship among the observed variables. This means that there is a significant relationship among descriptors, references, and citations, which can be used to construct the inherent structure of a document collection in order to improve information retrieval system performance.
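
    The three variables have a compact matrix formulation, sketched below with tiny made-up incidence matrices: given a document-by-descriptor matrix D and a document-by-reference matrix R, the off-diagonal entries of D Dᵀ count shared descriptors (co-descriptor) and those of R Rᵀ count shared references (bibliographic coupling); co-citation is the analogous product on the citing side.

    ```python
    import numpy as np

    # Toy incidence matrices (rows: documents). Entries are 1 when the
    # document carries that descriptor / cites that reference.
    D = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]])            # 3 docs x 3 descriptors
    R = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 1]])         # 3 docs x 4 references

    co_descriptor = D @ D.T    # shared descriptors per document pair
    bib_coupling = R @ R.T     # shared references per document pair
    # With a doc x citing-doc matrix C (C[i, j] = 1 if doc j cites doc i),
    # co_citation = C @ C.T counts how often two documents are cited together.
    ```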

  4. Pilot production system cost/benefit analysis: Digital document storage project

    Science.gov (United States)

    1989-01-01

    The Digital Document Storage (DDS)/Pilot Production System (PPS) will provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The DDS/PPS will result in major benefits, such as improved document reproduction quality within a shorter time frame than is currently possible. In addition, the DDS/PPS will provide an important strategic value through the construction of a digital document archive. It is highly recommended that NASA proceed with the DDS Prototype System and a rapid prototyping development methodology in order to validate recent working assumptions upon which the success of the DDS/PPS is dependent.

  5. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. The pellets were divided into two groups: one with pellets coated using synthetic astaxanthin in fish oil and the other with pellets coated...

  6. A virtual laboratory for medical image analysis

    NARCIS (Netherlands)

    Olabarriaga, Sílvia D.; Glatard, Tristan; de Boer, Piter T.

    2010-01-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented

  7. Scanning transmission electron microscopy imaging and analysis

    CERN Document Server

    Pennycook, Stephen J

    2011-01-01

    Provides the first comprehensive treatment of the physics and applications of this mainstream technique for imaging and analysis at the atomic level; presents applications of STEM in condensed matter physics, materials science, catalysis, and nanoscience; suitable for graduate students learning microscopy, researchers wishing to utilize STEM, and specialists in other areas of microscopy. Edited and written by leading researchers and practitioners.

  8. Analysis of the documents about the core envelopment of nuclear reactor at the Laguna Verde U-1 power plant

    International Nuclear Information System (INIS)

    Zamora R, L.; Medina F, A.

    1999-01-01

    The degradation of internal components in BWR-type reactors is an important subject to consider for the performance and availability of a power plant. The Wuergassen nuclear reactor license was revoked due to the presence of cracking in the core envelopment. In consequence, it is necessary to carry out a detailed study in order to avoid these problems in the future. This report presents a review and analysis of documents and technical information referring to the core envelopment of a BWR/5/6 and of the Laguna Verde Unit 1 nuclear reactor in Mexico. The document presents design data and documents covering the fabrication and manufacturing processes of the core envelopment. (Author)

  9. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques with the use of fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermal acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
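
    The PSD-based stability check described here can be sketched as follows: collapse each frame to a mean luminosity, then estimate the power spectral density of that time series; pronounced peaks indicate oscillatory instability. The function and variable names (frames, frame_rate) are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.signal import welch

    def flame_luminosity_psd(frames, frame_rate):
        """frames: (n_frames, height, width) grayscale sequence from a
        high-speed camera; returns frequencies and PSD of the mean
        luminosity signal."""
        luminosity = frames.reshape(len(frames), -1).mean(axis=1)
        freqs, psd = welch(luminosity - luminosity.mean(), fs=frame_rate)
        return freqs, psd
    ```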

  10. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
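
    A minimal sketch of the reconstruction idea, under the assumption of a (time, height, width) array of single-channel frames: remove the slow luminosity per pixel (a crude temporal high-pass), FFT along time, and read off the spatial amplitude map at a resonant frequency, which visualizes that mode shape. This is an illustration of the approach, not the authors' exact filter.

    ```python
    import numpy as np

    def resonance_amplitude_map(frames, frame_rate, f_target):
        """Return the frequency closest to f_target and the per-pixel
        FFT amplitude there, i.e. an image of that oscillation mode."""
        osc = frames - frames.mean(axis=0)      # crude temporal high-pass
        spectra = np.fft.rfft(osc, axis=0)      # per-pixel temporal FFT
        freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / frame_rate)
        k = np.argmin(np.abs(freqs - f_target))
        return freqs[k], np.abs(spectra[k])
    ```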

  11. Documentation and hydrologic analysis of Hurricane Sandy in New Jersey, October 29–30, 2012

    Science.gov (United States)

    Suro, Thomas P.; Deetz, Anna; Hearn, Paul

    2016-11-17

    higher than the previously recorded period-of-record maximum. A comparison of peak storm-tide elevations to preliminary FEMA Coastal Flood Insurance Study flood elevations indicated that these areas experienced the highest recurrence intervals along the coast of New Jersey. Analysis showed peak storm-tide elevations exceeded the 100-year FEMA flood elevations in many parts of Middlesex, Union, Essex, Hudson, and Bergen Counties, and peak storm-tide elevations at many locations in Monmouth County exceeded the 500-year recurrence interval. A level 1 HAZUS (HAZards United States) analysis was done for the counties in New Jersey affected by flooding to estimate total building stock losses. The aggregated total building stock losses estimated by HAZUS for New Jersey, on the basis of the final inundation verified by USGS high-water marks, was almost $19 billion. A comparison of Hurricane Sandy with historic coastal storms showed that peak storm-tide elevations associated with Hurricane Sandy exceeded most of the previously documented elevations associated with the storms of December 1992, March 1962, September 1960, and September 1944 at many coastal communities in New Jersey. This scientific investigation report was prepared in cooperation with FEMA to document flood processes and flood damages resulting from this storm and to assist in future flood mitigation actions in New Jersey.

  12. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particles, there is in practice a nondestructive technique named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to perform a computed analysis of a film using a facility with the following main components: an illuminator for the film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of Carl Zeiss Jena type, which was adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, was developed at INR Pitesti which, together with the program SMTV II of the acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  13. Teaching Integrity in Empirical Research: A Protocol for Documenting Data Management and Analysis

    Science.gov (United States)

    Ball, Richard; Medeiros, Norm

    2012-01-01

    This article describes a protocol the authors developed for teaching undergraduates to document their statistical analyses for empirical research projects so that their results are completely reproducible and verifiable. The protocol is guided by the principle that the documentation prepared to accompany an empirical research project should be…

  14. A cost-benefit analysis of document management strategies used at a financial institution in Zimbabwe: A case study

    Directory of Open Access Journals (Sweden)

    Rodreck David

    2013-07-01

    Objectives: This study investigated a commercial bank's document management approaches in a bid to ascertain the costs and benefits of each strategy and related issues. Method: A quantitative research approach was employed through a case study, which was used to gather data from a sampled population in the bank. Results: The document management approaches used were not coordinated to improve operational efficiency. There were regulations governing document management. The skills and competences of staff in both document management and cost analysis are limited, partly due to the limited training opportunities available to them. As a result, economies are not achieved in the management of records, which has a negative impact on the overall efficiency, effectiveness and legal compliance of the banking institution. Conclusion: The financial institution should create regulations enabling a periodic cost-benefit analysis of the document management regimes used by the bank, at least at quarterly intervals as recommended by the National Archives of Australia. A hybrid approach to managing records is recommended for adoption by the financial institution. There should be on-the-job staff training, complemented by attendance at relevant workshops and seminars, to improve the staff's understanding of both the cost-benefit analysis concept and document management.

  15. DOE Integrated Safeguards and Security (DISS) historical document archival and retrieval analysis, requirements and recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Guyer, H.B.; McChesney, C.A.

    1994-10-07

    The overall primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve and manage documents currently contained in the personnel security folders in DOE Operations Offices vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.

  16. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  17. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward …)

  18. Documental analysis of Brazilian academic production about evolution teaching (1990-2010: characterization and proposals

    Directory of Open Access Journals (Sweden)

    Caio Samuel Franciscati da Silva

    2013-08-01

    The quantitative and qualitative growth of research in science teaching imposes the need for a periodic mapping of the scientific production on the subject, with a view to identifying its characteristics and tendencies. In this context, "state of the art" studies, given their survey character, constitute a mode of inquiry that allows us to outline historical scenarios for a given area (or subarea) of knowledge. In this light, this work aims to outline the panorama of Brazilian academic production, represented by dissertations and theses on evolution teaching, between 1990 and 2010. The documents selected for analysis were retrieved from three online databases, and their selection was based on the reading of titles, abstracts and keywords with a view to identifying the dissertations and theses that truly approached evolution teaching. The results show a predominance of dissertations relative to theses and a concentration of the academic production on evolution teaching in Brazil's southeast region, especially in São Paulo state. Regarding research trends, we verified the prevalence of investigations related to the previous conceptions of students and teachers (at all teaching levels) and to teacher training.

  19. LEARNING STYLES BASED ADAPTIVE INTELLIGENT TUTORING SYSTEMS: DOCUMENT ANALYSIS OF ARTICLES PUBLISHED BETWEEN 2001. AND 2016.

    Directory of Open Access Journals (Sweden)

    Amit Kumar

    2017-12-01

    Implementing instructional interventions to accommodate learner differences has received considerable attention. Among these individual difference factors, the empirical evidence regarding the pedagogical benefit of learning styles has been questioned, yet research on the issue continues. Recent developments in web-based implementations have led researchers to re-examine learning styles in adaptive tutoring systems. Adaptivity in intelligent tutoring systems is strongly influenced by the learning style of the learner. This study involved an extensive document analysis of adaptive tutoring systems based on learning styles. Seventy-eight studies from the literature from 2001 to 2016 were collected and classified under selected parameters such as main focus, purpose, research type, methods, types and levels of participants, field/area of application, learner modelling, data-gathering tools used, and research findings. The analysis reveals that the majority of the studies defined a framework or architecture for an adaptive intelligent tutoring system (AITS), while others focused on the impact of AITS on learner satisfaction and academic outcomes. Current trends, gaps in the literature and implications are discussed.

  20. Supporting documents for LLL area 27 (410 area) safety analysis reports, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N. [comp.

    1977-02-01

    The following appendices are common to the LLL Safety Analysis Reports Nevada Test Site and are included here as supporting documents to those reports: Environmental Monitoring Report for the Nevada Test Site and Other Test Areas Used for Underground Nuclear Detonations, U. S. Environmental Protection Agency, Las Vegas, Rept. EMSL-LV-539-4 (1976); Selected Census Information Around the Nevada Test Site, U. S. Environmental Protection Agency, Las Vegas, Rept. NERC-LV-539-8 (1973); W. J. Hannon and H. L. McKague, An Examination of the Geology and Seismology Associated with Area 410 at the Nevada Test Site, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-51830 (1975); K. R. Peterson, Diffusion Climatology for Hypothetical Accidents in Area 410 of the Nevada Test Site, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-52074 (1976); J. R. McDonald, J. E. Minor, and K. C. Mehta, Development of a Design Basis Tornado and Structural Design Criteria for the Nevada Test Site, Nevada, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-13668 (1975); A. E. Stevenson, Impact Tests of Wind-Borne Wooden Missiles, Sandia Laboratories, Tonopah, Rept. SAND 76-0407 (1976); and Hydrology of the 410 Area (Area 27) at the Nevada Test Site.

  1. Study of TCP densification via image analysis

    International Nuclear Information System (INIS)

    Silva, R.C.; Alencastro, F.S.; Oliveira, R.N.; Soares, G.A.

    2011-01-01

    Among ceramic materials that mimic human bone, β-type tri-calcium phosphate (β-TCP) has shown appropriate chemical stability and a superior resorption rate when compared to hydroxyapatite. In order to increase its mechanical strength, the material is sintered under controlled time and temperature conditions to obtain densification without phase change. In the present work, tablets were produced via uniaxial compression and then sintered at 1150°C for 2 h. Analysis via XRD and FTIR showed that the sintered tablets were composed only of β-TCP. SEM images were used for quantification of grain size and volume fraction of pores via digital image analysis. The tablets showed a small pore fraction (between 0.67% and 6.38%) and a homogeneous grain size distribution (∼2 μm). Therefore, the analysis method seems viable for quantifying porosity and grain size. (author)
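
    The two quantities reported here, pore fraction and grain count, can be approximated from a grayscale micrograph with a threshold and connected-component labelling, as sketched below. The fixed threshold is an assumption; a real analysis would tune it or use an automatic method such as Otsu's.

    ```python
    import numpy as np
    from scipy import ndimage

    def pore_fraction_and_grain_count(image, pore_threshold):
        """Dark pixels are taken as pores; bright connected regions
        approximate grains."""
        pores = image < pore_threshold
        pore_fraction = pores.mean()            # area fraction of pores
        _, n_grains = ndimage.label(~pores)     # connected bright regions
        return pore_fraction, n_grains
    ```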

  2. Analysis of Informed Consent Document Utilization in a Minimal-Risk Genetic Study

    Science.gov (United States)

    Desch, Karl; Li, Jun; Kim, Scott; Laventhal, Naomi; Metzger, Kristen; Siemieniak, David; Ginsburg, David

    2012-01-01

    Background: The signed informed consent document certifies that the process of informed consent has taken place and provides research participants with comprehensive information about their role in the study. Despite efforts to optimize the informed consent document, only limited data are available about the actual use of consent documents by participants in biomedical research. Objective: To examine the use of online consent documents in a minimal-risk genetic study. Design: Prospective sibling cohort enrolled as part of a genetic study of hematologic and common human traits. Setting: University of Michigan Campus, Ann Arbor, Michigan. Participants: Volunteer sample of healthy persons with 1 or more eligible siblings aged 14 to 35 years. Enrollment was through targeted e-mail to student lists. A total of 1209 persons completed the study. Measurements: Time taken by participants to review a 2833-word online consent document before indicating consent and identification of a masked hyperlink near the end of the document. Results: The minimum predicted reading time was 566 seconds. The median time to consent was 53 seconds. A total of 23% of participants consented within 10 seconds, and 93% of participants consented in less than the minimum predicted reading time. A total of 2.5% of participants identified the masked hyperlink. Limitation: The online consent process was not observed directly by study investigators, and some participants may have viewed the consent document more than once. Conclusion: Few research participants thoroughly read the consent document before agreeing to participate in this genetic study. These data suggest that current informed consent documents, particularly for low-risk studies, may no longer serve the intended purpose of protecting human participants, and the role of these documents should be reassessed. Primary Funding Source: National Institutes of Health. PMID:21893624

  3. Analysis of renal nuclear medicine images

    International Nuclear Information System (INIS)

    Jose, R.M.J.

    2000-01-01

    Nuclear medicine imaging of the renal system involves producing time-sequential images showing the distribution of a radiopharmaceutical in the renal system. Producing numerical and graphical data from nuclear medicine studies requires defining regions of interest (ROIs) around various organs within the field of view, such as the left kidney, right kidney and bladder. Automating this process has several advantages: a saving of a clinician's time, enhanced objectivity and reproducibility. This thesis describes the design, implementation and assessment of an automatic ROI generation system. The performance of the system described in this work is assessed by comparing the results to those obtained using manual techniques. Since nuclear medicine images are inherently noisy, the sequence of images is reconstructed using the first few components of a principal components analysis in order to reduce the noise in the images. An image of the summed reconstructed sequence is then formed. This summed image is segmented by using an edge co-occurrence matrix as a feature space for simultaneously classifying regions and locating boundaries. Two methods for assigning the regions of a segmented image to organ class labels are assessed. The first method is based on using Dempster-Shafer theory to combine uncertain evidence from several sources into a single evidence; the second method makes use of a neural network classifier. The use of each technique in classifying the regions of a segmented image is assessed in separate experiments using 40 real patient studies. A comparative assessment of the two techniques shows that the neural network produces more accurate region labels for the kidneys. The optimum neural system is determined experimentally. Results indicate that combining temporal and spatial information with a priori clinical knowledge produces reasonable ROIs. Consistency in the neural network assignment of regions is enhanced by taking account of the contextual
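
    The PCA-based noise reduction step described above can be sketched with a truncated SVD: flatten the sequence to a (time, pixels) matrix and rebuild it from its first few principal components. This is a generic sketch of the idea, not the thesis' exact procedure.

    ```python
    import numpy as np

    def pca_denoise_sequence(frames, n_components=3):
        """Reconstruct a noisy image sequence from its leading principal
        components; frames has shape (time, height, width)."""
        t, h, w = frames.shape
        X = frames.reshape(t, h * w).astype(float)
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        return Xr.reshape(t, h, w)
    ```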

  4. Development of SRC-I product analysis. Volume 3. Documentation of procedures

    Energy Technology Data Exchange (ETDEWEB)

    Schweighardt, F.K.; Kingsley, I.S.; Cooper, F.E.; Kamzelski, A.Z.; Parees, D.M.

    1983-09-01

    This section documents the BASIC computer program written to simulate Wilsonville's GC-simulated distillation (GCSD) results at APCI-CRSD Trexlertown. The GC conditions used at APCI for the Wilsonville GCSD analysis of coal-derived liquid samples were described in the SRC-I Quarterly Technical Report, April-June 1981. The approach used to simulate the Wilsonville GCSD results is also from an SRC-I Quarterly Technical Report and is reproduced in Appendix VII-A. The BASIC computer program is described in the attached Appendix VII-B. Analysis of gases produced during coal liquefaction generates key information needed to determine product yields for material balance and process control. Gas samples from the coal process development unit (CPDU) and tubing bombs are the primary samples analyzed. A Carle gas chromatographic system was used to analyze coal liquefaction gas samples. A BASIC computer program was written to convert the gas chromatographic peak areas into mole percent results. ICRC has employed several analytical workup procedures to determine the amount of distillate, oils, asphaltenes, preasphaltenes, and residue in SRC-I process streams. The ASE procedure was developed using Conoco's liquid column fractionation (LC/F) method as a model. In developing the ASE procedure, ICRC was able to eliminate distillation, and therefore quantify the oils fraction in one extraction step. ASE results were shown to be reproducible within ±2 wt % and to yield acceptable material balances. Finally, the ASE method proved to be the least affected by sample composition.

  5. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.
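
    Cell segmentation, named here as the major bottleneck, can be illustrated with a deliberately naive pipeline: a global intensity threshold, connected-component labelling, and removal of small specks. The threshold and size values are assumptions; production tools use far more robust methods (Otsu thresholding, watershed, or learned models).

    ```python
    import numpy as np
    from scipy import ndimage

    def segment_cells(image, min_size=50):
        """Return a label image of putative cells and their count."""
        mask = image > image.mean() + image.std()     # naive global threshold
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.flatnonzero(sizes >= min_size) + 1  # labels to keep
        clean = np.where(np.isin(labels, keep), labels, 0)
        return clean, len(keep)
    ```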

  6. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  7. An image analyzer system for the analysis of nuclear traces

    International Nuclear Information System (INIS)

    Cuapio O, A.

    1990-10-01

    Within the project on nuclear traces and their application techniques, methods are being developed for the detection of nuclear reactions with low cross sections (not detectable by conventional methods), for the study of accidental and personnel neutron dosemeters, and for other purposes. All these studies are based on the fact that charged particles leave latent traces in dielectrics which, if etched with appropriate chemical solutions, are revealed and become visible under the optical microscope. From the analysis of the different trace shapes, it is possible to obtain information on the characteristic parameters of the incident particles (charge, mass and energy). From the trace density it is possible to obtain information on the flux of the incident radiation and consequently on the received dose. To carry out this analysis, different systems have been designed and coupled, which has allowed the solution of diverse problems. Nevertheless, it has been found that to make this activity more versatile it is necessary to have an image analyzer system that allows the images to be digitized, processed and displayed more rapidly. The present document presents a proposal to acquire the necessary components for assembling an image analyzer system in support of the mentioned project. (Author)

  8. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video and stored image sequences; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
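
    The multipoint tracking step can be sketched as greedy nearest-neighbour linking of per-frame centroids into trajectories. The data layout and the max_jump threshold are assumptions for illustration, not the workstation's actual algorithm.

    ```python
    import numpy as np

    def link_centroids(tracks, detections, max_jump=20.0):
        """Extend each track with the nearest unclaimed detection within
        max_jump pixels; unmatched detections start new tracks."""
        used = set()
        for track in tracks:
            last = np.asarray(track[-1])
            dists = [np.linalg.norm(last - np.asarray(d)) for d in detections]
            for j in np.argsort(dists):
                if j not in used and dists[j] <= max_jump:
                    track.append(tuple(detections[j]))
                    used.add(j)
                    break
        tracks += [[tuple(d)] for j, d in enumerate(detections) if j not in used]
        return tracks
    ```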

  9. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. This effort is reflected by the inclusion of introductory chapters that address the basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  10. Promoting Entrepreneurship in Higher Education: Analysis of European Union Documents and Lithuanian Case Study

    Directory of Open Access Journals (Sweden)

    Viktorija Stokaitė

    2013-01-01

    The Chairman of the European Commission, J.M. Barroso, indicates the creation of an innovative, stable and integrated economy as the main "Europe 2020" strategic target for the coming ten years. Promoting communication and synergy between higher education and business is a priority target for the EU and its member countries in order to keep increasing employment, productivity and social cohesion. Research on entrepreneurship and its stimulation in higher education (through higher education and business collaboration) is not deep enough, although the multiple phenomena of entrepreneurship have been analysed from many perspectives. EU investment in youth is planned to rise considerably compared with other main parts of the budget in 2014-2020. The analysis of European Union documents and the Lithuanian case study were chosen purposely, given the added value created by entrepreneurship for European development. Self-realization, creativity, initiative, motivation, risk-taking, and planning and reaching personal goals are the main components of entrepreneurship. The development of these skills in higher education is becoming very important because "the advantage of competitiveness is determined by a country's social education, therefore the effective usage of human resources is the most important part of seeking to increase stable economic and social well-being." A review of EU and Lithuanian national documents and of the scientific literature on entrepreneurship in higher education identifies the current position of entrepreneurship in education. Based on the analysis of the documents and the scientific literature review, the level at which entrepreneurship is valued was identified, and its promotion in the EU and Lithuania was critically evaluated. In October 2011 the committee of the EU created a new work

  11. Longitudinal analysis on utilization of medical document management system in a hospital with EPR implementation.

    Science.gov (United States)

    Kuwata, Shigeki; Yamada, Hitomi; Park, Keunsik

    2011-01-01

    Document management systems (DMS) have become widespread in major hospitals in Japan as a platform to digitize the paper-based records not covered by the EPR. This study aimed to examine longitudinal trends in the actual use of a DMS in a hospital in which an EPR was in operation, which would be conducive to planning the hospital's future information management systems. Degrees of utilization of electronic documents and templates within the DMS were analyzed based on data extracted from a university-affiliated hospital with an EPR. As a result, it was found that the number of electronic documents as well as scanned documents circulating in the hospital tended to increase, indicating that replacement of paper-based documents with electronic documents did not occur. It was therefore anticipated that the need for the DMS would continue to grow. The methods used in this study to analyze trends in DMS utilization would be applicable to other hospitals with a variety of DMS implementations, such as electronic storage of scanned documents or paper preservation compatible with the EPR.

  12. Application of 3D documentation and geometric reconstruction methods in traffic accident analysis: with high resolution surface scanning, radiological MSCT/MRI scanning and real data based animation.

    Science.gov (United States)

    Buck, Ursula; Naether, Silvio; Braun, Marcel; Bolliger, Stephan; Friederich, Hans; Jackowski, Christian; Aghayev, Emin; Christe, Andreas; Vock, Peter; Dirnhofer, Richard; Thali, Michael J

    2007-07-20

    The examination of traffic accidents is daily routine in forensic medicine. An important question in the analysis of the victims of traffic accidents, for example in collisions between motor vehicles and pedestrians or cyclists, is the situation of the impact. Apart from forensic medical examinations (external examination and autopsy), three-dimensional technologies and methods are gaining importance in forensic investigations. Besides the post-mortem multi-slice computed tomography (MSCT) and magnetic resonance imaging (MRI) for the documentation and analysis of internal findings, highly precise 3D surface scanning is employed for the documentation of the external body findings and of injury-inflicting instruments. The correlation of injuries of the body to the injury-inflicting object and the accident mechanism are of great importance. The applied methods include documentation of the external and internal body and the involved vehicles and inflicting tools as well as the analysis of the acquired data. The body surface and the accident vehicles with their damages were digitized by 3D surface scanning. For the internal findings of the body, post-mortem MSCT and MRI were used. The analysis included the processing of the obtained data to 3D models, determination of the driving direction of the vehicle, correlation of injuries to the vehicle damages, geometric determination of the impact situation and evaluation of further findings of the accident. In the following article, the benefits of the 3D documentation and computer-assisted, drawn-to-scale 3D comparisons of the relevant injuries with the damages to the vehicle in the analysis of the course of accidents, especially with regard to the impact situation, are shown on two examined cases.

  13. Multimodal Imaging Brain Connectivity Analysis (MIBCA toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing time wasted in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software packages such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19-73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, as well as 10 subjects with 18F-altanserin PET data. Results. It was observed both a high inter
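
    Two of the elementary graph-theory metrics such toolboxes compute, node degree and connection density, reduce to a few array operations once a connectivity matrix is available. MIBCA itself is MATLAB-based; the sketch below is an illustration in Python with an assumed binarization threshold, not MIBCA's code.

    ```python
    import numpy as np

    def basic_graph_metrics(C, threshold=0.2):
        """Binarize a symmetric connectivity matrix and return node
        degrees and the overall connection density."""
        A = (np.abs(C) > threshold).astype(int)
        np.fill_diagonal(A, 0)                    # ignore self-connections
        degree = A.sum(axis=1)
        n = A.shape[0]
        density = A.sum() / (n * (n - 1))         # fraction of possible edges
        return degree, density
    ```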

  14. Three Years of Unmediated Document Delivery: An Analysis and Consideration of Collection Development Priorities.

    Science.gov (United States)

    Chan, Emily K; Mune, Christina; Wang, YiPing; Kendall, Susan L

    2016-01-01

    Like most academic libraries, San José State University Library is struggling to meet users' rising expectations for immediate information within the financial confines of a flat budget. To address acquisition of nonsubscribed article content, particularly outside of business hours, San José State University Library implemented Copyright Clearance Center's Get It Now, a document delivery service. Three academic years of analyzed data, which involves more than 10,000 requests, and the subsequent collection development actions taken by the library will be discussed. The value and challenges of patron-driven, unmediated document delivery services in conjunction with traditional document delivery services will be considered.

  15. LESSONS LEARNED IN DEVELOPMENT OF THE HANFORD SWOC MASTER DOCUMENTED SAFETY ANALYSIS (MDSA) and IMPLEMENTATION VALIDATION REVIEW (IVR)

    International Nuclear Information System (INIS)

    MORENO, M.R.

    2004-01-01

    DOE set clear expectations on a cost-effective approach for achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule), which ensured long-term benefit to Hanford, via issuance of a nuclear safety strategy in February 2003. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development with the goal of a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was approved to standardize methodologies for the development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was approved for the evaluation of radiological consequences for accident scenarios often postulated at Hanford. Standard safety management program chapters were approved for use as a means of compliance with the programmatic chapters of DOE-STD-3009, "Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports". An in-process review was developed between DOE and the Contractor to facilitate DOE approval and provide early course correction. The new Documented Safety Analysis (DSA) developed to address the operations of four facilities within the Solid Waste Operations Complex (SWOC) necessitated development of an Implementation Validation Review (IVR) process. The IVR process encompasses the following objectives: safety basis controls and requirements are adequately incorporated into appropriate facility documents and work instructions; facility personnel are knowledgeable of controls and requirements; and the DSA/TSR controls have been implemented. Based on DOE direction and safety analysis tools, four waste management nuclear facilities were integrated into one safety basis document. With successful completion of the implementation of this safety document, lessons learned from the in-process review, safety analysis tools and IVR process were documented for future action

  16. Seismic analysis of the Nuclear Fuel Service Reprocessing Plant at West Valley, New York: documentation

    International Nuclear Information System (INIS)

    Murray, R.C.; Nelson, T.A.; Davito, A.M.

    1977-01-01

    This material was generated as part of a seismic case review of the NFS Reprocessing Plant. This study is documented in UCRL-52266. The material is divided into two parts: mathematical model information, and ultimate load calculations and comparisons

  17. Semiautomatic digital imaging system for cytogenetic analysis

    International Nuclear Information System (INIS)

    Chaubey, R.C.; Chauhan, P.C.; Bannur, S.V.; Kulgod, S.V.; Chadda, V.K.; Nigam, R.K.

    1999-08-01

    The paper describes a digital image processing system, developed indigenously at BARC, for the size measurement of microscopic biological objects such as cells, nuclei and micronuclei in mouse bone marrow and cytochalasin-B-blocked human lymphocytes in vitro, and for the numerical counting and karyotyping of metaphase chromosomes of human lymphocytes. Errors in the karyotyping of chromosomes by the imaging system may creep in due to the lack of a well-defined centromere position or extensive bending of chromosomes, which may result from poor preparation quality. Good metaphase preparations are mandatory for precise and accurate analysis by the system. Additional new morphological parameters for each chromosome will have to be incorporated to improve the accuracy of karyotyping. Though the experienced cytogeneticist is the final judge, the system assists him/her in carrying out the analysis much faster compared to manual scoring. Further experimental studies are in progress to validate the different software packages developed for various cytogenetic applications. (author)

  18. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine the lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal fluorescence microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested on GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-dipalmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content between the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  19. A methodology for developing high-integrity knowledge base using document analysis and ECPN matrix analysis with backward simulation

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1999-02-01

    When transitions occur in large systems such as nuclear power plants (NPPs) or industrial process plants, it is often difficult to diagnose them. Various computer-based operator-aiding systems have been developed in order to help operators diagnose plant transitions. In procedures for developing knowledge base systems such as operator-aiding systems, knowledge acquisition and knowledge base verification are the core activities. This dissertation describes a knowledge acquisition method and a knowledge base verification method for developing high-integrity knowledge bases for NPP expert systems. Knowledge acquisition is one of the most difficult and time-consuming activities in developing knowledge base systems. There are two kinds of knowledge acquisition methods with respect to knowledge sources. One is acquisition from human experts. This method, however, is not adequate for acquiring the knowledge of NPP expert systems because the number of experts is not sufficient. In this work, we propose a novel knowledge acquisition method based on document analysis. Through this method the knowledge base can be built correctly, rapidly, and partially automatically. The method is especially useful when it is difficult to find domain experts. The reliability of knowledge base systems depends on the quality of their knowledge base. Petri Nets have been used to verify knowledge bases owing to their formal outputs. Methods using Petri Nets, however, are difficult to apply to large and complex knowledge bases because the net becomes very large and complex. Also, with Petri Nets it is difficult to find the input patterns that make anomalies occur. In order to overcome this difficulty, anomaly candidate detection methods are developed in this work based on Extended CPN (ECPN) matrix analysis. This work also defines the backward simulation of CPN to find compact input patterns for anomaly detection, which starts simulation from the anomaly candidates
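
    The matrix view of a Petri net that underlies ECPN analysis can be sketched with the standard state equation m' = m + C·u, where C is the place-by-transition incidence matrix and u a firing-count vector; backward simulation then amounts to searching for firing patterns that lead into an anomaly marking. The two-transition net below is a made-up fragment for illustration, not the dissertation's ECPN.

    ```python
    import numpy as np

    # Incidence matrix: rows are places p1..p3, columns transitions t1, t2.
    C = np.array([[-1,  0],   # p1: consumed by t1
                  [ 1, -1],   # p2: produced by t1, consumed by t2
                  [ 0,  1]])  # p3: produced by t2

    def fire(marking, u):
        """Apply the state equation; reject firings that would need
        tokens that are not there (a simplified enabledness check)."""
        new = marking + C @ u
        if (new < 0).any():
            raise ValueError("not enabled from this marking")
        return new

    m0 = np.array([1, 0, 0])
    m1 = fire(m0, np.array([1, 0]))   # t1: token moves p1 -> p2
    m2 = fire(m1, np.array([0, 1]))   # t2: token moves p2 -> p3
    ```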

  20. Allocation of home care services by municipalities in Norway: a document analysis.

    Science.gov (United States)

    Holm, Solrun G; Mathisen, Terje A; Sæterstrand, Torill M; Brinchmann, Berit S

    2017-09-22

    In Norway, elder care is primarily a municipal responsibility. Municipal health services strive to offer the 'lowest level of effective care,' and home healthcare services are defined as the lowest level of care in Norway. Municipalities determine the type(s) of service and the amount of care applicants require. The services granted are outlined in an individual decision letter, which serves as a contract between the municipality and the home healthcare recipient. The purpose of this study was to gain insight into the scope and duration of home healthcare services allocated by municipalities and to determine where home care recipients live in relation to home healthcare service offices. A document analysis was performed on data derived from 833 letters to individuals allocated home care services in two municipalities in Northern Norway (Municipality A = 500 recipients, Municipality B = 333 recipients). In Municipality A, 74% of service hours were allotted to home health nursing, 12% to practical assistance, and 14% to support contact; in Municipality B, the distribution was 73%, 19%, and 8%, respectively. Both municipalities allocated home health services with no service end date (41% and 85% of the total services, respectively). Among recipients of "expired" services, 25% in Municipality A and 7% in Municipality B continued to receive assistance. Our findings reveal that the municipalities adhered to the goal for home care recipients to remain at home as long as possible before moving into a nursing home. The findings also indicate that the system for allocating home healthcare services may not be fair, as the municipalities lacked procedures for revising individual decisions. Our findings indicate that local authorities should closely examine how they design individual decisions and increase their awareness of how long a service should be provided.

  1. The Intersectoral Collaboration Document for Cancer Risk Factors Reduction: Method and Stakeholder Analysis

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2016-03-01

    Full Text Available Background and Objective: Cancers are one of the most important public health issues and the third leading cause of mortality, after cardiovascular diseases and injuries, in Iran. The most common cancers reported in recent years have been cancers of the skin, stomach, breast, colon, bladder, blood (leukemia), and esophagus. Control of cancer, as one of the three main health system priorities of Iran, needs a specific roadmap and a clear task definition for the organizations involved. This study provides a stakeholder analysis, determining the role of the Ministry of Health and Medical Education as the custodian of national health and the duties of the other beneficiary organizations in reducing cancer risk, so that they can cooperate with a scientific approach and a systematic methodology. Materials and Methods: This health system research project was performed in 2013 with the participation of the Social Determinants of Health Research Center of Shahid Beheshti University of Medical Sciences, the Office of Non-Communicable Diseases of the Ministry of Health and Medical Education, and other stakeholders. First, a strategic committee was established and the stakeholders were identified and analyzed. Quantitative data on the incidence, prevalence, and burden of all types of cancers were then collected by searching national databases. Finally, with a qualitative approach, a systematic review of studies, documents and reports was conducted, alongside examination of the national strategic plans of Iran and other countries and of the experts' views regarding management of cancer risk factors. In practice, the role and responsibilities of each stakeholder were analyzed; the risk factors were identified and effective evidence-based interventions were determined for each cancer; and finally the role of the Ministry of Health was set as either responsible or co-worker, while the role of each of the other organizations was separately clarified in each

  2. Image Analysis for Nail-fold Capillaroscopy

    OpenAIRE

    Vucic, Vladimir

    2015-01-01

    Detection of diseases at an early stage is very important since it can make the treatment of patients easier, safer and more efficient. For the detection of rheumatic diseases, and even the prediction of tendencies towards such diseases, capillaroscopy is becoming an increasingly recognized method. Nail-fold capillaroscopy is a non-invasive imaging technique that is used for the analysis of microcirculation abnormalities that may indicate diseases like systemic sclerosis, Raynaud's phenomenon and others. ...

  3. Computerized analysis of brain perfusion parameter images

    International Nuclear Information System (INIS)

    Turowski, B.; Haenggi, D.; Wittsack, H.J.; Beck, A.; Aurich, V.

    2007-01-01

    Purpose: The development of a computerized method which allows a direct quantitative comparison of perfusion parameters. The display should allow a clear, direct comparison of brain perfusion parameters in different vascular territories and over the course of time. The analysis is intended to be the basis for further evaluation of cerebral vasospasm after subarachnoid hemorrhage (SAH). The method should permit early diagnosis of cerebral vasospasm. Materials and Methods: The Angiotux 2D-ECCET software was developed in close cooperation between computer scientists and clinicians. Starting from parameter images of brain perfusion, the cortex was marked, segmented and assigned to definite vascular territories. The underlying values were averaged for each segment and displayed in a graph. If a follow-up was available, the mean values of the perfusion parameters were displayed as a function of time. The method was developed with CT perfusion values in mind but is applicable to other methods of perfusion imaging. Results: Computerized analysis of brain perfusion parameter images allows an immediate comparison of these parameters and follow-up of mean values in a clear and concise manner. Values are related to definite vascular territories. The tabular output facilitates further statistical evaluation. The computerized analysis is precisely reproducible, i.e., repetitions result in exactly the same output. (orig.)
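
    As a rough sketch of the segment-averaging step described above, the snippet below computes the mean of a perfusion parameter over each labeled vascular territory; the arrays and label scheme are assumptions for illustration, not the Angiotux software.

        import numpy as np

        # Hypothetical inputs: a perfusion parameter map (e.g., CBF) and a
        # label image assigning each cortical pixel to a territory (0 = none).
        cbf = np.random.rand(256, 256) * 80.0
        territories = np.random.randint(0, 4, size=(256, 256))

        # Mean parameter value per vascular territory, tabulated for output.
        for label, name in [(1, "ACA"), (2, "MCA"), (3, "PCA")]:
            values = cbf[territories == label]
            print(f"{name}: mean = {values.mean():.1f}, n = {values.size}")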

  4. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  5. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally, the analysis of an image has to be made visually and the measurement is realized manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for the analysis of image texture and periodic structure through the application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and quantitative analysis of microscopic images.
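
    A small illustration of the Fourier-based texture analysis mentioned here, locating the dominant peak of a 2D power spectrum to estimate the orientation and period of a striped structure, might look like this (synthetic image; not the paper's software):

        import numpy as np

        # Synthetic periodic texture: stripes at 30 degrees, period ~8 px.
        h = w = 256
        y, x = np.mgrid[0:h, 0:w]
        theta = np.deg2rad(30)
        img = np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 8.0)

        # Power spectrum with the DC term suppressed.
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = h // 2, w // 2
        spec[cy, cx] = 0.0

        # The dominant frequency peak gives orientation and spatial period.
        py, px = np.unravel_index(np.argmax(spec), spec.shape)
        fy, fx = (py - cy) / h, (px - cx) / w
        orientation = np.degrees(np.arctan2(fy, fx)) % 180
        period = 1.0 / np.hypot(fy, fx)
        print(f"orientation ~ {orientation:.1f} deg, period ~ {period:.1f} px")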

  6. Clinical decision support improves quality of telephone triage documentation--an analysis of triage documentation before and after computerized clinical decision support.

    Science.gov (United States)

    North, Frederick; Richards, Debra D; Bremseth, Kimberly A; Lee, Mary R; Cox, Debra L; Varkey, Prathibha; Stroebel, Robert J

    2014-03-20

    Clinical decision support (CDS) has been shown to be effective in improving medical safety and quality, but there is little information on how telephone triage benefits from CDS. The aim of our study was to compare triage documentation quality associated with the use of a clinical decision support tool, ExpertRN©. We examined 50 triage documents before and after a CDS tool was used in nursing triage. To control for the effects of CDS training, we had an additional control group of triage documents created by nurses who were trained in the CDS tool but who did not use it in selected notes. The CDS intervention cohort of triage notes was compared to both the pre-CDS notes and the CDS-trained (but not using CDS) cohort. Cohorts were compared using the documentation standards of the American Academy of Ambulatory Care Nursing (AAACN). We also compared triage note content (documentation of associated positive and negative features relating to the symptoms, self-care instructions, and warning signs to watch for), and documentation defects pertinent to triage safety. Three of five AAACN documentation standards were significantly improved with CDS. There was a mean of 36.7 symptom features documented in triage notes for the CDS group, but only 10.7 symptom features in the pre-CDS cohort (p < 0.0001) and 10.2 for the cohort that was CDS-trained but not using CDS (p < 0.0001). The difference between the mean of 10.7 symptom features documented in the pre-CDS cohort and the mean of 10.2 documented in the CDS-trained but not using CDS cohort was not statistically significant (p = 0.68). CDS significantly improves triage note documentation quality. CDS-aided triage notes had significantly more information about symptoms, warning signs and self-care. The changes in triage documentation appeared to be the result of the CDS alone and not due to any CDS training that came with the CDS intervention. Although this study shows that CDS can improve documentation, further study is needed

  7. Automatic dirt trail analysis in dermoscopy images.

    Science.gov (United States)

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved an area of 0.902 under the receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of discovering them earlier with instrumentation. © 2011 John Wiley & Sons A/S.
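
    The leave-one-out evaluation reported here can be reproduced in outline as follows; the feature matrix and classifier are placeholders (scikit-learn's MLP on random data, not the authors' network or features):

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        # Placeholder data: rows = lesion images, columns = dirt-trail
        # features (size, distribution, color); y: 1 = BCC, 0 = benign.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(114, 12))
        y = np.r_[np.ones(35), np.zeros(79)].astype(int)

        # Leave-one-out: train on all but one image, score the held-out one.
        scores = np.empty(len(y))
        for train, test in LeaveOneOut().split(X):
            clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                random_state=0)
            clf.fit(X[train], y[train])
            scores[test] = clf.predict_proba(X[test])[:, 1]

        print("leave-one-out AUC:", roc_auc_score(y, scores))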

  8. Working to make an image: an analysis of three Philip Morris corporate image media campaigns.

    Science.gov (United States)

    Szczypka, Glen; Wakefield, Melanie A; Emery, Sherry; Terry-McElrath, Yvonne M; Flay, Brian R; Chaloupka, Frank J

    2007-10-01

    To describe the nature and timing of, and population exposure to, Philip Morris USA's three explicit corporate image television advertising campaigns, and to explore the motivations behind each campaign. Analysis of television ratings from the 75 largest media markets in the United States, which measure the reach and frequency of population exposure to advertising; copies of all televised commercials produced by Philip Morris; and tobacco industry documents, which provide insights into the specific goals of each campaign. Household exposure to the "Working to Make a Difference: the People of Philip Morris" campaign averaged 5.37 ads/month for 27 months from 1999-2001; the "Tobacco Settlement" campaign averaged 10.05 ads/month for three months in 2000; and "PMUSA" averaged 3.11 ads/month for the last six months of 2003. The percentage of advertising exposure purchased in news programming in order to reach opinion leaders increased over the three campaigns: 20%, 39% and 60%, respectively. These public relations campaigns were designed to counter negative images, increase brand recognition, and improve the financial viability of the company. Only one early media campaign focused on issues other than tobacco, whereas subsequent campaigns were specifically concerned with tobacco issues and more targeted to opinion leaders. The size and timing of the advertising buys appeared to be strategically crafted to maximise advertising exposure for these population subgroups during critical threats to Philip Morris's public image.

  9. Integration of scanned document management with the anatomic pathology laboratory information system: analysis of benefits.

    Science.gov (United States)

    Schmidt, Rodney A; Simmons, Kim; Grimm, Erin E; Middlebrooks, Michael; Changchien, Rosy

    2006-11-01

    Electronic document management systems (EDMSs) have the potential to improve the efficiency of anatomic pathology laboratories. We implemented a novel but simple EDMS for scanned documents as part of our laboratory information system (AP-LIS) and collected cost-benefit data with the intention of discerning the value of such a system in general and whether integration with the AP-LIS is advantageous. We found that the direct financial benefits are modest but the indirect and intangible benefits are large. Benefits of time savings and access to data particularly accrued to pathologists and residents (3.8 h/d saved for 26 pathologists and residents). Integrating the scanned document management system (SDMS) into the AP-LIS has major advantages in terms of workflow and overall simplicity. This simple, integrated SDMS is an excellent value in a practice like ours, and many of the benefits likely apply in other practice settings.

  10. Narrative review: the promotion of gabapentin: an analysis of internal industry documents.

    Science.gov (United States)

    Steinman, Michael A; Bero, Lisa A; Chren, Mary-Margaret; Landefeld, C Seth

    2006-08-15

    Internal documents from the pharmaceutical industry provide a unique window for understanding the structure and methods of pharmaceutical promotion. Such documents have become available through litigation concerning the promotion of gabapentin (Neurontin, Pfizer, Inc., New York, New York) for off-label uses. To describe how gabapentin was promoted, focusing on the use of medical education, research, and publication. Court documents available to the public from United States ex. rel David Franklin vs. Pfizer, Inc., and Parke-Davis, Division of Warner-Lambert Company, mostly from 1994-1998. All documents were reviewed by 1 author, with selected review by coauthors. Marketing strategies and tactics were identified by using an iterative process of review, discussion, and re-review of selected documents. The promotion of gabapentin was a comprehensive and multifaceted process. Advisory boards, consultants meetings, and accredited continuing medical education events organized by third-party vendors were used to deliver promotional messages. These tactics were augmented by the recruitment of local champions and engagement of thought leaders, who could be used to communicate favorable messages about gabapentin to their physician colleagues. Research and scholarship were also used for marketing by encouraging "key customers" to participate in research, using a large study to advance promotional themes and build market share, paying medical communication companies to develop and publish articles about gabapentin for the medical literature, and planning to suppress unfavorable study results. Most available documents were submitted by the plaintiff and may not represent a complete picture of marketing practices. Activities traditionally considered independent of promotional intent, including continuing medical education and research, were extensively used to promote gabapentin. New strategies are needed to ensure a clear separation between scientific and commercial activity.

  11. Ethics, Power, Internationalisation and the Postcolonial: A Foucauldian Discourse Analysis of Policy Documents in Two Scottish Universities

    Science.gov (United States)

    Guion Akdag, Emma; Swanson, Dalene M.

    2018-01-01

    This paper provides a critical discussion of internationalisation in Higher Education (HE), and exemplifies a process of uncovering the investments in power and ideology through the partial analysis of four strategic internationalisation documents at two Scottish Higher Education institutions, as part of an ongoing international study into the…

  12. Market Analysis and Consumer Impacts Source Document. Part I. The Motor Vehicle Market in the Late 1970's

    Science.gov (United States)

    1980-12-01

    The source document on motor vehicle market analysis and consumer impact consists of three parts. Part I is an integrated overview of the motor vehicle market in the late 1970's, with sections on the structure of the market, motor vehicle trends, con...

  13. Fernando Pessoa and Aleister Crowley: new discoveries and a new analysis of the documents in the Gerald Yorke Collection

    NARCIS (Netherlands)

    Pasi, M.; Ferrari, P.

    2012-01-01

    The documents concerning the relationship between Fernando Pessoa and Aleister Crowley preserved in the Yorke Collection at the Warburg Institute (London) have been known for some time. However, recent new findings have prompted a new analysis of the dossier. The purpose of this article is to have a

  14. "I Like to Plan Events": A Document Analysis of Essays Written by Applicants to a Public Relations Program

    Science.gov (United States)

    Taylor, Ronald E.

    2016-01-01

    A document analysis of 249 essays written during a 5-year period by applicants to a public relations program at a major state university in the southeast suggests that there are enduring reasons why students choose to major in public relations. Public relations is described as a major that allows for and encourages creative expression and that…

  15. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  16. What makes papers visible on social media? An analysis of various document characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Zahedi, Z.; Costas, R.; Lariviere, V.; Haustein, S.

    2016-07-01

    In this study we investigated the relationship between different document characteristics and the number of Mendeley readership counts, tweets, Facebook posts, and mentions in blogs and mainstream media for 1.3 million papers published in journals covered by the Web of Science (WoS). The study aims to demonstrate how the factors affecting various social media-based indicators differ from those influencing citations, and which document types are more popular across different platforms. Our results highlight the heterogeneous nature of altmetrics, which encompass different types of uses and user groups engaging with research on social media. (Author)

  17. [Imaging Mass Spectrometry in Histopathologic Analysis].

    Science.gov (United States)

    Yamazaki, Fumiyoshi; Seto, Mitsutoshi

    2015-04-01

    Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS can identify a target molecule. In addition, IMS enables global analysis of biomolecules, including unknown ones, by detecting mass-to-charge ratios without any predefined target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in sample preparation methods. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development.

  18. Representation of Social History Factors Across Age Groups: A Topic Analysis of Free-Text Social Documentation.

    Science.gov (United States)

    Lindemann, Elizabeth A; Chen, Elizabeth S; Wang, Yan; Skube, Steven J; Melton, Genevieve B

    2017-01-01

    As individuals age, there is potential for dramatic changes in the social and behavioral determinants that affect health status and outcomes. The importance of these determinants has been increasingly recognized in clinical decision-making. We sought to characterize how social and behavioral health determinants vary in different demographic groups using a previously established schema of 28 social history types through both manual analysis and automated topic analysis of social documentation in the electronic health record across the population of an entire integrated healthcare system. Our manual analysis generated 8,335 annotations over 1,400 documents, representing 24 (86%) social history types. In contrast, automated topic analysis generated 22 (79%) social history types. A comparative evaluation demonstrated both similarities and differences in coverage between the manual and topic analyses. Our findings validate the widespread nature of social and behavioral determinants that affect health status over populations of individuals over their lifespan.
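
    Although the record does not specify the topic-modeling algorithm used, an automated topic analysis of free-text social documentation along these lines could be sketched with scikit-learn's LDA; the corpus and parameters below are invented for illustration.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Tiny illustrative corpus of free-text social-history snippets.
        notes = [
            "patient smokes one pack per day, quit attempts unsuccessful",
            "drinks alcohol socially, denies tobacco use",
            "lives alone, retired, limited family support",
            "works as a teacher, married, two children at home",
        ]

        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(notes)

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        lda.fit(X)

        # Print the top terms per discovered topic.
        terms = vec.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"topic {k}: {', '.join(top)}")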

  19. Machine Learning Interface for Medical Image Analysis.

    Science.gov (United States)

    Zhang, Yi C; Kagen, Alexander C

    2017-10-01

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95 % CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95 % CI 0.947-1.00) and mean specificity was 0.822 ± 0.207 (95 % CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
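
    A minimal modern sketch of the pipeline described, extracting DICOM pixel data into tensors and training a small classifier with Adagrad and a cross-entropy loss, is shown below using pydicom and tf.keras; the file names, image size, and architecture are illustrative assumptions, not the authors' code.

        import numpy as np
        import pydicom
        import tensorflow as tf

        def load_dicom(paths, shape=(64, 64)):
            """Read DICOM pixel intensities and stack them into a float tensor."""
            imgs = []
            for p in paths:
                px = pydicom.dcmread(p).pixel_array.astype("float32")
                px = (px - px.min()) / (np.ptp(px) + 1e-6)   # scale to [0, 1]
                imgs.append(tf.image.resize(px[..., None], shape).numpy())
            return np.stack(imgs)

        # Hypothetical file list and labels: 0 = normal, 1 = Parkinson's.
        paths, labels = ["scan001.dcm", "scan002.dcm"], np.array([0, 1])
        X, y = load_dicom(paths), labels

        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adagrad(0.01),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X, y, epochs=10, verbose=0)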

  20. Integrated system for automated financial document processing

    Science.gov (United States)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
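
    The blackboard coordination described, in which several recognition engines contribute candidate readings of a field until the controller settles on a solution, can be caricatured as follows; all class and engine names are invented for illustration.

        from collections import defaultdict

        class Blackboard:
            """Knowledge sources post candidates; a controller fuses them."""
            def __init__(self):
                self.candidates = defaultdict(list)   # field -> [(value, conf)]

            def post(self, field, value, conf):
                self.candidates[field].append((value, conf))

            def resolve(self, field):
                """Sum confidences per value; return the best-supported one."""
                tally = defaultdict(float)
                for value, conf in self.candidates[field]:
                    tally[value] += conf
                return max(tally.items(), key=lambda kv: kv[1])

        bb = Blackboard()
        # Two hypothetical courtesy-amount recognizers plus a legal-amount
        # engine that corroborates one of the readings.
        bb.post("courtesy_amount", "125.00", 0.80)
        bb.post("courtesy_amount", "126.00", 0.40)
        bb.post("courtesy_amount", "125.00", 0.65)
        print(bb.resolve("courtesy_amount"))   # -> ('125.00', 1.45)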

  1. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    Science.gov (United States)

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as cross-boundary collaborative MI research platform for the rapid

  2. Phase Image Analysis in Conduction Disturbance Patients

    International Nuclear Information System (INIS)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun

    1994-01-01

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 complete left bundle branch block (C-LBBB), 15 complete right bundle branch block (C-RBBB), 13 Wolff-Parkinson-White syndrome (WPW), 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), the full width at half maximum of the phase angle histogram (FWHM), and the range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) The standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle

  3. Phase Image Analysis in Conduction Disturbance Patients

    Energy Technology Data Exchange (ETDEWEB)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun [Chung Nam University Hospital, Daejeon (Korea, Republic of)

    1994-03-15

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 complete left bundle branch block (C-LBBB), 15 complete right bundle branch block (C-RBBB), 13 Wolff-Parkinson-White syndrome (WPW), 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), the full width at half maximum of the phase angle histogram (FWHM), and the range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) The standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle
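
    Phase images of this kind are conventionally computed by fitting the first Fourier harmonic to each pixel's time-activity curve over the cardiac cycle; a bare-bones version on synthetic data (not the study's software) is:

        import numpy as np

        # Synthetic gated blood-pool study: T frames of H x W counts.
        T, H, W = 16, 64, 64
        rng = np.random.default_rng(1)
        t = np.arange(T)
        true_phase = rng.uniform(0, 2 * np.pi, size=(H, W))
        frames = (100 + 20 * np.cos(2 * np.pi * t[:, None, None] / T
                                    - true_phase[None])
                  + rng.normal(0, 2, (T, H, W)))

        # First-harmonic fit per pixel: cosine and sine projections.
        c = (frames * np.cos(2 * np.pi * t / T)[:, None, None]).sum(axis=0)
        s = (frames * np.sin(2 * np.pi * t / T)[:, None, None]).sum(axis=0)
        phase = np.degrees(np.arctan2(s, c)) % 360     # phase image (degrees)
        amplitude = 2.0 / T * np.hypot(s, c)           # amplitude image

        # Statistics such as the SD of phase over a ventricular ROI follow.
        print("phase SD over ROI:", phase[20:40, 20:40].std())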

  4. A Framework for Requirement Elicitation, Analysis, Documentation and Prioritisation under Uncertainty

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Mladenov, V.

    2015-01-01

    This paper offers a pluralistic framework for coping with requirements in the early phases of design, where there is a lack of knowledge about a system, its architecture and its functions. The framework is used to elicit, analyze, document and prioritize the requirements. It embeds a probabilistic approach and

  5. Documenting the biodiversity of the Madrean Archipelago: An analysis of a virtual flora and fauna

    Science.gov (United States)

    Nicholas S. Deyo; Thomas R. Van Devender; Alex Smith; Edward. Gilbert

    2013-01-01

    The Madrean Archipelago Biodiversity Assessment (MABA) of Sky Island Alliance is an ambitious project to document the distributions of all species of animals and plants in the Madrean Archipelago, focusing particularly on northeastern Sonora and northwestern Chihuahua, Mexico. The information is made available through MABA’s online database (madrean.org). The sources...

  6. Spent nuclear fuel project-criteria document Cold Vacuum Drying Facility phase 2 safety analysis report

    International Nuclear Information System (INIS)

    Garvin, L.J.

    1998-01-01

    The criteria document provides the criteria and guidance for developing the SNF CVDF Phase 2 SAR. This SAR will support the US Department of Energy, Richland Operations Office decision to authorize the procurement, installation, and installation acceptance testing of the CVDF systems

  7. ANALYSIS OF TERRESTRIAL LASER SCANNING AND PHOTOGRAMMETRY DATA FOR DOCUMENTATION OF HISTORICAL ARTIFACTS

    Directory of Open Access Journals (Sweden)

    R. A. Kuçak

    2016-10-01

    Full Text Available Historical artifacts that have survived from the past until today are exposed to many kinds of destruction, both natural and man-made. For this reason, studies on the protection and documentation of cultural heritage, to pass it on to the next generations, are accelerating day by day all over the world. The preservation of historical artifacts using advanced 3D measurement technologies is becoming an efficient tool for mapping solutions. There are many methods for the documentation and restoration of historic structures. In addition to traditional methods such as simple hand measurement and tacheometry, terrestrial laser scanning is rapidly becoming one of the most commonly used techniques due to its completeness, accuracy and speed. This study evaluates terrestrial laser scanning (TLS) technology and photogrammetry for documenting the facade data of historical artifacts in a 3D environment. PhotoModeler software, developed by Eos Systems, was chosen for the photogrammetric method. The Leica HDS 6000 laser scanner, developed by Leica Geosystems, and Cyclone, the company's laser-data evaluation software, were chosen for the terrestrial laser scanning method. The results obtained with these software products are intended to contribute to studies on the documentation of cultural heritage.

  8. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis in the Centre. In image processing, one either alters grey level values so as to enhance features in the image, or resorts to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections etc. In short, in image processing both the input and the output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
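
    As an illustration of the template-matching use mentioned above, normalized cross-correlation, a standard registration primitive (not necessarily the Centre's implementation), can be done with scikit-image:

        import numpy as np
        from skimage.feature import match_template

        # Hypothetical data: an image and a small template cut from it.
        rng = np.random.default_rng(0)
        image = rng.random((128, 128))
        template = image[40:56, 70:86]       # 16 x 16 patch taken at (40, 70)

        # Normalized cross-correlation; the peak locates the template.
        ncc = match_template(image, template)
        row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
        print(f"template found at row={row}, col={col}")   # -> 40, 70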

  9. Analysis of clinical records of dental patients attending Jordan University Hospital: Documentation of drug prescriptions and local anesthetic injections

    Directory of Open Access Journals (Sweden)

    Najla Dar-Odeh

    2008-08-01

    Full Text Available Objectives: The aim of this study was to analyze clinical records of dental patients attending the Dental Department at the Jordan University Hospital, a teaching hospital in Jordan. The analysis aimed at determining whether dental specialists properly documented the drug prescriptions and local anesthetic injections given to their patients. Methods: Dental records of the Dental Department at the Jordan University Hospital were reviewed during the period from April 3rd until April 26th 2007, along with the prescriptions issued during that period. Results: A total of 1000 records were reviewed, with a total of 53 prescriptions issued during that period. Thirty records documented the prescription by stating the category of the prescribed drug. Only 13 records stated the generic or trade names of the prescribed drugs; of these, 5 records contained the full elements of a prescription. As for local anesthetic injections, the term "LA used" was found in 22 records, while the names and quantities of the local anesthetics used were documented in only 13 records. Only 5 records documented the full elements of a local anesthetic injection. Conclusion: The essential data of drug prescriptions and local anesthetic injections were poorly documented by the investigated group of dental specialists. It is recommended that the hospital administration and the dental department implement clear and firm guidelines requiring dental practitioners to follow the proper documentation procedures. Keywords: dental records, documentation, prescriptions, local anesthesia

  10. Analysis of image plane's Illumination in Image-forming System

    International Nuclear Information System (INIS)

    Duan Lihua; Zeng Yan'an; Zhang Nanyangsheng; Wang Zhiguo; Yin Shiliang

    2011-01-01

    In the detection of optical radiation, detection accuracy is affected to a large extent by the optical power distribution over the detector's surface. Moreover, in an image-forming system, the quality of the image is largely determined by the uniformity of the illumination distribution on the image plane. In practical optical systems, however, affected by factors such as field of view, stray light and off-axis effects, the distribution of the image's illumination tends to be non-uniform, so it is necessary to analyze the image plane's illumination in image-forming systems. In order to analyze the characteristics of the image-forming system over its full range, formulas for calculating the illumination of the image plane are summarized on the basis of photometry. Moreover, the relationship between the horizontal offset of the light source and the illumination of the image is discussed in detail. After that, the influence of key factors such as the aperture angle, off-axis distance and horizontal offset on the illumination of the image is presented. Through numerical simulation, theoretical curves for these key factors are given. The results of the numerical simulation show that enlarging the diameter of the exit pupil is recommended to increase the illumination of the image. The angle of view plays a negative role in the illumination distribution of the image; that is, the uniformity of the illumination distribution can be enhanced by compressing the angle of view. Lastly, it is shown that a telecentric optical design is an effective way to improve the uniformity of the illumination distribution.
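
    For reference, two standard photometric relations consistent with this discussion (stated here from general optics, not quoted from the paper) are the on-axis image-plane illuminance produced by a Lambertian source of luminance L through a system of transmittance \tau and image-space aperture angle U', and the cos^4 falloff with field angle \theta:

        E_0 = \pi \, \tau \, L \, \sin^2 U'

        E(\theta) = E_0 \, \cos^4 \theta

    The second relation explains why compressing the angle of view, or moving toward a telecentric design, flattens the illumination across the image plane.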

  11. Guidance Document - Provision of Outage Reserve Capacity for Molybdenum-99 Irradiation Services: Methodology and Economic Analysis

    International Nuclear Information System (INIS)

    Peykov, Pavel; Cameron, Ron; Westmacott, Chad

    2013-01-01

    In June 2011, the OECD Nuclear Energy Agency's (NEA) High-level Group on the Security of Supply of Medical Radioisotopes (HLG-MR) released its policy approach for ensuring a long-term secure supply of molybdenum-99 (99Mo) and its decay product technetium-99m (99mTc). This policy approach was developed after two years of extensive examination and analysis of the challenges facing the supply chain and the provision of a reliable, secure supply of these important medical isotopes. The full policy approach can be found in the OECD/NEA report, The Supply of Medical Radioisotopes: The Path to Reliability (NEA, 2011). One of the key principles in the policy approach relates to the provision of outage reserve capacity (ORC) in the 99Mo/99mTc supply chain, as defined on page 7: 'Principle 2: Reserve capacity should be sourced and paid for by the supply chain. A common approach should be used to determine the amount of reserve capacity required'. This principle follows the findings of the OECD/NEA report, The Supply of Medical Radioisotopes: An Economic Study of the Molybdenum-99 Supply Chain (NEA, 2010), which clearly demonstrated the need for excess 99Mo production capacity, relative to demand, as some reactors may have to be shut down unexpectedly or for extended periods. The study also demonstrated that the pricing structure from reactors for 99Mo irradiation services prior to the 2009-10 supply shortage was not economically sustainable, including the pricing of ORC, with the cost being subsidised by host nations. These nations have indicated a move away from subsidising production, which often benefits foreign nations or foreign companies; therefore pricing for irradiation services must recover the full cost of production to ensure economic sustainability and a long-term secure supply. Appropriate pricing would also encourage more efficient use of the product; reducing inefficient use of 99Mo/99mTc would reduce excess production and the associated

  12. Status of High Flux Isotope Reactor (HFIR) post-restart safety analysis and documentation upgrades

    International Nuclear Information System (INIS)

    Cook, D.H.; Radcliff, T.D.; Rothrock, R.B.; Schreiber, R.E.

    1990-01-01

    The High Flux Isotope Reactor (HFIR), an experimental reactor located at the Oak Ridge National Laboratory (ORNL) and operated for the US Department of Energy by Martin Marietta Energy Systems, was shut down in November, 1986 after the discovery of unexpected neutron embrittlement of the reactor vessel. The reactor was restarted in April, 1989, following an extensive review by DOE and ORNL of the HFIR design, safety, operation, maintenance and management, and the implementation of several upgrades to HFIR safety-related hardware, analyses, documents and procedures. This included establishing new operating conditions to provide added margin against pressure vessel failure, as well as the addition, or upgrading, of specific safety-related hardware. This paper summarizes the status of some of the follow-on (post-restart) activities which are currently in progress, and which will result in a comprehensive set of safety analyses and documentation for the HFIR, comparable with current practice in commercial nuclear power plants. 8 refs

  13. Analysis And Comments On The Consultative Document: International Framework For Liquidity Risk Measurement, Standards And Monitoring

    OpenAIRE

    Jacques Prefontaine; Jean Desrochers; Lise Godbout

    2010-01-01

    The market turmoil that began in mid-2007 re-emphasized the importance of liquidity to the functioning of financial markets and the banking sector. In December 2009, the Basel Committee on Banking Supervision (BCBS) of the Bank for International Settlements (BIS) released a consultative document entitled: “International Framework for Liquidity Risk Measurement, Standards and Monitoring”. Interested parties were invited to provide written comments by April 16th 2010. Given our interest in prom...

  14. Electronic document management system analysis report and system plan for the Environmental Restoration Program

    International Nuclear Information System (INIS)

    Frappaolo, C.

    1995-09-01

    Lockheed Martin Energy Systems, Inc. (LMES) has established and maintains Document Management Centers (DMCs) to support Environmental Restoration Program (ER) activities undertaken at three Oak Ridge facilities: Oak Ridge National Laboratory, Oak Ridge K-25 Site, Oak Ridge Y-12 Plant; and two sister sites: Portsmouth Gaseous Diffusion Plant in Portsmouth, Ohio, and Paducah Gaseous Diffusion Plant in Paducah, Kentucky. The role of the DMCs is to receive, store, retrieve, and properly dispose of records. In an effort to make the DMCs run more efficiently and to more proactively manage the records' life cycles from cradle to grave, ER has decided to investigate ways in which Electronic Document Management System (EDMS) technologies can be used to redefine the DMCs and their related processes. Specific goals of this study are tightening control over the ER documents, establishing and enforcing record creation and retention procedures, speeding up access to information, and increasing the accessibility of information. A working pilot of the solution is desired within the next six months. Based on a series of interviews conducted with personnel from each of the DMCs, key management, and individuals representing related projects, it is recommended that ER utilize document management, full-text retrieval, and workflow technologies to improve and automate records management for the ER program. A phased approach to solution implementation is suggested starting with the deployment of an automated storage and retrieval system at Portsmouth. This should be followed with a roll out of the system to the other DMCs, the deployment of a workflow-enabled authoring system at Portsmouth, and a subsequent roll out of this authoring system to the other sites

  15. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K. (and others)

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity to the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to the detection of so-called "pixel lensing" events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic by effectively registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.
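
    In outline, DIA convolves a good-seeing reference image so that its point-spread function matches each target frame and then subtracts, leaving only sources whose flux has changed; the toy version below uses a Gaussian kernel assumed known, whereas real pipelines solve for an optimal convolution kernel.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(2)

        # Reference: a sharp star field. Target: same field in worse seeing,
        # with one star brightened (a microlensing-like variable), plus noise.
        ref = np.zeros((128, 128))
        ys, xs = rng.integers(10, 118, 30), rng.integers(10, 118, 30)
        ref[ys, xs] = rng.uniform(50.0, 200.0, 30)
        ref = gaussian_filter(ref, sigma=1.5)

        target = gaussian_filter(ref, sigma=1.2)   # extra blur = worse seeing
        target[ys[0], xs[0]] += 40.0               # the variable source
        target += rng.normal(0.0, 0.3, target.shape)

        # DIA step: blur the reference to match the target's PSF, subtract.
        diff = target - gaussian_filter(ref, sigma=1.2)

        # Variable objects stand out in the difference image.
        iy, ix = np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
        print(f"variable candidate at ({iy}, {ix}); "
              f"injected at ({ys[0]}, {xs[0]})")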

  16. Use of cartography in historical seismicity analysis: a reliable tool to better apprehend the contextualization of the historical documents

    Science.gov (United States)

    Thibault, Fradet; Grégory, Quenet; Kevin, Manchuel

    2014-05-01

    Historical studies, including historical seismicity analysis, deal with historical documents. Numerous factors, such as culture, social conditions, demography, and political or religious situations and opinions, influence the way events are transcribed in the archives. As a consequence, it is crucial to contextualize and compare the historical documents reporting on a given event in order to reduce the uncertainties affecting their analysis and interpretation. When studying historical seismic events it is often tricky to get a global view of all the information provided by the historical documents. It is also difficult to extract cross-correlated information from the documents and to draw a precise historical context. The use of cartographic and geographic tools in GIS software is the best way to synthesize, interpret and contextualize the historical material. The main goal is to produce the most complete dataset of available information, in order to take into account all the components of the historical context and consequently improve the macroseismic analysis. The Entre-Deux-Mers earthquake (1759, Iepc = VII-VIII) [SISFRANCE 2013 - EDF-IRSN-BRGM] is well documented but has never benefited from a cross-analysis of historical documents and historical context elements. The map of available intensity data from SISFRANCE highlights a gap in macroseismic information within the estimated epicentral area. The aim of this study is to understand the origin of this gap by making a cartographic compilation of both archive information and historical context elements. The results support the hypothesis that the lack of documents and macroseismic data in the epicentral area is related to low human activity rather than to weak seismic effects in this zone. Topographic features, geographical position, flood hazard, the locations of roads and pathways, the distribution of vineyards and the forest coverage, mentioned in the archives and reported on Cassini's map, confirm this

  17. Documents preparation and review

    International Nuclear Information System (INIS)

    1999-01-01

    The Ignalina Safety Analysis Group takes an active role in assisting the regulatory body VATESI to prepare various regulatory documents and in reviewing safety reports and other documentation presented by the Ignalina NPP in the process of licensing unit 1. The list of the main documents prepared and reviewed is presented

  18. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework for an expert image analysis system is proposed on the basis of this study. In this scheme, chromosome classification is carried out under a hypothesize-and-verify paradigm, by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected
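
    The hypothesize-and-verify loop described, in which a statistical classifier proposes a class and karyotyping rules verify or veto it, could be skeletonized as follows; the features, rules and class names are invented for illustration.

        # Sketch of hypothesize-and-verify chromosome classification: a
        # statistical stage ranks candidates; rule checks veto implausible ones.

        def hypothesize(features):
            """Placeholder statistical classifier: candidates with scores."""
            return [("chr1", 0.55), ("chr2", 0.30), ("chr9", 0.15)]

        RULES = {
            # Each rule returns True if the hypothesis is consistent with
            # expert karyotyping knowledge (invented thresholds).
            "chr1": lambda f: f["length"] > 0.8 and f["centromere_index"] > 0.45,
            "chr2": lambda f: f["length"] > 0.7,
            "chr9": lambda f: f["centromere_index"] < 0.40,
        }

        def classify(features):
            for label, score in hypothesize(features):
                if RULES.get(label, lambda f: True)(features):
                    return label, score      # first hypothesis that survives
            return "unclassified", 0.0

        print(classify({"length": 0.75, "centromere_index": 0.35}))
        # chr1 is vetoed (length too small); chr2 accepted -> ('chr2', 0.3)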

  19. The Scientific Image in Behavior Analysis.

    Science.gov (United States)

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  20. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis with the program Adobe Photoshop 7.0 and the Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  1. DOCUMENTING A COMPLEX MODERN HERITAGE BUILDING USING MULTI IMAGE CLOSE RANGE PHOTOGRAMMETRY AND 3D LASER SCANNED POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    M. L. Vianna Baptista

    2013-07-01

    Full Text Available Integrating different technologies and expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce time on fieldwork, the two-person field-survey team had to juggle, during three days, the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal to record the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned with high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefits for the subsequent restoration project.

  2. Application of automatic image analysis in wood science

    Science.gov (United States)

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  3. Brain-inspired algorithms for retinal image analysis

    NARCIS (Netherlands)

    ter Haar Romeny, B.M.; Bekkers, E.J.; Zhang, J.; Abbasi-Sureshjani, S.; Huang, F.; Duits, R.; Dasht Bozorg, Behdad; Berendschot, T.T.J.M.; Smit-Ockeloen, I.; Eppenhof, K.A.J.; Feng, J.; Hannink, J.; Schouten, J.; Tong, M.; Wu, H.; van Triest, J.W.; Zhu, S.; Chen, D.; He, W.; Xu, L.; Han, P.; Kang, Y.

    2016-01-01

    Retinal image analysis is a challenging problem due to the precise quantification required and the huge numbers of images produced in screening programs. This paper describes a series of innovative brain-inspired algorithms for automated retinal image analysis, recently developed for the RetinaCheck

  4. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  5. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year that visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  6. Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents.

    Science.gov (United States)

    Kearns, Cristin E; Schmidt, Laura A; Glantz, Stanton A

    2016-11-01

    Early warning signals of the coronary heart disease (CHD) risk of sugar (sucrose) emerged in the 1950s. We examined Sugar Research Foundation (SRF) internal documents, historical reports, and statements relevant to early debates about the dietary causes of CHD and assembled findings chronologically into a narrative case study. The SRF sponsored its first CHD research project in 1965, a literature review published in the New England Journal of Medicine, which singled out fat and cholesterol as the dietary causes of CHD and downplayed evidence that sucrose consumption was also a risk factor. The SRF set the review's objective, contributed articles for inclusion, and received drafts. The SRF's funding and role was not disclosed. Together with other recent analyses of sugar industry documents, our findings suggest the industry sponsored a research program in the 1960s and 1970s that successfully cast doubt about the hazards of sucrose while promoting fat as the dietary culprit in CHD. Policymaking committees should consider giving less weight to food industry-funded studies and include mechanistic and animal studies as well as studies appraising the effect of added sugars on multiple CHD biomarkers and disease development.

  7. The architectural plan and the moving image. Audio-visual media for the plan: to document, to present, to diffuse.

    Directory of Open Access Journals (Sweden)

    Maria Letizia Gagliardi

    2009-06-01

    Full Text Available Architecture needs to be communicated and to communicate; that is why, in every period, the architect has used innovative tools which could "make public" his work. From the convention of architectural drawing to perspective, photography, cinema, computer and television, the communication of architecture has found in the dynamic image the right tool for representing the relation between space, time and the human being, a relation which implies contrasts and framings, that is, a succession of images. In this article we identify three different ways of telling architecture through moving images, three narrations that correspond to three different techniques of planning, shooting and post-production: the documentary, the simulation and television.

  8. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

    This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images on the scanner table, the trajectory in real space is reconstructed.

  9. Textural features for radar image analysis

    Science.gov (United States)

    Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.

    1981-01-01

    Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.
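
    The co-occurrence statistics behind such textural features are easy to state concretely. Below is a minimal NumPy sketch of a gray-level co-occurrence matrix (GLCM) and three classic texture measures; it illustrates the general technique rather than the exact procedure of the paper, and the speckled test image is synthetic.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Co-occurrence matrix for one pixel offset (dx, dy), plus three
    classic texture statistics derived from it."""
    q = np.floor(img / (img.max() + 1e-12) * (levels - 1)).astype(int)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()  # reference pixels
    b = q[dy:, dx:].ravel()                            # neighbours at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)
    m /= m.sum()                                       # joint probabilities
    i, j = np.indices(m.shape)
    return {"contrast": float(np.sum(m * (i - j) ** 2)),
            "energy": float(np.sum(m ** 2)),
            "homogeneity": float(np.sum(m / (1 + np.abs(i - j))))}

# Gamma-distributed speckle as a stand-in for a radar image segment.
rng = np.random.default_rng(0)
print(glcm_features(rng.gamma(4.0, 1.0, (64, 64))))
```

    In a classification setting, feature vectors like these would be computed for each small image segment and fed to a conventional classifier.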

  10. Analysis of RTM extended images for VTI media

    KAUST Repository

    Li, Vladimir; Tsvankin, Ilya; Alkhalifah, Tariq Ali

    2015-01-01

    velocity analysis remain generally valid in the extended image space for complex media. The dependence of RMO on errors in the anisotropy parameters provides essential insights for anisotropic wavefield tomography using extended images.

  11. Direct identification of fungi using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    1999-01-01

    Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because of the subjectivity in the visual evaluation and quantification (if any) of such characters and the apparently large variability of the features. We present an image analysis approach for objective identification and classification of fungi. The approach is exemplified by several isolates of nine different species of the genus Penicillium, known to be very difficult to identify correctly. The fungi were incubated on YES and CYA for one week at 25 C (3-point inoculation) in 9 cm Petri dishes. The cultures are placed under a camera where a digital image of the front of the colonies is acquired under optimal illumination...

  12. Analysis of compatibility of current Czech initial documentation in the area of technical assurance of nuclear safety with the requirements of the EUR document

    International Nuclear Information System (INIS)

    Zdebor, J.; Zdebor, R.; Kratochvil, L.

    2001-11-01

    The publication is structured as follows: Description of existing documentation. General requirements, goals, principles and design principles: Documents being compared; Method of comparison; Results and partial evaluation of comparison of requirements between EUR and Czech regulations (basic goals and safety philosophy; quantitative safety objectives; basic design requirements; extended design requirements; external and internal threats; technical requirements; site conditions); Summary of the comparison of safety requirements. Comparison of requirements for the systems: Requirements for the nuclear reactor unit systems; Barrier systems (fuel system; reactor cooling system; containment system); Remaining systems (control systems; protection systems; coolant makeup and purification system; residual heat removal system; emergency cooling system; power systems); Common technical requirements for systems (technical requirements for systems; internal and external events). (P.A.)

  13. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A parametric image is an image in which each pixel value is a function of the value of the same pixel in an image sequence. The Local Model Method fits each pixel time-activity curve with a model whose parameter values form the parametric images. The Global Model Method models the changes between two images; it is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more parametric images is performed using 1D or 2D histograms. Statistically significant parametric images (images of significant variances, amplitudes and differences) are also proposed [fr]
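
    To make the Local Model Method concrete: the sketch below fits a mono-exponential model y(t) = A exp(-kt) to every pixel of a synthetic sequence by log-linear least squares, and the fitted parameters form the parametric images. The sequence, model and noise level are invented for illustration; they stand in for whatever local model a given study requires.

```python
import numpy as np

# Synthetic sequence: every pixel follows y(t) = A * exp(-k t) plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 16)                       # 16 frames over 60 s
A = rng.uniform(50, 200, size=(32, 32))          # true amplitude map
k = rng.uniform(0.01, 0.1, size=(32, 32))        # true rate-constant map
seq = A[..., None] * np.exp(-k[..., None] * t)
seq *= rng.lognormal(0, 0.02, seq.shape)         # multiplicative noise

# Log-linearise (log y = log A - k t) and solve the least-squares fit for
# all pixels at once; the fitted parameters are the parametric images.
y = np.log(seq.reshape(-1, t.size))
G = np.stack([np.ones_like(t), t], axis=1)       # design matrix [1, t]
coef, *_ = np.linalg.lstsq(G, y.T, rcond=None)
A_img = np.exp(coef[0]).reshape(32, 32)          # parametric image of amplitude
k_img = (-coef[1]).reshape(32, 32)               # parametric image of rate constant
```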

  14. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    Science.gov (United States)

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization.
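
    As an illustration of simulation-based power analysis for such hierarchical data, the sketch below estimates power as the fraction of simulated experiments, each with several animals and several images per animal, in which a t-test on per-animal means detects a given group difference. All numbers are invented, and the normal model is a simplification of the distribution-fitting step the paper describes.

```python
import numpy as np
from scipy import stats

def power(n_animals, n_images, effect, sd_animal=1.0, sd_image=0.5,
          alpha=0.05, n_sim=2000, seed=7):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # Animal-level means for the control and treated groups ...
        ctrl = rng.normal(0, sd_animal, n_animals)
        trt = rng.normal(effect, sd_animal, n_animals)
        # ... observed through several images per animal, then averaged
        # per animal to avoid pseudo-replication.
        ctrl_m = rng.normal(ctrl[:, None], sd_image, (n_animals, n_images)).mean(1)
        trt_m = rng.normal(trt[:, None], sd_image, (n_animals, n_images)).mean(1)
        if stats.ttest_ind(ctrl_m, trt_m).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(power(n_animals=8, n_images=5, effect=1.5))
```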

  15. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

    Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when adding biological substances, such as plant extracts... on up to eight parameters indicated strong relationships, with R2 up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images, and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  16. Forensic image analysis - CCTV distortion and artefacts.

    Science.gov (United States)

    Seckiner, Dilan; Mallett, Xanthé; Roux, Claude; Meuwly, Didier; Maynard, Philip

    2018-04-01

    As a result of the worldwide deployment of surveillance cameras, authorities have gained a powerful tool that captures footage of the activities of people in public areas. Surveillance cameras allow continuous monitoring of the area and allow footage to be obtained for later use if a criminal or other act of interest occurs. Following this, a forensic practitioner or expert witness can be required to analyse the footage of the person of interest. The examination ultimately aims at evaluating the strength of evidence at the source and activity levels. In this paper, both source and activity levels are inferred from the trace, obtained in the form of CCTV footage. The source level alludes to features observed within the anatomy and gait of an individual, whilst the activity level relates to the activity undertaken by the individual within the footage. The strength of evidence depends on the value of the information recorded: the activity level is robust, yet the source level requires further development. It is therefore suggested that the camera and the associated distortions should be assessed first and foremost and, where possible, quantified, to determine the level of each type of distortion present within the footage. A review of 'forensic image analysis' is presented here. It outlines the image distortion types and details the limitations of differing surveillance camera systems. The aim is to highlight the various types of distortion present, particularly in surveillance footage, as well as to address gaps in the current literature in relation to the assessment of CCTV distortions in tandem with gait analysis. Future work will consider anatomical assessment from surveillance footage.

  17. Development of knowledge models by linguistic analysis of lexical relationships in technical documents

    International Nuclear Information System (INIS)

    Seguela, Patrick

    2001-01-01

    This research thesis addresses the problem of knowledge acquisition and structuring from technical texts, and the use of this knowledge in the development of models. The author presents the Cameleon method, which aims at extracting binary lexical relationships from technical texts by identifying linguistic markers. The relevance of this method is assessed on four different corpora: a written technical corpus, an oral technical corpus, a corpus of instructional texts, and a corpus of academic texts. The author reports the development of a model of representation of knowledge of a specific field by using lexical relationships. The method is then applied to develop a model used in document search within a knowledge management system [fr]

  18. Standardization Documents

    Science.gov (United States)

    2011-08-01

    Specifications and Standards; Guide Specifications; CIDs; and NGSs. Federal specifications; commercial ... national or international standardization documents developed by a private sector association, organization, or technical society that plans ... Defense handbooks: maintain lessons learned (examples: guidance for application of a technology; lists of options).

  19. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content
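
    As an illustration of this kind of decomposition, the sketch below applies PCA, via the singular value decomposition, to a synthetic inversion-recovery series built from three tissue-like recovery times. The tissue maps, inversion times and noise level are invented for the example.

```python
import numpy as np

# Synthetic inversion-recovery series: three tissue classes with
# different T1 recovery times, mixed spatially.
rng = np.random.default_rng(1)
TI = np.array([50, 150, 400, 800, 1600, 3200.0])   # inversion times (ms)
T1 = {"GM": 900.0, "WM": 600.0, "CSF": 4000.0}
maps = {c: rng.random((64, 64)) for c in T1}       # fractional content maps
series = sum(maps[c][..., None] * (1 - 2 * np.exp(-TI / T1[c])) for c in T1)
series += rng.normal(0, 0.02, series.shape)

# PCA via SVD on the mean-centred pixel-by-time matrix: each principal
# component pairs a recovery-time course (row of Vt) with a score image.
X = series.reshape(-1, TI.size)
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
scores = (U * S).reshape(64, 64, TI.size)          # component score images
explained = S**2 / np.sum(S**2)
print("variance explained by first 3 components:", explained[:3])
```

    Treating the series as one dataset in this way is what allows the compartments to be segmented jointly, rather than pixel by pixel.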

  20. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  1. PHOTOGRAMMETRIC ANALYSIS OF HISTORICAL IMAGE REPOSITORIES FOR VIRTUAL RECONSTRUCTION IN THE FIELD OF DIGITAL HUMANITIES

    Directory of Open Access Journals (Sweden)

    F. Maiwald

    2017-02-01

    Full Text Available Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes, and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of the available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.
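
    The final assessment step, comparing an SfM point cloud with reference measurement data, can be sketched as a nearest-neighbour distance computation. The two clouds below are synthetic stand-ins; a real workflow would first register the clouds and resolve scale, since SfM output is inherently scale-free.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
reference = rng.random((5000, 3))                       # surveyed reference points
sfm_cloud = reference + rng.normal(0, 0.01, (5000, 3))  # noisy SfM reconstruction

# Cloud-to-cloud comparison: distance from every SfM point to its
# nearest reference point, summarised by robust statistics.
d, _ = cKDTree(reference).query(sfm_cloud)
print(f"median deviation {np.median(d):.4f}, 95th percentile {np.quantile(d, 0.95):.4f}")
```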

  2. Upper Midwest Gap Analysis Program, Image Processing Protocol

    National Research Council Canada - National Science Library

    Lillesand, Thomas

    1998-01-01

    This document presents a series of technical guidelines by which land cover information is being extracted from Landsat Thematic Mapper data as part of the Upper Midwest Gap Analysis Program (UMGAP...

  3. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.
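
    A minimal sketch of the DEM-plus-backscatter side of such a simulator: local incidence angles are derived from surface normals and mapped through a backscatter law. The terrain, viewing geometry and cosine-squared backscatter model are toy assumptions; a real simulator would use calibrated backscatter curves and treat layover and shadow explicitly.

```python
import numpy as np

# Inputs: a DEM (metres) on a regular grid and a backscatter law giving
# gray value as a function of local incidence angle.
rng = np.random.default_rng(4)
dem = np.cumsum(rng.normal(0, 1, (128, 128)), axis=1)   # toy rolling terrain
dx = 30.0                                               # grid spacing (m)
depression = np.deg2rad(45.0)                           # sensor depression angle

# Surface normals from DEM gradients.
gz_y, gz_x = np.gradient(dem, dx)
n = np.dstack([-gz_x, -gz_y, np.ones_like(dem)])
n /= np.linalg.norm(n, axis=2, keepdims=True)

# Illumination direction: radar looking along +x, down at the given angle.
s = np.array([np.cos(depression), 0.0, np.sin(depression)])

# Local incidence angle cosine and a Lambertian-style backscatter model
# standing in for the empirical curves used in practice.
cos_inc = np.clip(n @ s, 0.0, 1.0)      # shadowed facets clip to zero
sim_image = cos_inc ** 2                # sigma0 proportional to cos^2, toy law
```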

  4. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD) are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine

  5. Analysis of Pregerminated Barley Using Hyperspectral Image Analysis

    DEFF Research Database (Denmark)

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger

    2011-01-01

    imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause for a single kernel’s lack of germination and is unable to identify dormancy, kernel damage etc. The analysis...... is based on more than 750 Rosalina barley kernels being pregerminated at 8 different durations between 0 and 60 h based on the BRF method. Regerminating the kernels reveals a grouping of the pregerminated kernels into three categories: normal, delayed and limited germination. Our model employs a supervised...

  6. Image quality analysis of digital mammographic equipments

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, P.; Pascual, A.; Verdu, G. [Valencia Univ. Politecnica, Chemical and Nuclear Engineering Dept. (Spain); Rodenas, F. [Valencia Univ. Politecnica, Applied Mathematical Dept. (Spain); Campayo, J.M. [Valencia Univ. Hospital Clinico, Servicio de Radiofisica y Proteccion Radiologica (Spain); Villaescusa, J.I. [Hospital Clinico La Fe, Servicio de Proteccion Radiologica, Valencia (Spain)

    2006-07-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing the traditional film-screen equipment, and it is necessary to update the parameters to guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indices for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the image analysed. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It is useful for comparing the performance of different radiographic image systems on phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different images of the contrast-detail phantom C.D.M.A.M. 3.4, carrying out automatic detection of the contrast-detail combinations and establishing a parameter which characterizes the mammographic image quality in an objective way. This is useful for comparing images obtained with different digital mammography equipment in order to study the performance of the equipment. (authors)
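
    Definitions of the I.Q.F. vary somewhat between authors; a minimal sketch using one common form, in which each contrast level contributes its smallest correctly detected detail diameter, is given below. The detection table is hypothetical output of the automatic detection step.

```python
# One common definition of the Image Quality Figure for a contrast-detail
# phantom: for every contrast level C_i, the smallest correctly detected
# detail diameter D_i enters the sum
#   IQF = sum_i C_i * D_i     (lower is better).

detections = {          # contrast (gold thickness, um) -> diameters seen (mm)
    0.05: [2.00, 1.60, 1.25],
    0.08: [1.60, 1.25, 1.00, 0.80],
    0.13: [1.00, 0.80, 0.63, 0.50],
    0.20: [0.63, 0.50, 0.40, 0.31],
}

iqf = sum(c * min(d) for c, d in detections.items())
print(f"IQF = {iqf:.3f} (lower means better contrast-detail performance)")
```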

  7. Image quality analysis of digital mammographic equipments

    International Nuclear Information System (INIS)

    Mayo, P.; Pascual, A.; Verdu, G.; Rodenas, F.; Campayo, J.M.; Villaescusa, J.I.

    2006-01-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing the traditional film-screen equipment, and it is necessary to update the parameters to guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indices for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the image analysed. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It is useful for comparing the performance of different radiographic image systems on phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different images of the contrast-detail phantom C.D.M.A.M. 3.4, carrying out automatic detection of the contrast-detail combinations and establishing a parameter which characterizes the mammographic image quality in an objective way. This is useful for comparing images obtained with different digital mammography equipment in order to study the performance of the equipment. (authors)

  8. Image-Based 3d Reconstruction and Analysis for Orthodontia

    Science.gov (United States)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal dental arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are put on the teeth, and a wire of given shape which is clamped by these brackets to produce the forces necessary to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and problems with applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation, aimed at overcoming these disadvantages, is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, designing the correct tooth positions and monitoring the treatment process. The developed technique applies photogrammetric means to dental arch 3D model generation, bracket position determination and tooth-shift analysis.

  9. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols......, learning from weak labels, and interpretation and evaluation of results....

  10. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  11. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...
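
    The measurement loop described above translates readily to other environments. Below is a rough NumPy/SciPy re-implementation written from the description rather than from the original ImageJ macro; the threshold, ray geometry and boundary rule are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def fat_depth_profile(ct_slice, threshold, n_angles=36):
    """Distance between the first two above-threshold boundary crossings
    along a ray from the upper border to the image centre, repeated as
    the image is rotated, in the spirit of the macro described above."""
    cy, cx = np.array(ct_slice.shape) // 2
    depths = []
    for ang in np.linspace(0, 360, n_angles, endpoint=False):
        rot = ndimage.rotate(ct_slice, ang, reshape=False, order=1)
        ray = rot[:cy, cx] > threshold            # top border down to centre
        edges = np.flatnonzero(np.diff(ray.astype(int)) != 0)
        if len(edges) >= 2:                       # outer and inner boundary
            depths.append(edges[1] - edges[0])    # thickness in pixels
    return np.array(depths)

# Example: a disc ("animal") with a brighter ring ("subcutaneous fat").
yy, xx = np.mgrid[-64:64, -64:64]
r = np.hypot(yy, xx)
phantom = (r < 50) * 100.0 + ((r > 40) & (r < 50)) * 100.0
print(fat_depth_profile(phantom, threshold=150).mean(), "pixels")
```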

  12. Comparative study of plans for integrated residue management of construction: an analysis documental

    Directory of Open Access Journals (Sweden)

    Jorge Henrique e Silva Júnior

    2014-02-01

    Full Text Available Objective: the present work makes a comparative study of the Integrated Plans of four cities, highlighting the points that are in accordance with CONAMA Resolution 307/2002. Method: This is bibliographic and documentary research whose sources are scientific articles and the Integrated Construction Waste Management Plans of five Brazilian cities: Curitiba, Cuiabá, Florianópolis, Rio de Janeiro and São Paulo. Results: The resolution provides for the Integrated Construction Waste Management Plan as the instrument for implementing the management of construction waste, which must be drawn up by the municipalities. Many capitals have not yet drawn up their Integrated Construction Waste Management Plans. Conclusion: The Integrated Construction Waste Management Plan is of great importance, since this waste causes numerous environmental and health problems.

  13. The right to the city and International Urban Agendas: a document analysis.

    Science.gov (United States)

    Andrade, Elisabete Agrela de; Franceschini, Maria Cristina Trousdell

    2017-12-01

    Considering social, economic and demographic issues, living in the city implies inadequate living conditions, social exclusion, inequities and other problems for the population. At the same time, the city is a setting of cultural, social and affective production. As a result, there is a need to reflect on the right to the city and its relationship with promoting the health of its inhabitants. To that effect, urban agendas have been developed to address the city's ambiguity. This paper aims to analyze four of these agendas through the lens of Health Promotion. A qualitative document review was conducted of urban agendas proposed by international organizations and applied to the Brazilian context: Healthy Cities, Sustainable Cities, Smart Cities and Educating Cities. The results indicate some level of effort by the analyzed agendas to treat social participation, intersectoriality and the territory as central to addressing exclusion and inequities. However, more in-depth discussion is required of each of these concepts. We conclude that urban agendas can contribute greatly toward consolidating the right to the city, provided that their underpinning concepts are critically comprehended.

  14. Slavery Service Accounting Practices in Brazil: A Bibliographic and Document Analysis

    Directory of Open Access Journals (Sweden)

    Adriana Rodrigues Silva

    2014-12-01

    Full Text Available This study focuses on the social and economic aspects and institutional relationships that determined a unique pattern of inequality. We aim to examine the particular role of accounting as a practice used to dehumanize an entire class of people. The primary purpose of this study is not to examine slavery's profitability but rather to identify how accounting practices served slavery. A qualitative research method is applied in this study. Regarding technical procedures, this study makes use of bibliographic and documentary sources. For the purpose of this investigation, and in accordance with bibliographic and documentary research methods, we analyze scientific articles, books and documents from the Brazilian National Archive, the Brazilian Historic and Geographic Institute and the Brazilian National Library Foundation. In light of what was discovered through the study's development, we can consider accounting as a tool that is more active than passive and, therefore, as a tool that was used to support the slave regime. In essence, accounting was used to convert a human's qualitative attributes into a limited number of categories (age, gender, race, through which slaves were differentiated and monetized to facilitate commercial trafficking. We believe that accounting practices facilitated slave trading, conversion and exploitation, procedures that completely ignored qualitative and human dimensions of slavery. Opportunities for future studies on accounting in the slave period, as is the case of other oppressive regimes, are infinite, especially in the case of Brazil.

  15. Space station system analysis study. Part 3: Documentation. Volume 2: Technical report. [structural design and construction

    Science.gov (United States)

    1977-01-01

    An analysis of construction operations is presented, as well as power system sizing requirements. Mission hardware requirements are reviewed in detail. The space construction base and design configurations are also examined.

  16. Application of decision tree technique to sensitivity analysis for results of radionuclide migration calculations. Research documents

    International Nuclear Information System (INIS)

    Nakajima, Kunihiko; Makino, Hitoshi

    2005-03-01

    Uncertainties are always present in the parameters used for nuclide migration analysis in a geological disposal system. These uncertainties affect the results of such analyses, e.g., the identification of dominant nuclides. It is very important to identify the parameters causing a significant impact on the results, and to investigate the influence of the identified parameters, in order to recognize R and D items relevant to the development of the geological disposal system and to understanding of the system performance. In our study, the decision tree analysis technique was examined as a sensitivity analysis method for investigating the influence of the parameters and for complementing existing sensitivity analyses. As a result, results obtained from Monte Carlo simulation with parameter uncertainties could be distinguished not only by important parameters but also by their quantitative conditions (e.g., ranges of parameter values). Furthermore, information obtained from the decision tree analysis could be used 1) to categorize the results obtained from the nuclide migration analysis for a given parameter set, and 2) to show the prospective effect of reducing parameter uncertainties on the results. (author)
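
    A toy version of the technique, assuming scikit-learn is available: a decision tree is trained on Monte Carlo samples of an invented response function, and its printed rules expose both the influential parameters and the value ranges that drive high-consequence results.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Monte Carlo runs of a toy "migration" model: uncertain parameters in,
# a scalar result out; the tree then recovers which parameters (and which
# value ranges) drive high results.
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(2000, 4))          # 4 uncertain parameters
dose = X[:, 0] ** 2 / (0.1 + X[:, 2])          # toy response function
y = dose > np.quantile(dose, 0.9)              # "high consequence" class

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[f"p{i}" for i in range(4)]))
print("importances:", tree.feature_importances_)
```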

  17. Documentation of Hanford Site independent review of the Hanford Waste Vitrification Plant Preliminary Safety Analysis Report

    International Nuclear Information System (INIS)

    Herborn, D.I.

    1993-11-01

    Westinghouse Hanford Company (WHC) is the Integrating Contractor for the Hanford Waste Vitrification Plant (HWVP) Project, and as such is responsible for preparation of the HWVP Preliminary Safety Analysis Report (PSAR). The HWVP PSAR was prepared pursuant to the requirements for safety analyses contained in US Department of Energy (DOE) Orders 4700.1, Project Management System (DOE 1987); 5480.5, Safety of Nuclear Facilities (DOE 1986a); 5481.1B, Safety Analysis and Review System (DOE 1986b), which was superseded by DOE Order 5480.23, Nuclear Safety Analysis Reports, for nuclear facilities effective April 30, 1992 (DOE 1992); and 6430.1A, General Design Criteria (DOE 1989). The WHC procedures that, in large part, implement these DOE requirements are contained in WHC-CM-4-46, Nonreactor Facility Safety Analysis Manual. This manual describes the overall WHC safety analysis process in terms of requirements for safety analyses, responsibilities of the various contributing organizations, and required reviews and approvals

  18. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  19. Maury Documentation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Supporting documentation for the Maury Collection of marine observations. Includes explanations from Maury himself, as well as guides and descriptions by the U.S....

  20. Documentation Service

    International Nuclear Information System (INIS)

    Charnay, J.; Chosson, L.; Croize, M.; Ducloux, A.; Flores, S.; Jarroux, D.; Melka, J.; Morgue, D.; Mottin, C.

    1998-01-01

    This service assures the processing and dissemination of scientific information and the management of the scientific production of the institute, as well as the secretariat operation for the groups and services of the institute. The report on the documentation-library section mentions: the management of the documentation funds; searches in international databases (INIS, Current Contents, Inspec); and the Pret-Inter service, which allows accessing documents through the DEMOCRITE network of IN2P3. Also mentioned as realizations are: the setup of a video and photo database; the Web home page of the institute's library; the follow-up of digitizing the document funds by integrating CD-ROMs and diskettes; electronic archiving of the scientific production, etc.

  1. Towards automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, M.; Quist, M.; Spreeuwers, Lieuwe Jan; Paetsch, I.; Al-Saadi, N.; Nagel, E.

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and reliable automatic image analysis methods. This paper focuses on the automatic evaluation of

  2. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2013-01-01

    Migration velocity analysis with the constant-density acoustic wave equation can be accomplished by the focusing of extended migration images, obtained by introducing a subsurface shift in the imaging condition. A reflector in a wrong velocity model will show up as a curve in the extended image. In

  3. Visual Analytics Applied to Image Analysis : From Segmentation to Classification

    NARCIS (Netherlands)

    Rauber, Paulo

    2017-01-01

    Image analysis is the field of study concerned with extracting information from images. This field is immensely important for commercial and scientific applications, from identifying people in photographs to recognizing diseases in medical images. The goal behind the work presented in this thesis is

  4. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  5. Intrasubject registration for change analysis in medical imaging

    NARCIS (Netherlands)

    Staring, M.

    2008-01-01

    Image matching is important for the comparison of medical images. Comparison is of clinical relevance for the analysis of differences due to changes in the health of a patient. For example, when a disease is imaged at two time points, then one wants to know if it is stable, has regressed, or

  6. Transnational Tobacco Company Interests in Smokeless Tobacco in Europe: Analysis of Internal Industry Documents and Contemporary Industry Materials

    Science.gov (United States)

    Peeters, Silvy; Gilmore, Anna B.

    2013-01-01

    Background European Union (EU) legislation bans the sale of snus, a smokeless tobacco (SLT) which is considerably less harmful than smoking, in all EU countries other than Sweden. To inform the current review of this legislation, this paper aims to explore transnational tobacco company (TTC) interests in SLT and pure nicotine in Europe from the 1970s to the present, comparing them with TTCs' public claims of support for harm reduction. Methods and Results Internal tobacco industry documents (in total 416 documents dating from 1971 to 2009), obtained via searching the online Legacy Tobacco Documents Library, were analysed using a hermeneutic approach. This library comprises documents obtained via litigation in the US and does not include documents from Imperial Tobacco, Japan Tobacco International, or Swedish Match. To help overcome this limitation and provide more recent data, we triangulated our documentary findings with contemporary documentation including TTC investor presentations. The analysis demonstrates that British American Tobacco explored SLT opportunities in Europe from 1971 driven by regulatory threats and health concerns, both likely to impact cigarette sales negatively, and the potential to create a new form of tobacco use among those no longer interested in taking up smoking. Young people were a key target. TTCs did not, however, make SLT investments until 2002, a time when EU cigarette volumes started declining, smoke-free legislation was being introduced, and public health became interested in harm reduction. All TTCs have now invested in snus (and recently in pure nicotine), yet both early and recent snus test markets appear to have failed, and little evidence was found in TTCs' corporate materials that snus is central to their business strategy. Conclusions There is clear evidence that BAT's early interest in introducing SLT in Europe was based on the potential for creating an alternative form of tobacco use in light of declining cigarette sales

  7. Transnational tobacco company interests in smokeless tobacco in Europe: analysis of internal industry documents and contemporary industry materials.

    Directory of Open Access Journals (Sweden)

    Silvy Peeters

    Full Text Available European Union (EU) legislation bans the sale of snus, a smokeless tobacco (SLT) which is considerably less harmful than smoking, in all EU countries other than Sweden. To inform the current review of this legislation, this paper aims to explore transnational tobacco company (TTC) interests in SLT and pure nicotine in Europe from the 1970s to the present, comparing them with TTCs' public claims of support for harm reduction. Internal tobacco industry documents (in total 416 documents dating from 1971 to 2009), obtained via searching the online Legacy Tobacco Documents Library, were analysed using a hermeneutic approach. This library comprises documents obtained via litigation in the US and does not include documents from Imperial Tobacco, Japan Tobacco International, or Swedish Match. To help overcome this limitation and provide more recent data, we triangulated our documentary findings with contemporary documentation including TTC investor presentations. The analysis demonstrates that British American Tobacco explored SLT opportunities in Europe from 1971, driven by regulatory threats and health concerns, both likely to impact cigarette sales negatively, and the potential to create a new form of tobacco use among those no longer interested in taking up smoking. Young people were a key target. TTCs did not, however, make SLT investments until 2002, a time when EU cigarette volumes started declining, smoke-free legislation was being introduced, and public health became interested in harm reduction. All TTCs have now invested in snus (and recently in pure nicotine), yet both early and recent snus test markets appear to have failed, and little evidence was found in TTCs' corporate materials that snus is central to their business strategy. There is clear evidence that BAT's early interest in introducing SLT in Europe was based on the potential for creating an alternative form of tobacco use in light of declining cigarette sales and social restrictions on

  8. Transnational tobacco company interests in smokeless tobacco in Europe: analysis of internal industry documents and contemporary industry materials.

    Science.gov (United States)

    Peeters, Silvy; Gilmore, Anna B

    2013-01-01

    European Union (EU) legislation bans the sale of snus, a smokeless tobacco (SLT) which is considerably less harmful than smoking, in all EU countries other than Sweden. To inform the current review of this legislation, this paper aims to explore transnational tobacco company (TTC) interests in SLT and pure nicotine in Europe from the 1970s to the present, comparing them with TTCs' public claims of support for harm reduction. Internal tobacco industry documents (in total 416 documents dating from 1971 to 2009), obtained via searching the online Legacy Tobacco Documents Library, were analysed using a hermeneutic approach. This library comprises documents obtained via litigation in the US and does not include documents from Imperial Tobacco, Japan Tobacco International, or Swedish Match. To help overcome this limitation and provide more recent data, we triangulated our documentary findings with contemporary documentation including TTC investor presentations. The analysis demonstrates that British American Tobacco explored SLT opportunities in Europe from 1971 driven by regulatory threats and health concerns, both likely to impact cigarette sales negatively, and the potential to create a new form of tobacco use among those no longer interested in taking up smoking. Young people were a key target. TTCs did not, however, make SLT investments until 2002, a time when EU cigarette volumes started declining, smoke-free legislation was being introduced, and public health became interested in harm reduction. All TTCs have now invested in snus (and recently in pure nicotine), yet both early and recent snus test markets appear to have failed, and little evidence was found in TTCs' corporate materials that snus is central to their business strategy. There is clear evidence that BAT's early interest in introducing SLT in Europe was based on the potential for creating an alternative form of tobacco use in light of declining cigarette sales and social restrictions on smoking, with

  9. Probabilistic risk assessment course documentation. Volume 3. System reliability and analysis techniques, Session A - reliability

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs
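
    For independent basic events, the quantitative core of fault-tree evaluation reduces to two gate formulas, sketched below with invented failure probabilities.

```python
# Minimal quantitative fault-tree evaluation for independent basic events:
# an OR gate takes the complement of the product of complements, an AND
# gate takes the product of the event probabilities.

def or_gate(*p):
    """P(A or B or ...) = 1 - prod(1 - p_i) for independent events."""
    out = 1.0
    for pi in p:
        out *= (1.0 - pi)
    return 1.0 - out

def and_gate(*p):
    """P(A and B and ...) = prod(p_i) for independent events."""
    out = 1.0
    for pi in p:
        out *= pi
    return out

# Hypothetical top event: a train fails if (pump fails OR its breaker
# fails) AND the redundant train is unavailable.
p_top = and_gate(or_gate(1e-3, 5e-4), 2e-2)
print(f"top-event probability = {p_top:.2e}")
```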

  10. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews answers a need of today's lifestyle. Unfortunately, the enormous number of features makes sentiment analysis slow and less sensitive. Finding the optimal feature selection and classification scheme is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, information-based feature selection and classification are proposed. The proposed method removes more than 90% of unnecessary features, while the proposed classification scheme achieves 96% accuracy of sentiment classification. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
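
    A minimal sketch of information-gain feature selection on a toy corpus; the documents and labels are invented, and a real system would tokenize full reviews and pass the selected features to a classifier, as the paper proposes.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """IG of a term = H(class) - H(class | term present/absent)."""
    present = [l for d, l in zip(docs, labels) if term in d]
    absent = [l for d, l in zip(docs, labels) if term not in d]
    h_cond = sum(len(part) / len(labels) * entropy(part)
                 for part in (present, absent) if part)
    return entropy(labels) - h_cond

# Toy movie-review corpus (hypothetical data).
docs = [{"great", "plot"}, {"awful", "plot"}, {"great", "cast"}, {"awful", "cast"}]
labels = ["pos", "neg", "pos", "neg"]
vocab = set().union(*docs)
ranked = sorted(vocab, key=lambda t: information_gain(docs, labels, t), reverse=True)
print(ranked)  # 'great'/'awful' carry all the gain; 'plot'/'cast' carry none
```

    Feature selection then keeps only the top-ranked fraction of the vocabulary, which is how reductions of the size reported above are obtained.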

  11. Development of a traceability analysis method based on case grammar for NPP requirement documents written in Korean language

    International Nuclear Information System (INIS)

    Yoo, Yeong Jae; Seong, Poong Hyun; Kim, Man Cheol

    2004-01-01

    Software inspection is widely believed to be an effective method for software verification and validation (V and V). However, software inspection is labor-intensive and, since it uses little technology, it is viewed as unsuitable for a more technology-oriented development environment. Nevertheless, software inspection is gaining in popularity. The KAIST Nuclear I and C and Information Engineering Laboratory (NICIEL) has developed software management and inspection support tools, collectively named 'SIS-RT.' SIS-RT is designed to partially automate the software inspection processes. SIS-RT supports the analyses of traceability between a given set of specification documents. To make SIS-RT compatible with documents written in Korean, certain techniques in natural language processing have been studied. Among the techniques considered, case grammar is the most suitable for analyses of the Korean language. In this paper, we propose a methodology that uses a case grammar approach to analyze the traceability between documents written in Korean. A discussion regarding some examples of such an analysis follows.

  12. Computerising documentation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    The nuclear power generation industry is faced with public concern and government pressures over safety, efficiency and risk. Operators throughout the industry are addressing these issues with the aid of a new technology - technical document management systems (TDMS). Used for strategic and tactical advantage, the systems enable users to scan, archive, retrieve, store, edit, distribute worldwide and manage the huge volume of documentation (paper drawings, CAD data and film-based information) generated in building, maintaining and ensuring safety in the UK's power plants. The power generation industry has recognized that the management and modification of operation-critical information is vital to the safety and efficiency of its power plants. Regulatory pressure from the Nuclear Installations Inspectorate (NII) to operate within strict safety margins or lose Site Licences has prompted the need for accurate, up-to-date documentation. A document capture and management retrieval system provides a powerful, cost-effective solution, giving rapid access to documentation in a tightly controlled environment. The computerisation of documents and plans is discussed in this article. (Author)

  13. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    ..., or segmentation maps are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used ... in the context definition for arithmetic encoding. Tested on individual images of standard TV resolution binary shapes and the binary layers of a digital map, the proposed algorithm outperforms PWC, JBIG, JBIG2, and MPEG-4 CAE. On the binary shapes, the code lengths are reduced by 21%, 27%, 28%, and 41...
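
    The core mechanism behind such schemes is conditioning each pixel's probability on a causal context of already-coded neighbours; the straightness analysis above refines that context. Below is a hedged sketch of plain causal context modelling for a bilevel image, using a 3-pixel template as a simplification of JBIG-style 10-pixel contexts; the arithmetic coder itself is omitted.

```python
# Minimal sketch of causal context modelling for a bilevel image, the core
# mechanism that the paper refines with digital-straightness analysis. The
# 3-pixel causal template (W, N, NW) is an illustrative simplification.
import numpy as np

def context_counts(img):
    """Count pixel values observed under each causal context."""
    h, w = img.shape
    counts = np.zeros((8, 2), dtype=np.int64)   # 2^3 contexts x 2 symbols
    for y in range(1, h):
        for x in range(1, w):
            ctx = (img[y, x-1] << 2) | (img[y-1, x] << 1) | img[y-1, x-1]
            counts[ctx, img[y, x]] += 1
    return counts

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.2).astype(np.uint8)  # stand-in bilevel image
counts = context_counts(img)
# Laplace-smoothed conditional probabilities p(pixel = 1 | context),
# which an arithmetic coder would consume.
p_one = (counts[:, 1] + 1) / (counts.sum(axis=1) + 2)
print(np.round(p_one, 3))
```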

  14. Image quality preferences among radiographers and radiologists. A conjoint analysis

    International Nuclear Information System (INIS)

    Ween, Borgny; Kristoffersen, Doris Tove; Hamilton, Glenys A.; Olsen, Dag Rune

    2005-01-01

    Purpose: The aim of this study was to investigate the image quality preferences among radiographers and radiologists. The radiographers' preferences are mainly related to technical parameters, whereas radiologists assess image quality based on diagnostic value. Methods: A conjoint analysis was undertaken to survey image quality preferences; the study included 37 respondents: 19 radiographers and 18 radiologists. Digital urograms were post-processed into 8 images with different properties of image quality for 3 different patients. The respondents were asked to rank the images according to their personally perceived subjective image quality. Results: Nearly half of the radiographers and radiologists were consistent in their ranking of the image characterised as 'very best image quality'. The analysis showed, moreover, that chosen filtration level and image intensity were responsible for 72% and 28% of the preferences, respectively. The corresponding figures for each of the two professions were 76% and 24% for the radiographers, and 68% and 32% for the radiologists. In addition, there were larger variations in image preferences among the radiologists, as compared to the radiographers. Conclusions: Radiographers revealed a more consistent preference than the radiologists with respect to image quality. There is a potential for image quality improvement by developing sets of image property criteria
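
    For readers unfamiliar with conjoint analysis, the sketch below estimates part-worth utilities for two image-quality attributes from one respondent's ranking of eight images and derives relative attribute importances (the 72%/28% split reported above is this kind of quantity). The 4x2 design, the rankings and the dummy coding are invented for illustration and do not reproduce the study's design.

```python
# Hedged sketch of a simple conjoint analysis for image-quality preferences.
# The 4 filtration levels x 2 intensities design and the toy ranking are
# assumptions, not the study's actual profiles or data.
import numpy as np

# Each row: (filtration level 0-3, intensity 0-1); one profile per image.
profiles = np.array([(f, i) for f in range(4) for i in range(2)])
ranks = np.array([8, 6, 7, 4, 5, 2, 3, 1])       # 1 = very best image quality
scores = 9 - ranks                               # higher = preferred

# Dummy-code both attributes (drop the first level as the reference).
X = np.column_stack([
    np.eye(4)[profiles[:, 0]][:, 1:],            # filtration levels 1..3
    profiles[:, 1:2],                            # intensity (binary)
    np.ones(len(profiles)),                      # intercept
])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print("filtration part-worths:", beta[:3], "intensity part-worth:", beta[3])

# Relative attribute importance = range of part-worths per attribute.
imp_f = np.ptp(np.concatenate([[0.0], beta[:3]]))
imp_i = abs(beta[3])
print("importance split:", imp_f / (imp_f + imp_i), imp_i / (imp_f + imp_i))
```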

  15. Data warehousing as a basis for web-based documentation of data mining and analysis.

    Science.gov (United States)

    Karlsson, J; Eklund, P; Hallgren, C G; Sjödin, J G

    1999-01-01

    In this paper we present a case study for data warehousing intended to support data mining and analysis. We also describe a prototype for data retrieval. Further we discuss some technical issues related to a particular choice of a patient record environment.

  16. Using the Front Page of "The Wall Street Journal" to Teach Document Design and Audience Analysis.

    Science.gov (United States)

    Moore, Patrick

    1989-01-01

    Explains an assignment for the audience analysis segment of a business writing course which compares the front page design of "The Wall Street Journal" with that of a local daily newspaper in order to emphasize the use of design devices in effectively writing to busy people. (SR)

  17. Ecodriver. D23.2: Simulation and analysis document for on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Orfila, O.; Saintpierre, G.

    2014-01-01

    This deliverable reports on the simulations and analysis of the on-line vehicle algorithms as well as the ecoDriver Android application. The simulation and field test results give an impression of how the algorithms will perform in the real world trials in SP3. Thus, it is possible to apply

  18. Experiences and Outcomes of Preschool Physical Education: An Analysis of Developmental Discourses in Scottish Curricular Documentation

    Science.gov (United States)

    McEvilly, Nollaig

    2014-01-01

    This article provides an analysis of developmental discourses underpinning preschool physical education in Scotland's Curriculum for Excellence. Implementing a post-structural perspective, the article examines the preschool experiences and outcomes related to physical education as presented in the Curriculum for Excellence "health and…

  19. Physical Education for Health and Wellbeing: A Discourse Analysis of Scottish Physical Education Curricular Documentation

    Science.gov (United States)

    McEvilly, Nollaig; Verheul, Martine; Atencio, Matthew; Jess, Mike

    2014-01-01

    This paper provides an analysis of the discourses associated with physical education in Scotland's "Curriculum for Excellence". We implement a poststructural perspective in order to identify the discourses that underpin the physical education sections of the "Curriculum for Excellence" "health and well-being"…

  20. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique, based on the GLCM (Grey Level Co-occurrence Matrix), segments this type of document into three regions: 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks, whose size is chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the final segmentation is obtained by grouping connected pixels. Two performance measurements are computed for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
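
    A hedged sketch of the described pipeline follows: split a grey-level document image into blocks, extract GLCM texture features per block, and cluster the blocks into three classes with k-means. The block size, GLCM offsets and the reduced feature set (entropy is computed by hand, since scikit-image does not expose it as a GLCM property) are illustrative assumptions.

```python
# Hedged sketch: block-wise GLCM texture features + k-means, in the spirit
# of the paper. Block size, offsets and feature subset are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def block_features(block, levels=32):
    q = (block // (256 // levels)).astype(np.uint8)    # quantise grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    energy = graycoprops(glcm, "energy")[0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # GLCM entropy by hand
    return [energy, entropy, block.std()]

def segment(img, bs=32):
    feats, coords = [], []
    for y in range(0, img.shape[0] - bs + 1, bs):
        for x in range(0, img.shape[1] - bs + 1, bs):
            feats.append(block_features(img[y:y+bs, x:x+bs]))
            coords.append((y, x))
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(np.array(feats))
    return labels, coords  # map labels to 'text'/'graphics'/'background'

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # stand-in page
print(segment(img)[0])
```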

  1. Convergence analysis in near-field imaging

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun

    2014-01-01

    This paper is devoted to the mathematical analysis of the direct and inverse modeling of the diffraction by a perfectly conducting grating surface in the near-field regime. It is motivated by our effort to analyze recent significant numerical results, in order to solve a class of inverse rough surface scattering problems in near-field imaging. In a model problem, the diffractive grating surface is assumed to be a small and smooth deformation of a plane surface. On the basis of the variational method, the direct problem is shown to have a unique weak solution. An analytical solution is introduced as a convergent power series in the deformation parameter by using the transformed field and Fourier series expansions. A local uniqueness result is proved for the inverse problem where only a single incident field is needed. On the basis of the analytic solution of the direct problem, an explicit reconstruction formula is presented for recovering the grating surface function with resolution beyond the Rayleigh criterion. Error estimates for the reconstructed grating surface are established with fully revealed dependence on such quantities as the surface deformation parameter, measurement distance, noise level of the scattering data, and regularity of the exact grating surface function. (paper)

  2. IMAGE ANALYSIS FOR MODELLING SHEAR BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Philippe Lopez

    2011-05-01

    Full Text Available Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been established for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments were performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry, its mechanical behaviour under shear stress, and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role that the geometric characteristics of the fracture surfaces (surface roughness; the size, shape, locations and orientations of the asperities to be damaged) play in shear behaviour. A notable improvement in the understanding of shear is that shear behaviour is controlled by the apparent dip, in the shear direction, of the elementary facets forming the fracture.

  3. Measure by image analysis of industrial radiographs

    International Nuclear Information System (INIS)

    Brillault, B.

    1988-01-01

    A digital radiographic picture-processing system for non-destructive testing is intended to provide the expert with a computer tool to precisely quantify radiographic images. The author describes the main problems, from image formation to image characterization. She also insists on the necessity of defining a precise process in order to automate the system. Some examples illustrate the efficiency of digital processing of radiographic images [fr

  4. MORPHOLOGY BY IMAGE ANALYSIS - K. Belaroui and M. N. Pons ...

    African Journals Online (AJOL)

    31 Dec. 2012 ... Keywords: characterization; particle size; morphology; image analysis; porous media. 1. INTRODUCTION. The power of image analysis as ... into a digital image by means of an analog-to-digital (A/D) converter. The image points are arranged on a square grid, ...

  5. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  6. Technical Requirements Analysis and Control Systems (TRACS) Initial Operating Capability (IOC) documentation

    Science.gov (United States)

    Hammond, Dana P.

    1991-01-01

    The Technical Requirements Analysis and Control Systems (TRACS) software package is described. TRACS offers supplemental tools for the analysis, control, and interchange of project requirements. This package provides the fundamental capability to analyze and control requirements, serves as a focal point for project requirements, and integrates a system that supports efficient and consistent operations. TRACS uses relational database technology (ORACLE), in a stand-alone or distributed environment, that can be used to coordinate the activities required to support a project through its entire life cycle. TRACS uses a set of keyword- and mouse-driven screens (HyperCard) that impose adherence to the process through a controlled user interface. The user interface provides an interactive capability to interrogate the database and to display or print project requirement information. TRACS has a limited report capability, but can be extended with PostScript conventions.

  7. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an introduction and 11 independent chapters devoted to various new approaches to intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...

  8. The SocioEconomic Analysis of Repository Siting (SEARS): Technical documentation: Volume 3, Schematic flowcharts

    International Nuclear Information System (INIS)

    Kiel, B.; Parpia, B.; Murdock, S.; Hamm, R.; Fannin, D.; Ransom-Nelson, W.; Leistritz, F.L.

    1985-05-01

    This report presents a summary of results of an assessment of the adaptability, sensitivity, and accuracy of the SEARS model. Specifically, after describing the methods used in the analysis, the report presents a discussion of each aspect of the model evaluation process. The adaptation of the system to three states (Louisiana, Wyoming, and Texas) is described in light of the three key aspects of the model. Volume 3 presents 276 flowcharts for the major functions of the SEARS model

  9. Probabilistic risk assessment course documentation. Volume 5. System reliability and analysis techniques Session D - quantification

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the probabilistic quantification of accident sequences and the link between accident sequences and consequences. Other sessions in this series focus on the quantification of system reliability and the development of event trees and fault trees. This course takes the viewpoint that event tree sequences, or combinations of system failures and successes, are available and that Boolean equations for system fault trees have been developed and are available. 93 figs., 11 tabs.

  10. Engineering design, stress and thermal analysis, and documentation for SATS program

    Science.gov (United States)

    1973-01-01

    An in-depth analysis and mechanical design of the solar array stowage and deployment arrangements for use in the Small Applications Technology Satellite spacecraft are presented. Alternate approaches for the major elements of work are developed and evaluated. Elements include the array stowage and deployment arrangements, the spacecraft and array behavior in the spacecraft despin mode, and the design of the main hinge and segment hinge assemblies. Feasibility calculations are performed and the preferred approach is identified.

  11. The advanced scenario analysis for performance assessment of geological disposal. Pt. 3. Main document

    International Nuclear Information System (INIS)

    Ohkubo, Hiroo

    2004-02-01

    In the 'H12 Project to Establish Technical Basis for HLW Disposal in Japan', an approach based on an international consensus was adopted to develop the scenarios to be considered in performance assessment. The adequacy of the approach was, in general terms, appreciated through peer review, but it was also suggested that there are issues related to improving the transparency and traceability of the procedure. Therefore, in the current financial year, a scenario development methodology was first constructed taking into account the requirements identified last year. Furthermore, a practical work-frame was developed to support the activities related to scenario development. This work-frame was applied to an example scenario to check its applicability and to identify issues for further research. Second, scenario analysis methods for perturbation scenarios have been studied. A survey of the perturbation scenarios discussed in different countries was carried out and their assessment examined. In Japan in particular, technical information has been classified in order to assess three scenarios: seismic activity, faulting and igneous activity. Then, on the basis of the assumed occurrence and influence patterns for each perturbation scenario, the variant types that should be considered in this analysis were identified, and the concept of treatment, modeling data and requirements were clarified. As a result of this research, a future direction for advanced scenario analysis in performance assessment has been indicated, and the associated issues to be discussed have been clarified. (author)

  12. La Documentation photographique

    Directory of Open Access Journals (Sweden)

    Magali Hamm

    2009-03-01

    Full Text Available The Documentation photographique, a magazine aimed at teachers and students of history and geography, places the image at the heart of its editorial line. In order to follow current developments in geography, the collection presents an increasingly diversified iconography: maps and photographs, but also caricatures, newspaper front pages and advertisements, all considered geographical documents in their own right. An image can serve as a synthesis; conversely, it can show the different facets of a single object; often it makes geographical phenomena concrete. Combined with other documents, images help teachers introduce their pupils to complex geographical reasoning. But in order to learn how to read images, it is fundamental to contextualize them, comment on them, and question their relationship to reality.

  13. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  14. Analysis of geodetic and legal documentation in the process of expropriation for roads. Krakow case study

    Science.gov (United States)

    Trembecka, Anna

    2013-06-01

    Amendment to the Act on special rules of preparation and implementation of investment in public roads resulted in an accelerated mode of acquisition of land for the development of roads. The decision to authorize the execution of a road investment issued on its basis has several effects: it determines the location of the road, approves the surveying division, approves the construction design, and also results in the acquisition of the real property, by virtue of law, by the State Treasury or a local government unit. The study revealed that over 3 years, under this mode, the city of Krakow acquired over 31 hectares of land intended for the implementation of road investments. Compensation is determined in separate proceedings based on an appraisal study estimating property value, often long after the loss of the land by the owner. Among the reasons for lengthy compensation proceedings are challenges to the proposed amount of compensation, the unregulated legal status of the property, and imprecise legislation. It is therefore important to properly develop the geodetic and legal documentation which accompanies the application for issuance of the decision and is also used in compensation proceedings.

  15. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    Science.gov (United States)

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution concentrates on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. With assistance from the Python ecosystem in particular, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open-source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  16. Documentation of Hanford Site independent review of the Hanford Waste Vitrification Plant Preliminary Safety Analysis Report

    International Nuclear Information System (INIS)

    Herborn, D.I.

    1991-10-01

    The requirements for Westinghouse Hanford independent review of the Preliminary Safety Analysis Report (PSAR) are contained in Section 1.0, Subsection 4.3 of WCH-CM-4-46. Specifically, this manual requires the following: (1) Formal functional reviews of the HWVP PSAR by the future operating organization (HWVP Operations), and the independent review organizations (HWVP and Environmental Safety Assurance, Environmental Assurance, and Quality Assurance); and (2) Review and approval of the HWVP PSAR by the Tank Waste Disposal (TWD) Subcouncil of the Safety and Environmental Advisory Council (SEAC), which provides independent advice to the Westinghouse Hanford President and executives on matters of safety and environmental protection. 7 refs

  17. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Sai, Gorthi; Beyou, Sébastien; Memin, Etienne

    2011-01-01

    International audience; This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust and simple approach based on the Weighted Ensemble Transform Kalman Filter (WETKF) data assimilation technique for the analysis of real SST images, which may contain coast regions or large areas of ...
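
    For orientation, the sketch below implements a plain ensemble transform Kalman filter (ETKF) analysis step, the update that WETKF builds on; the paper's weighting scheme and its image-based observation operator are not reproduced. The toy state dimension, ensemble size and observation setup are assumptions.

```python
# Hedged sketch of a standard ETKF analysis step (not the paper's weighted
# variant). State/observation dimensions and data are toy assumptions.
import numpy as np

def etkf_update(X, y, H, R):
    """X: n x m forecast ensemble; y: p obs; H: p x n; R: p x p."""
    n, m = X.shape
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                                  # forecast anomalies
    Yp = H @ Xp                                  # observation-space anomalies
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp   # (m-1) I + Y'^T R^-1 Y'
    evals, evecs = np.linalg.eigh(A)
    Ainv = evecs @ np.diag(1.0 / evals) @ evecs.T
    # Symmetric square root of (m-1) A^-1 transforms the anomalies.
    W = evecs @ np.diag((m - 1) ** 0.5 / np.sqrt(evals)) @ evecs.T
    wbar = Ainv @ Yp.T @ Rinv @ (y.reshape(-1, 1) - H @ xb)
    return xb + Xp @ wbar + Xp @ W               # analysis ensemble

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 10))                     # toy 4-dim state, 10 members
H = np.eye(2, 4)                                 # observe first two components
y = np.array([0.5, -0.3])
print(etkf_update(X, y, H, 0.1 * np.eye(2)).mean(axis=1))
```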

  18. Clustering document fragments using background color and texture information

    Science.gov (United States)

    Chanda, Sukalpa; Franke, Katrin; Pal, Umapada

    2012-01-01

    Forensic analysis of questioned documents can sometimes be extensively data-intensive. A forensic expert might need to analyze a heap of document fragments, and in such cases, to ensure reliability, he or she should focus only on the relevant evidence hidden in those fragments. Retrieving relevant documents requires finding similar document fragments, and one way of obtaining such similar fragments is to use their physical characteristics, such as color and texture. In this article we propose an automatic scheme to retrieve similar document fragments based on the visual appearance of the document paper and its texture. Multispectral color characteristics are implemented using biologically inspired color differentiation techniques, by projecting document color characteristics into the Lab color space. Gabor filter-based texture analysis is used to identify document texture. Document fragments from the same source are expected to have similar color and texture. For clustering similar document fragments in our test dataset we use a Self-Organizing Map (SOM) of dimension 5×5, where the document color and texture information are used as features. We obtained an encouraging accuracy of 97.17% on 1063 test images.
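
    A hedged sketch of the described clustering follows: per-fragment colour (Lab) and texture (Gabor) features are fed to a 5x5 self-organising map, and fragments mapped to the same node are treated as candidates for a common source. The specific features (channel means and standard deviations, two Gabor frequencies) are illustrative, and the third-party minisom package stands in for whatever SOM implementation the authors used.

```python
# Hedged sketch: Lab colour + Gabor texture features clustered on a 5x5 SOM.
# Feature choices and the random stand-in patches are assumptions; requires
# the third-party `minisom` package.
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gabor
from minisom import MiniSom

def fragment_features(rgb):
    lab = rgb2lab(rgb)
    grey = rgb.mean(axis=2)
    feats = [lab[..., c].mean() for c in range(3)]
    feats += [lab[..., c].std() for c in range(3)]
    for freq in (0.1, 0.3):                      # coarse and fine texture
        real, _ = gabor(grey, frequency=freq)
        feats += [real.mean(), real.std()]
    return feats

rng = np.random.default_rng(3)
fragments = [rng.random((40, 40, 3)) for _ in range(20)]  # stand-in patches
data = np.array([fragment_features(f) for f in fragments])
data = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-9)

som = MiniSom(5, 5, data.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(data, 500)
# Fragments mapped to the same SOM node are candidate same-source groups.
print([som.winner(v) for v in data])
```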

  19. An historical document analysis of the introduction of the Baby Friendly Hospital Initiative into the Australian setting.

    Science.gov (United States)

    Atchan, Marjorie; Davis, Deborah; Foureur, Maralyn

    2017-02-01

    Breastfeeding has many known benefits, yet its support across Australian health systems was suboptimal throughout the 20th century. The World Health Organization launched a global health promotion strategy to help create a 'breastfeeding culture'. Research on the programme has revealed multiple barriers since implementation. To analyse the sociopolitical challenges associated with implementing a global programme in a national setting, we examined the influences on the early period of implementation of the Baby Friendly Hospital Initiative in Australia. A focused historical document analysis was undertaken as part of an instrumental case study. A purposeful sampling strategy obtained a comprehensive sample of public and private documents related to the introduction of the BFHI in Australia. The analysis was informed by a 'documents as commentary' approach to gain insight into individual and collective social practices not otherwise observable. Four major themes were identified: "a breastfeeding culture"; "resource implications"; "ambivalent support for breastfeeding and the BFHI"; and "business versus advocacy". "A breastfeeding culture" included several subthemes. No tangible support for breastfeeding generally, or the Baby Friendly Hospital Initiative specifically, was identified. Australian policy did not follow international recommendations, and there were no financial or policy incentives for BFHI implementation. Key stakeholders' decisions negatively impacted the Baby Friendly Hospital Initiative at a crucial time in its implementation in Australia. The potential impact of the programme was not realised, representing a missed opportunity to establish and provide sustainable, standardised breastfeeding support to Australian women and their families. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  20. An introduction to diffusion tensor image analysis.

    Science.gov (United States)

    O'Donnell, Lauren J; Westin, Carl-Fredrik

    2011-04-01

    Diffusion tensor magnetic resonance imaging (DTI) is a relatively new technology that is popular for imaging the white matter of the brain. This article provides a basic and broad overview of DTI to enable the reader to develop an intuitive understanding of these types of data, and an awareness of their strengths and weaknesses. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Biomedical Image Analysis: Rapid prototyping with Mathematica

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Almsick, van M.A.

    2004-01-01

    Digital acquisition techniques have caused an explosion in the production of medical images, especially with the advent of multi-slice CT and volume MRI. One third of the financial investments in a modern hospital's equipment are dedicated to imaging. Emerging screening programs add to this flood of

  2. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study multi-spectral image analysis of pellets was performed using LDA, QDA, SNV and PCA on pixel level and mean value of pixels...

  3. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  4. Analysis of licensed South African diagnostic imaging equipment ...

    African Journals Online (AJOL)

    Pan African Medical Journal. Objective: To conduct an analysis of all registered South African (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published ...

  5. MELDOQ - astrophysical image and pattern analysis in medicine: early recognition of malignant melanomas of the skin by digital image analysis. Final report

    International Nuclear Information System (INIS)

    Bunk, W.; Pompl, R.; Morfill, G.; Stolz, W.; Abmayr, W.

    1999-01-01

    Dermatoscopy is at present the most powerful clinical method for the early detection of malignant melanomas. However, its application requires a great deal of expertise and experience. Therefore, a quantitative image analysis system has been developed in order to assist dermatologists in on-site diagnosis and to improve the detection efficiency. Based on a very extensive dataset of dermatoscopic images, recorded in a standardized manner, a number of features for the quantitative characterization of complex patterns in melanocytic skin lesions have been developed. The derived classifier improved the detection rate of malignant and benign melanocytic lesions to over 90% (sensitivity = 91.5% and specificity = 93.4% in the test set), using only six measures. A distinguishing feature of the system is the visualization of the quantified characteristics, which are based on the dermatoscopic ABCD rule. The developed prototype of a dermatoscopic workplace consists of defined procedures for standardized image acquisition and documentation, components for the necessary data pre-processing (e.g. shading and colour correction, removal of artefacts), quantification algorithms (evaluating asymmetry properties, border characteristics, and the content of colours and structural components), and classification routines. In 2000 an industrial partner will begin marketing the digital imaging system, including the specialized software for the early detection of skin cancer, which is suitable for clinicians and practitioners. The nonlinear analysis techniques primarily used (e.g. the scaling index method) can identify and characterize complex patterns in images and have diagnostic potential in many other applications. (orig.) [de

  6. Repeated attempted homicide by administration of drugs documented by hair analysis.

    Science.gov (United States)

    Baillif-Couniou, Valérie; Bartoli, Christophe; Sastre, Caroline; Chèze, Marjorie; Deveaux, Marc; Léonetti, Georges; Pélissier-Alicot, Anne-Laure

    2018-02-01

    Attempted murder by repeated poisoning is quite rare. The authors describe the case of a 62-year-old man who was admitted to an intensive care unit (ICU) for neurological disturbances complicated by inhalation pneumopathy. He presented a loss of consciousness while his wife was visiting him at the ICU (H0). Forty-eight hours later (H48), police officers apprehended the patient's wife pouring a liquid into his fruit salad at the hospital. Toxicological analyses of a blood sample and the infusion equipment (H0), as well as the fruit salad and its container (H48), confirmed the attempted poisoning with cyamemazine (H0) and hydrochloric acid (H48). In order to evaluate how far back the poisoning went, hair analysis was requested and the medical records of the previous six months were also examined. Two 6-cm brown hair strands were sampled and the victim's medical record was seized in order to determine the treatments he had been given during the previous six months. Segmental hair testing on the two 6-cm strands was conducted by GC-MS, LC-DAD and LC-MS/MS (0-2/2-4/4-6 cm; pg/mg). Haloperidol (9200/1391/227), amitriptyline (7450/1850/3260) and venlafaxine (332/560/260), which had never been part of the victim's treatment, were detected, as well as some benzodiazepines (alprazolam, bromazepam, nordazepam); cyamemazine was also detected in all the segments (9960/1610/2367), though only a single-dose administration was reported in the medical records. The toxicological analyses performed at H0 and H48 confirmed the homicide attempts in the ICU. In addition, comparison of the hair analysis results with the medical records confirmed repeated poisoning attempts over the previous six months, which explains the origin of the disorders presented by the victim. This case serves to remind us that repeated attempted murder can be difficult to diagnose and that hair analysis can be an effective way to detect such attempts. Copyright © 2018. Published by Elsevier Ltd.

  7. Tracking and Analysis Framework (TAF) model documentation and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Bloyd, C.; Camp, J.; Conzelmann, G. [and others]

    1996-12-01

    With passage of the 1990 Clean Air Act Amendments, the United States embarked on a policy for controlling acid deposition that has been estimated to cost at least $2 billion. Title IV of the Act created a major innovation in environmental regulation by introducing market-based incentives - specifically, by allowing electric utility companies to trade allowances to emit sulfur dioxide (SO2). The National Acid Precipitation Assessment Program (NAPAP) has been tasked by Congress to assess what Senator Moynihan has termed this 'grand experiment'. Such a comprehensive assessment of the economic and environmental effects of this legislation has been a major challenge. To help NAPAP face this challenge, the U.S. Department of Energy (DOE) has sponsored development of an integrated assessment model, known as the Tracking and Analysis Framework (TAF). This section summarizes TAF's objectives and its overall design.

  8. Analysis of sharpness increase by image noise

    Science.gov (United States)

    Kurihara, Takehito; Aoki, Naokazu; Kobayashi, Hiroyuki

    2009-02-01

    Motivated by reports that image noise can increase perceived sharpness, we investigated how noise affects sharpness perception. We first used natural images of tree bark with different amounts of noise to see whether noise enhances sharpness. Although the results showed that sharpness decreased as the amount of noise increased, some observers seemed to perceive more sharpness with increasing noise, while others did not. We next used 1D and 2D uni-frequency patterns as stimuli in an attempt to reduce this variability in judgment. The results showed that, for higher-frequency stimuli, sharpness decreased as the amount of noise increased, while the sharpness of lower-frequency stimuli increased at a certain noise level. From this result, we hypothesized that image noise might reduce sharpness at edges but improve the sharpness of lower-frequency components or texture in an image. To test this prediction, we experimented again with the natural image used in the first experiment, with stimuli made by applying noise separately to the edge or texture part of the image. The results showed that noise added to edge regions only decreased sharpness, whereas noise added to texture could improve sharpness. We think it is the interaction between noise and texture that sharpens an image.

  9. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method incorporating edge detection, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained with K-means clustering and the minimum-distance rule. The region process is then modeled by an MRF to obtain an image containing regions of different intensity. Gradient values are calculated and the watershed technique is applied. The DIS value is computed for each pixel to identify all edges (weak or strong) in the image, yielding the DIS map. This map serves as prior knowledge about likely region boundaries for the next step (MRF), which produces an image carrying both edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The final edge map is obtained through a merging process based on averaged intensity mean values. Common edge detectors that operate on the MRF-segmented image are applied and the results compared. The final segmentation and edge detection result is one closed boundary per actual region in the image.
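
    The sketch below illustrates two of the stages described above in isolation: an initial k-means intensity clustering supplies markers, and the watershed transform refines region boundaries on a gradient (edge-strength) image. The MRF relaxation and the merge step are omitted, and the cluster count and erosion footprint are illustrative assumptions.

```python
# Hedged sketch: k-means markers + watershed on a gradient image, two of the
# stages in the pipeline above (the MRF and merging steps are omitted).
import numpy as np
from skimage import filters, morphology, segmentation
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
img = rng.random((64, 64))                        # stand-in grey-level image

# Initial segmentation by k-means on intensity (3 clusters, illustrative).
labels0 = KMeans(n_clusters=3, n_init=10).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

# Erode each cluster to keep only confident interior pixels as markers.
markers = np.zeros_like(labels0)
for k in range(3):
    core = morphology.binary_erosion(labels0 == k, morphology.disk(1))
    markers[core] = k + 1

gradient = filters.sobel(img)                     # edge strength (DIS-like map)
final = segmentation.watershed(gradient, markers) # closed region boundaries
print(np.unique(final))
```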

  10. Photoacoustic image reconstruction: a quantitative analysis

    Science.gov (United States)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches, in that it provides spatially resolved information about optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques. Thereby, we focused on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows a good point source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance. It allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also perform several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
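
    As a point of reference for the simplest of the four methods, the sketch below implements delay-and-sum (DnS) beamforming for a linear sensor array: each image pixel accumulates the sensor traces sampled at the one-way acoustic time of flight. The array geometry, sampling rate and sound speed are illustrative assumptions, not the study's acquisition parameters.

```python
# Hedged sketch of delay-and-sum (DnS) photoacoustic reconstruction for a
# linear array at z = 0. Geometry, sampling rate and sound speed are toy
# assumptions; real data would replace the random traces.
import numpy as np

def delay_and_sum(traces, sensor_x, fs, c, grid_x, grid_z):
    """traces: (n_sensors, n_samples); linear array along x at z = 0."""
    image = np.zeros((len(grid_z), len(grid_x)))
    n_samples = traces.shape[1]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # One-way time of flight from pixel (x, z) to each sensor.
            t = np.hypot(sensor_x - x, z) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = traces[valid, idx[valid]].sum()
    return image

fs, c = 40e6, 1500.0                                # 40 MHz sampling, 1500 m/s
sensor_x = np.linspace(-5e-3, 5e-3, 32)             # 32-element line array
rng = np.random.default_rng(5)
traces = rng.normal(size=(32, 2000))                # stand-in RF data
img = delay_and_sum(traces, sensor_x, fs, c,
                    np.linspace(-5e-3, 5e-3, 50), np.linspace(1e-3, 2e-2, 50))
print(img.shape)
```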

  11. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in the analysis of remotely sensed data are accomplished using sophisticated image analysis equipment, and the high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. The processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies of vegetation and wildlife habitat
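
    Two of the listed operations, contrast stretching and image ratioing combined into a false-colour composite, are easy to express digitally; the sketch below does so in numpy on stand-in band data. The percentile limits and band names are illustrative assumptions.

```python
# Hedged numpy sketch of two operations the video system performed in
# analogue hardware: a linear contrast stretch, and a band ratio combined
# into a false-colour composite. Band data are random stand-ins.
import numpy as np

def stretch(band, lo=2, hi=98):
    """Linear contrast stretch between the lo/hi percentiles."""
    a, b = np.percentile(band, [lo, hi])
    return np.clip((band - a) / (b - a + 1e-9), 0, 1)

rng = np.random.default_rng(6)
red, nir = rng.random((2, 256, 256))              # stand-in spectral bands

ratio = nir / (red + 1e-9)                        # image ratioing
composite = np.dstack([stretch(red), stretch(nir), stretch(ratio)])
print(composite.shape, composite.min(), composite.max())
```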

  12. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis

    Science.gov (United States)

    Vest, Joshua R.; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B.

    2016-01-01

    Introduction: Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Methods: Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004–2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. Results: A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = −0.17; 95% confidence interval [CI] = [−0.25, −0.09]; P < .001), and for some outcomes with an increase in imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Conclusions: Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. PMID:26614882
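
    For readers unfamiliar with the pooling step, the sketch below implements the standard DerSimonian-Laird random-effects estimator used in such meta-analyses: study weights shrink as between-study heterogeneity grows. The effect sizes and variances are invented toy inputs, not the review's data.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling. The per-study
# effect sizes and variances below are toy inputs, not the review's data.
import numpy as np

def dersimonian_laird(y, v):
    """y: study effect sizes; v: their within-study variances."""
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)              # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

y = np.array([-0.25, -0.10, -0.30, 0.05])         # toy per-study effects
v = np.array([0.010, 0.020, 0.015, 0.030])
print(dersimonian_laird(y, v))
```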

  13. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) method (generalized K-means clustering for QSVD). It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  14. [Media and drugs: a documental analysis of the Brazilian written media between 1999 and 2003].

    Science.gov (United States)

    Ronzani, Telmo Mota; Fernandes, Ameli Gabriele Batista; Gebara, Carla Ferreira de Paula; Oliveira, Samia Abreu; Scoralick, Natália Nunes; Lourenço, Lélio Moura

    2009-01-01

    This paper aims to analyze the kind of information about drugs published by the Brazilian written media. Articles about drugs published in a nationally circulated magazine between 1999 and 2003 were examined through content analysis. A total of 481 articles were found, with 'consumption' the most frequent topic. The most cited drugs were cocaine (21%), marijuana (19%), alcoholic beverages (12%) and cigarettes (12%). The research also showed that 57% of the articles on cigarettes addressed their harmful effects, whereas alcohol had equal numbers of articles presenting it as good or as bad for human beings, while being considered the most addictive drug (23%). Cocaine, on the other hand, was linked to drug dealing (30%). In general, cocaine and marijuana were the focus of media attention, while alcohol and solvents received less prominence given the epidemiological data on use. There is thus a mismatch between the media focus and the profile of drug consumption in Brazil, which could influence people's beliefs about certain substances and public drug policies in Brazil.

  15. A Comparative Analysis of Information Hiding Techniques for Copyright Protection of Text Documents

    Directory of Open Access Journals (Sweden)

    Milad Taleby Ahvanooey

    2018-01-01

    Full Text Available With the ceaseless use of the web and other online services, copying, sharing, and transmitting digital media over the Internet have become remarkably simple. Since text is one of the main available data sources and the most widely used digital medium on the Internet, a significant portion of websites, books, articles, daily papers, and so on is just plain text. Therefore, copyright protection of plain text remains an open issue that must be improved in order to provide proof of ownership with the desired accuracy. During the last decade, digital watermarking and steganography techniques have been used as alternatives to prevent tampering, distortion, and media forgery, and also to protect both copyright and authentication. This paper presents a comparative analysis of information hiding techniques, especially those focused on modifying the structure and content of digital texts. The characteristics of various text watermarking and text steganography techniques are highlighted along with their applications. In addition, various types of attacks are described and their effects analyzed in order to highlight the advantages and weaknesses of current techniques. Finally, some guidelines and directions are suggested for future work.
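
    To make the flavour of such techniques concrete, the sketch below implements one classic 'open space' text steganography scheme from the family the survey covers: a double space between words encodes a 1 bit, a single space a 0. It is a generic illustration, not a method proposed in the paper, and it is trivially destroyed by whitespace normalisation, which is exactly the kind of attack the survey analyses.

```python
# Hedged sketch of "open space" text steganography: inter-word spacing
# carries the hidden bits. Generic illustration, not the paper's scheme.
import re

def embed(cover, bits):
    words = cover.split()
    assert len(bits) <= len(words) - 1, "cover text too short"
    out = []
    for i, word in enumerate(words[:-1]):
        sep = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(word + sep)
    return "".join(out) + words[-1]

def extract(stego, n_bits):
    gaps = re.findall(r" +", stego)              # runs of spaces between words
    return "".join("1" if len(g) == 2 else "0" for g in gaps[:n_bits])

stego = embed("the quick brown fox jumps over the lazy dog", "1011")
print(extract(stego, 4))                         # -> 1011
```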

  16. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems is stimulating a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement alone. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capturing common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of diagnostically important image properties. The possibility of automatic interpretation of more complex images is investigated in the following chapters. The central topic is the segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  17. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram from natural color images. The observed behaviors are confronted to those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organizations stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful to various tasks of digital image processing, as for instance modeling, classification, indexing.
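
    A hedged sketch of the box-counting side of such an analysis follows: build the 3D colour histogram at several box sizes and estimate generalised dimensions D_q from the scaling of the partition function. The uniform random pixels are a stand-in corresponding to the trivial monofractal reference case (D_q close to 3), and the scale range and q values are illustrative.

```python
# Hedged sketch of box-counting multifractal analysis of a colour image's
# 3D (RGB) histogram. The random pixels are a monofractal stand-in; scale
# range and q values are illustrative assumptions.
import numpy as np

def generalized_dimensions(pixels, qs, sizes=(4, 8, 16, 32)):
    """pixels: (N, 3) colour values in [0, 1)."""
    logs, logZ = [], {q: [] for q in qs}
    for s in sizes:
        hist, _ = np.histogramdd(pixels, bins=(s, s, s),
                                 range=[(0, 1)] * 3)
        p = hist[hist > 0] / hist.sum()            # box occupation measure
        logs.append(np.log(1.0 / s))               # log box size
        for q in qs:
            logZ[q].append(np.log(np.sum(p ** q))) # partition function
    D = {}
    for q in qs:                                   # avoid q = 1 (0/0)
        tau = np.polyfit(logs, logZ[q], 1)[0]      # tau(q) from the slope
        D[q] = tau / (q - 1)
    return D

rng = np.random.default_rng(7)
pixels = rng.random((100_000, 3))                  # uniform colours -> D_q ~ 3
print(generalized_dimensions(pixels, qs=(0.5, 2.0, 3.0)))
```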

  18. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge, information, and models from other fields of science and engineering can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  19. QA for test and analysis, documentation, recovery of data after test

    International Nuclear Information System (INIS)

    Zola, Maurizio

    2001-01-01

    Quality assurance for test and analysis implies the following qualification: the generation and maintenance of evidence to ensure that the equipment will operate on demand to meet the system performance requirements (IEC 780-1984). This is presented through standards and guidelines. The purpose of ISO 9000:2000, Quality management systems - Fundamentals and vocabulary, is to establish a starting point for understanding the standards; it defines the fundamental terms and definitions used in the ISO 9000 family. ISO 9001:2000, Quality management systems - Requirements, is the requirement standard used to assess the ability to meet customer and applicable regulatory requirements and thereby address customer satisfaction. It is now the only standard in the ISO 9000 family against which third-party certification can be carried out. ISO 9004:2000, Quality management systems - Guidelines for performance improvements, provides guidance for continual improvement of a quality management system to benefit all parties through sustained customer satisfaction. ISO 9001:2000 specifies requirements for a quality management system for any organization that needs to demonstrate its ability to consistently provide product that meets customer and applicable regulatory requirements and aims to enhance customer satisfaction; it is used when seeking to establish a management system that provides confidence in the conformance of a product to established or specified requirements. It is suggested that, beginning with ISO 9000:2000, organizations adopt ISO 9001:2000 to achieve a first level of performance. The practices described in ISO 9004:2000 may then be implemented to make the quality management system increasingly effective in achieving the organization's own business goals. ISO 9004:2000 is used to extend the benefits obtained from ISO 9001:2000 to all interested parties. Using the standards in this way will enable them to be related to other management systems

  20. Introducing PLIA: Planetary Laboratory for Image Analysis

    Science.gov (United States)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed in IDL to navigate, process and analyze planetary images. The software has a complete graphical user interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allows image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software's capabilities with Galileo-Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  1. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

    image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, the ... group sizes: Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, Map 2 16 ... command line. This implementation of a Bloom filter uses two arbitrary ... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face
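
    The fragment above mentions a Bloom filter built from two hash functions, presumably used to test cheaply whether an image or feature has been seen before. The report's implementation is not reproduced in the record; the following Python sketch, with arbitrary sizes and hash choices, only illustrates the standard structure:

      # Minimal Bloom filter with two hash functions: set membership with
      # no false negatives and a tunable false-positive rate. Parameters
      # here are arbitrary illustrative choices.
      import hashlib

      class BloomFilter:
          def __init__(self, m=1 << 16):        # m bits of storage
              self.m = m
              self.bits = bytearray(m // 8)

          def _hashes(self, item):
              h1 = int.from_bytes(hashlib.md5(item.encode()).digest()[:8], "big")
              h2 = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
              return h1 % self.m, h2 % self.m

          def add(self, item):
              for h in self._hashes(item):
                  self.bits[h // 8] |= 1 << (h % 8)

          def __contains__(self, item):
              return all(self.bits[h // 8] & (1 << (h % 8))
                         for h in self._hashes(item))

      bf = BloomFilter()
      bf.add("kitchen_frame_0136")              # hypothetical image key
      assert "kitchen_frame_0136" in bf         # no false negatives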

  2. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    Science.gov (United States)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016; the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners, including the Touch-And-Go Camera System (TAGCAMS), a three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to Bennu was performed using custom codes developed in MATLAB, and the in-flight performance of the TAGCAMS was assessed from flight imagery. One specific area of investigation was bad-pixel mapping. A recent phase of the mission, the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable”, possibly under-responsive, pixels using image segmentation analysis. Ongoing work on point-spread-function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. Said analyses will provide a broader understanding
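
    The bad-pixel mapping described was done with custom MATLAB codes that the record does not reproduce. As a hedged illustration of the generic idea, questionable pixels can be flagged where a frame deviates strongly from a local median; the Python sketch below shows one such approach, not the OSIRIS-REx pipeline:

      # Generic bad-pixel detection sketch: flag pixels deviating from a
      # local median by more than k robust standard deviations (MAD).
      # Not the mission's MATLAB code; the threshold is illustrative.
      import numpy as np
      from scipy.ndimage import median_filter

      def bad_pixel_map(frame, k=6.0):
          local_med = median_filter(frame.astype(float), size=5)
          residual = frame - local_med
          sigma = 1.4826 * np.median(np.abs(residual))  # robust sigma
          return np.abs(residual) > k * sigma           # boolean mask

    Masks from many cruise frames can then be intersected so that only pixels that misbehave consistently, rather than transiently (e.g., from cosmic-ray hits), are declared questionable.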

  3. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has advanced considerably in recent years. As with other high-throughput technologies, it faces a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing, its major issues, and the algorithms that are being used or emerging as useful for obtaining data from images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  4. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    The research was performed to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors, and its proximity to anatomic structures, by comparing panoramic images with CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens in 120 patients). The number, shape, orientation and positional relationship of the mesiodens with the maxillary central incisors were investigated in the panoramic images. The proximity to anatomical structures and complications were investigated in the CT images as well. The sex ratio (M : F) was 2.28 : 1 and the mean number of mesiodens per patient was 1.28. A conical shape was found in 84.4% and inverted orientation in 51.9%. There were more cases of encroachment on anatomical structures, especially the nasal floor and nasopalatine duct, when the mesiodens was not superimposed on the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed on the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed on the crown of the central incisor. It is possible to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors from panoramic images, but further details should be confirmed with CT images when necessary.

  5. Synthesis of the IRSN report on its analysis of the safety guidance package (DOrS) of the ASTRID reactor project. Safety guidance document for the ASTRID prototype: Referral to the GPR. Opinion related to the safety guidance document of the ASTRID reactor project. ASTRID prototype: Safety guidance document for the ASTRID prototype

    International Nuclear Information System (INIS)

    Lachaume, Jean-Luc; Niel, Jean-Christophe

    2013-01-01

    A first document indicates the improvement guidelines for the ASTRID project based on the French experience in the field of sodium-cooled fast neutron reactors, addresses the safety objectives as they are presented for the ASTRID project, discusses how the project includes a regulatory and design referential, and how it addresses various aspects of the design approach (ranking and analysis of operating situations, defence in depth, use of probabilistic studies, safety classification and qualification for accident situations, taking internal and external hazards into account, and taking severe accidents into account at the design level). It comments on the guidelines related to the first two barriers, to the main safety functions (control of reactivity and of reactor cooling, containment of radioactive and toxic materials), to dismantling, and to R and D for safety support. A second document is a letter sent by the ASN to the GPR (permanent group of experts in charge of nuclear reactors) about the safety guidance document for the ASTRID prototype. The third document is the answer, containing comments and recommendations by this group about the content of the guidance document, and therefore addresses the same topics as the first document. The last document defines the framework of the approach to this document.

  6. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    Science.gov (United States)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion after intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.
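
    The record does not spell out the velocity and frequency algorithms. As a hypothetical sketch of the simpler of the two quantities, propulsion frequency can be estimated by counting fluorescence peaks in an intensity-versus-time trace sampled at a fixed point on a vessel:

      # Propulsion-frequency sketch: count intensity peaks at one vessel
      # location. Hypothetical illustration, not the paper's algorithm.
      import numpy as np
      from scipy.signal import find_peaks

      def propulsion_frequency(intensity, fps):
          """intensity: 1-D fluorescence trace (numpy array); fps: frames/s."""
          peaks, _ = find_peaks(intensity, prominence=intensity.std())
          return len(peaks) / (len(intensity) / fps)   # pulses per second

    Lymph velocity follows analogously by tracking the leading edge of a fluorescence pulse along the vessel and dividing the distance travelled by the elapsed time.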

  7. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained with the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing object information. The ensemble averages, the autocorrelation functions, and the Wiener spectral densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)
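
    For orientation, the first- and second-order statistics of a nonstationary (inhomogeneous) Poisson process with rate λ(t) take the standard textbook form below; the paper's own notation and screen-film weighting factors may differ:

      % Standard relations for an inhomogeneous Poisson process; the
      % paper's exact notation is not reproduced here.
      \begin{align}
        \mathbb{E}[N(t)] &= \int_0^t \lambda(\tau)\,\mathrm{d}\tau, \\
        R(t_1, t_2) &= \lambda(t_1)\,\delta(t_1 - t_2) + \lambda(t_1)\,\lambda(t_2),
      \end{align}
      % where N(t) counts events up to time t and R is the autocorrelation
      % of the rate process dN/dt. The Wiener spectrum follows by Fourier
      % transforming the fluctuating part of R.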

  8. Automated thermal mapping techniques using chromatic image analysis

    Science.gov (United States)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
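
    Two-color thermographic phosphor methods work because the ratio of emission intensities at two filtered wavelengths depends on temperature but is largely insensitive to illumination and viewing geometry. Below is a minimal Python sketch of the ratio-to-temperature step, with an invented calibration curve; the NASA system's calibration is not given in the record:

      # Two-color ratio method sketch: convert a per-pixel intensity
      # ratio to temperature via a calibration curve. Calibration values
      # below are invented placeholders.
      import numpy as np

      cal_ratio = np.array([0.2, 0.5, 1.0, 1.8, 3.0])           # I1 / I2
      cal_temp = np.array([300.0, 350.0, 400.0, 450.0, 500.0])  # kelvin

      def temperature_map(band1, band2):
          ratio = band1 / np.maximum(band2, 1e-6)   # avoid divide-by-zero
          return np.interp(ratio, cal_ratio, cal_temp)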

  9. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from scanning electron microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples... foundation of the thesis fall in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens, in general, the possibility of quantitative and statistically well-founded measurement of digital microscope images. Herein also lie the conditions...
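
    Of the three theoretical pillars listed, the distance transform is the most compact to demonstrate: each foreground pixel is assigned its distance to the nearest background pixel, which underlies many morphological measurements. A minimal example in Python (not code from the thesis):

      # Euclidean distance transform of a binary "particle" mask; the
      # maximum gives the radius of the largest inscribed disk.
      import numpy as np
      from scipy.ndimage import distance_transform_edt

      mask = np.zeros((9, 9), dtype=bool)
      mask[2:7, 2:7] = True                 # a 5x5 square particle
      dist = distance_transform_edt(mask)   # distance to background
      print(dist.max())                     # inscribed-disk radius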

  10. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals; intrinsic performance of the methods; image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, digitized atlas); methodology of cerebral image superposition (normalization, registration); image networks. [fr]

  11. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
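
    The core pattern behind Image Harvest's grid processing is that each image is an independent job, so per-image trait extraction can be fanned out across many workers. The sketch below imitates that pattern with local Python multiprocessing instead of an actual grid scheduler; it is not IH code, and the file layout and trait function are hypothetical:

      # Fan-out pattern for high-throughput image analysis: one
      # independent job per image. Local stand-in for grid submission.
      from multiprocessing import Pool
      from pathlib import Path

      def measure(path):
          # Placeholder per-image trait extraction (e.g., plant area).
          return path.name, 0.0

      if __name__ == "__main__":
          paths = sorted(Path("images").glob("*.png"))  # hypothetical layout
          with Pool() as pool:
              for name, trait in pool.map(measure, paths):
                  print(name, trait)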

  13. 5-ALA induced fluorescent image analysis of actinic keratosis

    Science.gov (United States)

    Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo

    2010-02-01

    In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce fluorescent images, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region with different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was applied to the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated the biopsy position for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values for both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two different groups: the first group with higher fluorescence signal and E.I. on the AK lesion than on the normal region; the second group with lower fluorescence signal and without a substantial difference in E.I. between the two regions. In the fluorescent color image analysis of facial AK, E.I. images were calculated on both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results might indicate that the different intensities of fluorescence and E.I. among the subjects with AK can be interpreted as different phases of morphological and metabolic changes of the AK lesions.
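
    The record does not give the exact erythema index formulation used. A common simplified form exploits the fact that hemoglobin absorbs strongly in the green band, so redness scales with the log-ratio of red to green reflectance; the sketch below implements that generic form, not necessarily the study's definition:

      # Simplified per-pixel erythema index from an RGB reflectance
      # image: E.I. ~ 100 * log10(R_red / R_green). Generic formulation;
      # the study's exact definition may differ.
      import numpy as np

      def erythema_index(rgb):
          # rgb: array with channels last (..., 3)
          r = rgb[..., 0].astype(float) + 1.0   # +1 avoids log10(0)
          g = rgb[..., 1].astype(float) + 1.0
          return 100.0 * np.log10(r / g)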

  14. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
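
    PhenoRipper's segmentation-free idea is to describe an image not by per-cell measurements but by the kinds of small blocks it contains. A hypothetical Python sketch of that style of profiling (not PhenoRipper's actual code, which also handles foreground selection and multi-channel images):

      # Segmentation-free profiling sketch: learn "block types" with
      # k-means, then describe each image by its histogram of block
      # types. Hypothetical illustration only; img is a 2-D grayscale array.
      import numpy as np
      from sklearn.cluster import KMeans

      def extract_blocks(img, block=20):
          h, w = img.shape[0] // block, img.shape[1] // block
          return np.array([img[i*block:(i+1)*block, j*block:(j+1)*block].ravel()
                           for i in range(h) for j in range(w)])

      def profile(img, kmeans, block=20):
          labels = kmeans.predict(extract_blocks(img, block))
          return np.bincount(labels, minlength=kmeans.n_clusters)

      # Fit block types once on a training set, then profile each image:
      # kmeans = KMeans(n_clusters=32, n_init=10).fit(
      #     np.vstack([extract_blocks(im) for im in training_images]))

    Images with similar profiles are then grouped together, which is how experimental conditions can be clustered by phenotype without ever segmenting a cell.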

  15. Supplement analysis for continued operation of Lawrence Livermore National Laboratory and Sandia National Laboratories, Livermore. Volume 2: Comment response document

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    The US Department of Energy (DOE) prepared a draft Supplement Analysis (SA) for Continued Operation of Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories, Livermore (SNL-L), in accordance with DOE's requirements for implementation of the National Environmental Policy Act of 1969 (NEPA) (10 Code of Federal Regulations [CFR] Part 1021.314). It considers whether the Final Environmental Impact Statement and Environmental Impact Report for Continued Operation of Lawrence Livermore National Laboratory and Sandia National Laboratories, Livermore (1992 EIS/EIR) should be supplemented, whether a new environmental impact statement (EIS) should be prepared, or whether no further NEPA documentation is required. The SA examines the current project and program plans and proposals for LLNL and SNL-L operations to identify new or modified projects or operations or new information for the period from 1998 to 2002 that was not considered in the 1992 EIS/EIR. When such changes, modifications, and information are identified, they are examined to determine whether they could be considered substantial or significant in reference to the 1992 proposed action and the 1993 Record of Decision (ROD). DOE released the draft SA to the public to obtain stakeholder comments and to consider those comments in the preparation of the final SA. DOE distributed copies of the draft SA to those who were known to have an interest in LLNL or SNL-L activities, in addition to those who requested a copy. In response to comments received, DOE prepared this Comment Response Document.

  16. Forensic Analysis of Blue Ballpoint Pen Inks on Questioned Documents by High Performance Thin Layer Chromatography Technique (HPTLC)

    International Nuclear Information System (INIS)

    Lee, L.C.; Siti Mariam Nunurung; Abdul Aziz Ishak

    2014-01-01

    Nowadays, crimes related to forged documents are increasing. Any erasure, addition or modification of document content usually involves the use of a writing instrument such as a ballpoint pen. Hence, there is an evident need to develop a fast and accurate ink analysis protocol to address this problem. This study aimed to determine the discrimination power of the high performance thin layer chromatography (HPTLC) technique for analyzing a set of blue ballpoint pen inks. Ink samples deposited on paper were extracted using methanol and separated with a solvent mixture of ethyl acetate, methanol and distilled water (70:35:30, v/v/v). With this method, a discrimination power of 89.40% was achieved, which confirms that the proposed method is able to differentiate a significant number of pen-pair samples. In addition, the composition of the blue pen inks was found to be homogeneous (RSD < 2.5%) and the proposed method showed good repeatability and reproducibility (RSD < 3.0%). In conclusion, HPTLC is an effective tool for separating blue ballpoint pen inks. (author)
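
    The discrimination power quoted (89.40%) is, by convention, the fraction of all possible sample pairs that the method distinguishes. A small Python sketch of the computation on hypothetical ink profiles:

      # Discriminating power = discriminated pairs / total pairs.
      # Ink "profiles" below are hypothetical Rf-value tuples, not data
      # from the study.
      from itertools import combinations

      inks = {"pen_A": (0.21, 0.55, 0.78),
              "pen_B": (0.21, 0.55, 0.78),   # indistinguishable from A
              "pen_C": (0.18, 0.49, 0.81)}

      pairs = list(combinations(inks, 2))
      discriminated = sum(inks[a] != inks[b] for a, b in pairs)
      print(f"DP = {discriminated / len(pairs):.2%}")   # 66.67% here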

  17. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
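
    The LBP step named above encodes each pixel's neighborhood as a binary code and uses the histogram of codes as the texture feature. A minimal scikit-image sketch of that feature-extraction step (the wavelet-transform branch and the classifier described in the paper are omitted):

      # Uniform LBP texture features for a grayscale SHG image: the
      # normalized code histogram is the feature vector. Uniform LBP
      # with P neighbors yields P + 2 distinct codes.
      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_features(img, P=8, R=1.0):
          codes = local_binary_pattern(img, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
          return hist / hist.sum()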

  18. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market where uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled ones are required, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches towards the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of this kind of commodity product is certain to change existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies dealing in components and materials, such as lens materials and getter materials, to enter the consumer market.

  19. Market Analysis and Consumer Impacts Source Document. Part II. Review of Motor Vehicle Market and Consumer Expenditures on Motor Vehicle Transportation

    Science.gov (United States)

    1980-12-01

    This source document on motor vehicle market analysis and consumer impacts consists of three parts. Part II consists of studies and reviews on: motor vehicle sales trends; motor vehicle fleet life and fleet composition; car buying patterns of the busi...

  20. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be accessed electronically from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However, some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual Reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D. students inform the CMS Secretariat a...