WorldWideScience

Sample records for document image analysis

  1. Document image analysis: A primer

    Indian Academy of Sciences (India)

    Abstract. Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character Recognition (OCR) software that recognizes characters in a scanned document ...

  2. Document image analysis: A primer

    Indian Academy of Sciences (India)

    OCR makes it possible for the user to edit or search the document's contents. In this paper we briefly describe various components of a document analysis system. Many of these basic building blocks are found in most document analysis systems, irrespective of the particular domain or language to which they are applied.

  3. Document image analysis: A primer

    Indian Academy of Sciences (India)

    different computers and software such that even their electronic formats are incompatible. Some include both formatted text and tables as … entries into electronic documents. Archives of paper documents in … it is very unlikely that qveen would occur in the English language dictionary. Furthermore, one could use knowledge ...

  4. Near infrared hyperspectral imaging for forensic analysis of document forgery.

    Science.gov (United States)

    Silva, Carolina S; Pimentel, Maria Fernanda; Honorato, Ricardo S; Pasquini, Celio; Prats-Montalbán, José M; Ferrer, Alberto

    2014-10-21

    Hyperspectral images in the near infrared range (HSI-NIR) were evaluated as a nondestructive method to detect fraud in documents. Three different types of typical forgeries were simulated by (a) obliterating text, (b) adding text and (c) approaching the crossing lines problem. The simulated samples were imaged in the range of 928-2524 nm with spectral and spatial resolutions of 6.3 nm and 10 μm, respectively. After data pre-processing, different chemometric techniques were evaluated for each type of forgery. Principal component analysis (PCA) was performed to elucidate the first two types of adulteration, (a) and (b). Moreover, Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) was used in an attempt to improve the results of the type (a) obliteration and type (b) adding text problems. Finally, MCR-ALS and Partial Least Squares-Discriminant Analysis (PLS-DA), employed as a variable selection tool, were used to study the type (c) forgeries, i.e. crossing lines problem. Type (a) forgeries (obliterating text) were successfully identified in 43% of the samples using both the chemometric methods (PCA and MCR-ALS). Type (b) forgeries (adding text) were successfully identified in 82% of the samples using both the methods (PCA and MCR-ALS). Finally, type (c) forgeries (crossing lines) were successfully identified in 85% of the samples. The results demonstrate the potential of HSI-NIR associated with chemometric tools to support document forgery identification.
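
    A minimal Python sketch of the per-pixel PCA screening described above, assuming the hyperspectral cube is a NumPy array of shape (rows, cols, bands); the function name and defaults are illustrative, not from the paper:

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_score_maps(cube, n_components=2):
            # Unfold the (rows, cols, bands) cube into a pixel-by-band matrix,
            # fit a PCA, and fold the scores back into score-map images.
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands)
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)        # shape (rows*cols, n_components)
            return scores.reshape(rows, cols, n_components)

        # Pixels whose scores separate from the bulk paper/ink mixture may
        # indicate a second ink (obliterated or added text).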

  5. Ancient administrative handwritten documents: X-ray analysis and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Albertin, F., E-mail: fauzia.albertin@epfl.ch [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Astolfo, A. [Paul Scherrer Institut (PSI), Villigen (Switzerland); Stampanoni, M. [Paul Scherrer Institut (PSI), Villigen (Switzerland); ETHZ, Zürich (Switzerland); Peccenini, Eva [University of Ferrara (Italy); Technopole of Ferrara (Italy); Hwu, Y. [Academia Sinica, Taipei, Taiwan (China); Kaplan, F. [Ecole Polytechnique Fédérale de Lausanne (EPFL) (Switzerland); Margaritondo, G. [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-01-30

    The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, the objective of the Venice Time Machine project.

  6. Fourier transform hyperspectral visible imaging and the nondestructive analysis of potentially fraudulent documents.

    Science.gov (United States)

    Brauns, Eric B; Dyer, R Brian

    2006-08-01

    The work presented in this paper details the design and performance characteristics of a new hyperspectral visible imaging technique. Rather than using optical filters or a dispersing element, this design implements Fourier transform spectroscopy to achieve spectral discrimination. One potentially powerful application of this new technology is the non-destructive analysis and authentication of written and printed documents. Document samples were prepared using red, blue, and black inks. The samples were later altered using a different ink of the same color. While the alterations are undetectable to the naked eye, the alterations involving the blue and black inks were easily detected when the spectrally resolved images were viewed. Analysis of the samples using the red inks was unsuccessful. A 2004-series US 20-dollar bill was imaged to demonstrate the application to document authentication. The results argue that counterfeit detection and quality control during printing are plausible applications of Fourier transform hyperspectral visible imaging. All of the images were subjected to fuzzy c-means cluster analysis in an effort to objectively analyze and automate image analysis. Our results show that cluster analysis can distinguish image features that have remarkably similar visible transmission spectra.
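
    The fuzzy c-means clustering used here to automate image analysis can be sketched in plain NumPy as below; the cluster count, the fuzzifier m and the stopping rule are illustrative defaults, not the paper's settings:

        import numpy as np

        def fuzzy_cmeans(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
            # X is (n_samples, n_features), e.g. one unfolded pixel spectrum
            # per row; returns cluster centers and memberships U (n_samples, c).
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U_new = 1.0 / (d ** (2 / (m - 1)))      # standard FCM update
                U_new /= U_new.sum(axis=1, keepdims=True)
                if np.abs(U_new - U).max() < tol:
                    U = U_new
                    break
                U = U_new
            return centers, U

        # Soft memberships (rather than hard labels) let pixels with nearly
        # identical spectra still be separated by their dominant cluster.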

  7. Document image mosaicing: A novel approach

    Indian Academy of Sciences (India)

    …image analysis and processing require mosaicing of the split images to obtain a complete final image of the document. Hence, document image mosaicing is the process of merging split images, obtained by scanning different parts of a single large document with some sort of overlapping region (OR), to produce ...
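
    A rough Python illustration of merging two split scans through their overlapping region using OpenCV template matching; it assumes grayscale scans of equal height related by a purely horizontal shift, which is a simplification of the general problem:

        import cv2
        import numpy as np

        def mosaic_horizontal(left, right, strip=80):
            # Locate the overlapping region by matching the leading strip of
            # the right scan inside the left scan, then paste the remainder.
            probe = right[:, :strip]
            res = cv2.matchTemplate(left, probe, cv2.TM_CCOEFF_NORMED)
            _, _, _, (x, y) = cv2.minMaxLoc(res)     # best match column in `left`
            canvas_w = max(left.shape[1], x + right.shape[1])
            canvas = np.zeros((left.shape[0], canvas_w), dtype=left.dtype)
            canvas[:, :left.shape[1]] = left
            canvas[:, x:x + right.shape[1]] = right  # overwrite from the seam on
            return canvas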

  8. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource consuming process is the Optical Character Recognition (OCR) process, which is used to extract text in mobile phone captured images. In this study, our goal is to compare the in-phone and the remote server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and other processes run on the mobile phone. Results of the experiments show that the remote server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote server approach overall outperforms the in-phone approach in terms of selected speed and correct recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote server approach performs better than the in-phone approach in terms of speed and acceptable correct recognition metrics.
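
    The remote-server approach can be sketched as below; the endpoint URL, upload field name and the downscale/compression settings are hypothetical placeholders, not details taken from the study:

        import io
        import requests
        from PIL import Image

        OCR_URL = "http://example.org/ocr"   # hypothetical OCR service endpoint

        def remote_ocr(path, max_width=1024, quality=60):
            # Downscale and JPEG-compress the captured image to cut the upload
            # delay, then post it to the server-side OCR process.
            img = Image.open(path).convert("L")
            if img.width > max_width:
                ratio = max_width / img.width
                img = img.resize((max_width, int(img.height * ratio)))
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            reply = requests.post(OCR_URL, files={"image": buf.getvalue()})
            return reply.text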

  9. Imaging and Documenting Gammarideans

    Directory of Open Access Journals (Sweden)

    Carolin Haug

    2011-01-01

    We give an overview of available techniques for imaging and documentation applied to gammarideans and discuss their advantages and disadvantages. Although recent techniques, such as confocal laser scanning microscopy (cLSM), focused ion beam scanning electron microscopy (FIB SEM), or computed microtomography (μCT), provide new possibilities to detect and document structures, these high-tech devices are expensive, and access to them is often limited. Alternatively, there are many possibilities to enhance the capabilities of established techniques such as macrophotography and light microscopy. We discuss improvements of the illumination with polarized light and the possibilities of utilizing the autofluorescence of animals such as the gammarideans. In addition, we present software-based enhancing tools such as image fusion and image stitching.

  10. Robust document image binarization technique for degraded document images.

    Science.gov (United States)

    Su, Bolan; Lu, Shijian; Tan, Chew Lim

    2013-04-01

    Segmentation of text from badly degraded document images is a very challenging task due to the high inter/intra-variation between the document background and the foreground text of different document images. In this paper, we propose a novel document image binarization technique that addresses these issues by using adaptive image contrast. The adaptive image contrast is a combination of the local image contrast and the local image gradient that is tolerant to text and background variation caused by different types of document degradations. In the proposed technique, an adaptive contrast map is first constructed for an input degraded document image. The contrast map is then binarized and combined with Canny's edge map to identify the text stroke edge pixels. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window. The proposed method is simple, robust, and involves minimum parameter tuning. It has been tested on three public datasets that are used in the recent document image binarization contest (DIBCO) 2009 & 2011 and handwritten-DIBCO 2010 and achieves accuracies of 93.5%, 87.8%, and 92.03%, respectively, that are significantly higher than or close to that of the best-performing methods reported in the three contests. Experiments on the Bickley diary dataset that consists of several challenging bad quality document images also show the superior performance of our proposed method, compared with other techniques.
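
    A condensed Python sketch of the described pipeline, assuming an 8-bit grayscale input; the window size, the contrast/gradient weighting and the Canny thresholds are illustrative choices rather than the paper's tuned values:

        import cv2
        import numpy as np

        def binarize_adaptive_contrast(gray, win=15, alpha=0.5):
            g = gray.astype(np.float64)
            k = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
            local_max = cv2.dilate(g, k)
            local_min = cv2.erode(g, k)
            contrast = (local_max - local_min) / (local_max + local_min + 1e-6)
            grad = (local_max - local_min) / (local_max.max() + 1e-6)
            combined = alpha * contrast + (1 - alpha) * grad   # adaptive contrast map
            # Binarize the map (Otsu in the paper; mean used here for brevity)
            # and keep only pixels that also lie on Canny edges.
            edge_map = (combined > combined.mean()) & (cv2.Canny(gray, 50, 150) > 0)
            # Local threshold: mean intensity of edge pixels inside each window.
            edge_sum = cv2.boxFilter(g * edge_map, -1, (win, win), normalize=False)
            edge_cnt = cv2.boxFilter(edge_map.astype(np.float64), -1, (win, win),
                                     normalize=False)
            thresh = edge_sum / np.maximum(edge_cnt, 1)
            # Dark pixels below the local edge mean become text (255 in the mask).
            return ((g <= thresh) & (edge_cnt > 0)).astype(np.uint8) * 255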

  11. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

    The conversion of a document image to its electronic version is a very important problem in saving, searching and retrieval applications in office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of the image regions, such as document and non-document regions, is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on wavelet transform and thresholding. Texture features such as correlation, homogeneity and entropy extracted from the co-occurrence matrix, and also two new features based on the wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm achieves an accuracy rate of 97.5% on classification of the regions.
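
    The co-occurrence features named above (correlation, homogeneity, entropy) can be computed with scikit-image as sketched below; the quantization level and the single distance/angle pair are illustrative:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(region, levels=64):
            # `region` is an 8-bit grayscale patch; quantize it, build the
            # gray-level co-occurrence matrix, and read off three features.
            q = (region // (256 // levels)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not in graycoprops
            return {
                "correlation": graycoprops(glcm, "correlation")[0, 0],
                "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
                "entropy": entropy,
            }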

  12. Benchmarking of Document Image Analysis Tasks for Palm Leaf Manuscripts from Southeast Asia

    Directory of Open Access Journals (Sweden)

    Made Windu Antara Kesiman

    2018-02-01

    This paper presents a comprehensive test of the principal tasks in document image analysis (DIA), starting with binarization, text line segmentation, and isolated character/glyph recognition, and continuing on to word recognition and transliteration, for a new and challenging collection of palm leaf manuscripts from Southeast Asia. This research presents and is performed on a complete dataset collection of Southeast Asian palm leaf manuscripts. It contains three different scripts: Khmer script from Cambodia, and Balinese script and Sundanese script from Indonesia. The binarization task is evaluated on many methods, up to the most recent entries in binarization competitions. The seam carving method is evaluated for the text line segmentation task, compared to a recently proposed text line segmentation method for palm leaf manuscripts. For the isolated character/glyph recognition task, the evaluation covers a handcrafted feature extraction method, a neural network with unsupervised feature learning, and a Convolutional Neural Network (CNN) based method. Finally, a Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) based method is used to analyze the word recognition and transliteration tasks for the palm leaf manuscripts. The results from all experiments provide the latest findings and a quantitative benchmark for palm leaf manuscript analysis for researchers in the DIA community.

  13. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
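
    A toy Python version of the idea: annotate each binarized word image with a code built from where ink falls relative to an x-height band. The real scheme also uses character holes and water reservoirs, which are omitted here, and the band fractions are assumptions:

        import numpy as np

        def word_shape_code(word, x_band=(0.3, 0.7)):
            # `word` is a binary word image with text pixels = 1. Per column:
            # 'a' if ink rises above the x-height band (ascender), 'd' if it
            # drops below (descender), 'x' if it stays inside, '-' if empty.
            h = word.shape[0]
            top, bottom = int(x_band[0] * h), int(x_band[1] * h)
            code = []
            for col in word.T:
                ink = np.flatnonzero(col)
                if ink.size == 0:
                    code.append("-")
                elif ink[0] < top:
                    code.append("a")
                elif ink[-1] > bottom:
                    code.append("d")
                else:
                    code.append("x")
            return "".join(code)

        # Two word images match if their codes match, so retrieval can run on
        # the codes alone, without any character recognition.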

  14. Analysis of Design Documentation

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp

    1998-01-01

    …has been established where we seek to identify useful design work patterns by retrospective analyses of documentation created during design projects. This paper describes the analysis method, a tentatively defined metric to evaluate identified work patterns, and presents results from the first… analysis accomplished…

  15. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground and a vital-information imposed layer. The template document image consists of a null background and a general-information foreground layer. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and the template image using the convex hull. The vertices of the convex polygons of input and template are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling. Translation is handled by a tight crop. For every transformation of the input vertices, the minimum Hausdorff distance (MHD) is computed. The minimum Hausdorff distance identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since transformation is an estimation process, the components in the input image do not overlay exactly on the components in the template; therefore a connected component technique is applied to extract contour boxes at the word level to identify partially overlapping components. Geometrical features such as density, area and degree of overlap are extracted and compared between partially overlapping components to identify and eliminate components common to the input image and the template image. The residue constitutes the imposed layer. Experimental results indicate the efficacy of the proposed model and its computational complexity. Experiments have been conducted on a variety of filled-in forms, applications and bank cheques. Data sets have been generated as test sets for comparative analysis.
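
    The rotation/scale search scored by minimum Hausdorff distance might look like the following Python sketch; the angle and scale grids are supplied by the caller, and the function name is illustrative:

        import cv2
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def best_alignment(input_pts, template_pts, angles, scales):
            # Convex hull vertices of the input and template content.
            hull_in = cv2.convexHull(input_pts.astype(np.float32))[:, 0, :]
            hull_tp = cv2.convexHull(template_pts.astype(np.float32))[:, 0, :]
            best = (None, np.inf)
            for a in angles:
                c, s = np.cos(a), np.sin(a)
                R = np.array([[c, -s], [s, c]])
                for k in scales:
                    moved = k * (hull_in @ R.T)
                    # Symmetric Hausdorff distance between the two hulls.
                    d = max(directed_hausdorff(moved, hull_tp)[0],
                            directed_hausdorff(hull_tp, moved)[0])
                    if d < best[1]:
                        best = ((a, k), d)
            return best   # ((angle, scale), Hausdorff distance)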

  16. Document image mosaicing: A novel approach

    Indian Academy of Sciences (India)

    ... or copying machines in a single stretch because of their inherent limitations. This results in the capture of a large document in terms of split components of a document image. Hence, the need is to mosaic the split components back into the original document image. In this paper, we present a novel and simple ...

  17. Imaging and visual documentation in medicine

    International Nuclear Information System (INIS)

    Wamsteker, K.; Jonas, U.; Veen, G. van der; Waes, P.F.G.M. van

    1987-01-01

    DOCUMED EUROPE '87 was organized to provide information to the physician on the constantly progressing developments in medical imaging technology. Leading specialists lectured on the state-of-the-art of imaging technology and visual documentation in medicine. This book presents a collection of the papers presented at the conference.

  18. Stamp Detection in Color Document Images

    DEFF Research Database (Denmark)

    Micenkova, Barbora; van Beusekom, Joost

    2011-01-01

    …moreover, it can be imprinted with a variable quality and rotation. Previous methods were restricted to detection of stamps of particular shapes or colors. The method presented in the paper includes segmentation of the image by color clustering and subsequent classification of candidate solutions… by geometrical and color-related features. The approach allows for differentiation of stamps from other color objects in the document such as logos or texts. For the purpose of evaluation, a data set of 400 document images has been collected, annotated and made public. With the proposed method, recall of 83…

  19. Indian Language Document Analysis and Understanding

    Indian Academy of Sciences (India)

    …character recognition in different Indian languages, pre- and post-processing techniques tailored for Indian languages and user-friendly interfaces for better utilisation of the output of document analysis systems, all need attention from Indian scientists working in Image Processing and Pattern Recognition. It is with this ...

  20. Document analysis with neural net circuits

    Science.gov (United States)

    Graf, Hans Peter

    1994-01-01

    Document analysis is one of the main applications of machine vision today and offers great opportunities for neural net circuits. Despite more and more data processing with computers, the number of paper documents is still increasing rapidly. A fast translation of data from paper into electronic format is needed almost everywhere, and when done manually, this is a time consuming process. Markets range from small scanners for personal use to high-volume document analysis systems, such as address readers for the postal service or check processing systems for banks. A major concern with present systems is the accuracy of the automatic interpretation. Today's algorithms fail miserably when noise is present, when print quality is poor, or when the layout is complex. A common approach to circumvent these problems is to restrict the variations of the documents handled by a system. In our laboratory, we had the best luck with circuits implementing basic functions, such as convolutions, that can be used in many different algorithms. To illustrate the flexibility of this approach, three applications of the NET32K circuit are described in this short viewgraph presentation: locating address blocks, cleaning document images by removing noise, and locating areas of interest in personal checks to improve image compression. Several of the ideas realized in this circuit that were inspired by neural nets, such as analog computation with a low resolution, resulted in a chip that is well suited for real-world document analysis applications and that compares favorably with alternative, 'conventional' circuits.

  1. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital

    International Nuclear Information System (INIS)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M.

    2012-01-01

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  2. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

  3. Performance evaluation methodology for historical document image binarization.

    Science.gov (United States)

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
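
    For reference, the unweighted pixel-based recall and precision that the proposed scheme builds on can be computed as below; the paper's weighting scheme and its additional metrics are omitted:

        import numpy as np

        def pixel_recall_precision(result, ground_truth):
            # Binary masks with text pixels = True; F-measure combines both.
            res = result.astype(bool)
            gt = ground_truth.astype(bool)
            tp = np.sum(res & gt)
            recall = tp / max(gt.sum(), 1)        # found text / all GT text
            precision = tp / max(res.sum(), 1)    # found text / all detected
            f = 2 * recall * precision / max(recall + precision, 1e-12)
            return recall, precision, f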

  4. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

    Text segmentation from degraded historical Indus script images helps an Optical Character Recognizer (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background in such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive compared to non-text ones. To achieve this, we propose a new combination of Sobel and Laplacian for enhancing degraded low-contrast pixels. Then the proposed method generates skeletons for text components in enhanced images to reduce the computational burden, which in turn helps in studying component structures efficiently. We propose to study the cursiveness of components based on branch information to remove false text components. The proposed method introduces a nearest neighbor criterion for grouping components in the same line, which results in clusters. Furthermore, the proposed method classifies these clusters into text and non-text clusters based on characteristics of text components. We evaluate the proposed method on a large dataset containing varieties of images. The results are compared with existing methods to show that the proposed method is effective in terms of recall and precision.
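
    A scikit-image sketch of the enhancement-and-skeleton step; the normalized sum of the two operators and the percentile threshold are illustrative stand-ins for the paper's exact combination:

        import numpy as np
        from skimage.filters import sobel, laplace
        from skimage.morphology import skeletonize

        def enhance_and_skeletonize(gray, q=90):
            # Combine Sobel gradient magnitude with the absolute Laplacian to
            # lift low-contrast strokes, binarize at a percentile, skeletonize.
            s = sobel(gray.astype(float))
            lap = np.abs(laplace(gray.astype(float)))
            enhanced = s / (s.max() + 1e-12) + lap / (lap.max() + 1e-12)
            binary = enhanced > np.percentile(enhanced, q)
            return skeletonize(binary)   # one-pixel-wide component skeletons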

  5. Efficient document-image super-resolution using convolutional ...

    Indian Academy of Sciences (India)

    Ram Krishna Pandey

    2018-03-06

    …so that the OCR is able to obtain better recognition accuracy on low-resolution (LR) document images. We have performed experiments and seen that OCR performance on high-resolution (HR) document images obtained …

  6. Groundtruth Generation and Document Image Degradation

    National Research Council Canada - National Science Library

    Zi, Gang

    2005-01-01

    …We have developed a system which uses the language support of the MS Windows operating system combined with custom print drivers to render TIFF images simultaneously with Windows Enhanced Metafile directives…

  7. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development… The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries…

  8. Image Documentation in Gastrointestinal Endoscopy: Review of Recommendations.

    Science.gov (United States)

    Marques, Susana; Bispo, Miguel; Pimentel-Nunes, Pedro; Chagas, Cristina; Dinis-Ribeiro, Mário

    2017-11-01

    In recent years, endoscopic image documentation has gained an important role in gastrointestinal (GI) endoscopic reporting and has become an integral aspect of quality control. Since 2001, several important guidelines and statements, some from major endoscopic societies, have been published to standardize endoscopic image documentation. Therefore, and according to the most recent recommendations of the European Society of Gastrointestinal Endoscopy, we propose a set of images to be routinely captured in upper and lower GI endoscopy. Systematic acquisition of 10 and 9 photographs of specific landmarks is recommended in upper-GI endoscopy and colonoscopy, respectively. In addition to photo documentation of the normal endoscopic features, imaging of pathologic findings is also advocated. Considering accurate and adequate image documentation as an essential part of endoscopic reporting, it should be systematically performed in upper and lower GI endoscopy.

  9. IHE cross-enterprise document sharing for imaging: design challenges

    Science.gov (United States)

    Noumeir, Rita

    2006-03-01

    Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Records (EHR). This profile proposes an architecture based on a central Registry that holds metadata describing published Documents residing in one or multiple Document Repositories. As medical images constitute an important part of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.

  10. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure based on a semi-automatic procedure.

  11. Efficient document-image super-resolution using convolutional ...

    Indian Academy of Sciences (India)

    Experiments performed by us using optical character recognizers (OCRs) show that the character-level accuracy of the OCR reduces significantly with a decrease in the spatial resolution of document images. There are real-life scenarios where high-resolution (HR) images are not available, and where it is desirable to enhance …

  12. Method and apparatus for imaging and documenting fingerprints

    Science.gov (United States)

    Fernandez, Salvador M.

    2002-01-01

    The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.

  13. Digital image watermarking for printed and scanned documents

    Science.gov (United States)

    Thongkor, Kharittha; Amornraksa, Thumrongrat

    2017-07-01

    We present a spatial-domain image watermarking method for printed and scanned documents. In the watermark embedding process, a watermark image is embedded into the blue color component of a white color image. The result is overlaid with information and printed out on a piece of paper. In the watermark extraction process, a printed document is first scanned back to obtain an electronic copy. Geometric distortions from the printing and scanning processes are then reduced by an image registration technique based on affine transformation. All watermarked components are used to determine a threshold for watermark bit extraction. The performance of the proposed watermarking method was investigated for different scanning resolutions, printing quality modes, and printable materials. Watermark extraction from ripped, crumpled, and wet documents was also investigated. The promising results demonstrate the effectiveness of the proposed method.
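
    The blue-channel embedding and a non-blind extraction can be sketched as below, assuming 8-bit RGB arrays and a watermark bit plane of the same spatial size; the strength constant is an illustrative visibility/robustness trade-off, not the paper's tuned value:

        import numpy as np

        def embed_blue(image_rgb, wm_bits, strength=8):
            # Spatial-domain embedding in the spirit of the paper: nudge the
            # blue channel up for bit 1 and down for bit 0.
            out = image_rgb.astype(np.int16).copy()
            wm = np.where(wm_bits > 0, strength, -strength)
            out[..., 2] = np.clip(out[..., 2] + wm, 0, 255)   # channel 2 = blue
            return out.astype(np.uint8)

        def extract_blue(watermarked, original):
            # Non-blind extraction sketch: the sign of the blue-channel
            # difference recovers each bit. Real print-scan recovery also
            # needs the registration step the abstract describes.
            diff = (watermarked[..., 2].astype(np.int16)
                    - original[..., 2].astype(np.int16))
            return (diff > 0).astype(np.uint8)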

  14. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Image processing is one of the leading technologies in computer applications. Image processing is a type of signal processing; the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medicine, computer-aided design (CAD), research, crime investigation and the military. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious one; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with details about the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  15. Technical document characterization by data analysis

    International Nuclear Information System (INIS)

    Mauget, A.

    1993-05-01

    Nuclear power plants possess documents analyzing all the plant systems, which represents a vast quantity of paper. Analysis of textual data can enable a document to be classified by grouping the texts containing the same words. These methods are used on system manuals for feasibility studies. The system manual is analyzed by LEXTER and the terms it has selected are examined. We first classify according to style (sentences containing general words, technical sentences, etc.), and then according to terms. However, it will not be possible to continue in this fashion for the 100 existing system manuals, for lack of sufficient storage capacity. Another solution is being developed. (author)

  16. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital; Bildgebung bei wirbelsaeulenverletzten Kindern und jungen Erwachsenen. Eine Analyse von Umfeld, Standards und Wiederholungsuntersuchungen bei Patientenverlegungen

    Energy Technology Data Exchange (ETDEWEB)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M. [Berufsgenossenschaftliches Universitaetsklinikum Bergmannshell, Bochum (Germany). Inst. fuer Diagnostische Radiologie, Interventionelle Radiologie und Nuklearmedizin

    2012-09-15

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  17. Document understanding using layout styles of title page images

    Science.gov (United States)

    Sharpe, Louis H., II; Manns, Basil

    1992-08-01

    An important problem in the application of compound document architectures is the input of data from raster images. One technique is to use visual, syntactic cues found in the layout of the raster document to infer its logical structure or semantics. Another is to use context derived from characters recognized within a given block of raster data. Both character- and image-based information are considered here. A well-constrained environment is defined for use in developing rules that can be applied to basic book title page understanding. This paper identifies the attributes of title page layout objects which aid in mapping them into the fields of a simple bibliographic format. Using as input the raster images of the title page and the verso of the title page, along with the ASCII output of a generic character recognition engine from these same images, a system of rules is defined for generating a marked-up text wherein key bibliographic fields may be identified.

  18. Convolutional Neural Networks for Page Segmentation of Historical Document Images

    OpenAIRE

    Chen, Kai; Seuret, Mathias

    2017-01-01

    This paper presents a Convolutional Neural Network (CNN) based page segmentation method for handwritten historical document images. We consider page segmentation as a pixel labeling problem, i.e., each pixel is classified as one of the predefined classes. Traditional methods in this area rely on carefully hand-crafted features or large amounts of prior knowledge. In contrast, we propose to learn features from raw image pixels using a CNN. While many researchers focus on developing deep CNN ar...
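
    In the spirit of the approach (each pixel labeled from raw pixels by a learned CNN), a minimal patch classifier in PyTorch; the architecture, the 32x32 patch size and the four-class output are illustrative, not the authors' design:

        import torch
        import torch.nn as nn

        class PatchLabeler(nn.Module):
            # Classify the center pixel of a grayscale patch into one of
            # `n_classes` layout classes (e.g. background, text, decoration).
            def __init__(self, n_classes=4):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),
                    nn.Linear(32 * 8 * 8, n_classes),  # assumes 32x32 inputs
                )

            def forward(self, x):          # x: (batch, 1, 32, 32)
                return self.net(x)

        logits = PatchLabeler()(torch.randn(8, 1, 32, 32))   # -> (8, 4)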

  19. Oblique aerial images and their use in cultural heritage documentation

    DEFF Research Database (Denmark)

    Höhle, Joachim

    2013-01-01

    Oblique images enable three-dimensional (3d) modelling of objects with vertical dimensions. Such imagery is nowadays systematically taken of cities and may easily become available. The documentation of cultural heritage can take advantage of these sources of information. Two new oblique camera...

  20. Current and emerging standards in document imaging and storage

    Science.gov (United States)

    Baronas, Jean M.

    1992-05-01

    Standards publications being developed by scientists, engineers, and business managers in the Association for Information and Image Management (AIIM) standards committees can be applied to 'electronic image management' (EIM) processes including: document image transfer, retrieval and evaluation; optical disk and document scanning; and document design and conversion. When combined with EIM system planning and operations, standards can help generate image databases that are interchangeable among a variety of systems. AIIM is an accredited American National Standards Institute (ANSI) standards developer with more than twenty committees. The committees comprise 300 volunteers representing users, vendors, and manufacturers. The standards publications that are developed in these committees have national acceptance. They provide the basis for international harmonization in the development of new International Organization for Standardization (ISO) standards. Until standard implementation parameters are established, the application of different approaches to image management causes uncertainty in EIM system compatibility, calibration, performance, and upward compatibility. The AIIM standards for these applications can be used to decrease the uncertainty, successfully integrate imaging processes, and promote 'open systems.' This paper describes AIIM's EIM standards and a new effort at AIIM, a database on standards projects in a wide framework, including image capture, recording, processing, duplication, distribution, display, evaluation, preservation, and media. The AIIM Imagery Database covers imaging standards being developed by many organizations in many different countries. It contains standards publications' dates, origins, related national and international projects, status, keywords, and abstracts. The ANSI Image Technology Standards Board (ITSB) requested that such a database be established, as did the International Standards Organization ...

  1. Oblique Aerial Images and Their Use in Cultural Heritage Documentation

    Science.gov (United States)

    Höhle, J.

    2013-07-01

    Oblique images enable three-dimensional (3d) modelling of objects with vertical dimensions. Such imagery is nowadays systematically taken of cities and may easily become available. The documentation of cultural heritage can take advantage of these sources of information. Two new oblique camera systems are presented and characteristics of such images are summarized. A first example uses images of a new multi-camera system for the derivation of orthoimages, façade plots with photo texture, 3d scatter plots, and dynamic 3d models of a historic church. The applied methodology is based on automatically derived point clouds of high density. Each point is supplemented with colour and other attributes. The problems experienced in these processes and the solutions to these problems are presented. The applied tools are a combination of professional tools, free software, and our own software developments. Special attention is given to the quality of the input images. Investigations are carried out on edges in the images. The combination of oblique and nadir images enables new possibilities in the processing. The use of the near-infrared channel besides the red, green, and blue channels of the applied multispectral imagery is also of advantage. Vegetation close to the object of interest can easily be removed. A second example describes the modelling of a monument by means of a non-metric camera and a standard software package. The presented results concern the achieved geometric accuracy and image quality. It is concluded that the use of oblique aerial images together with image-based processing methods yields new possibilities for economical and accurate documentation of tall monuments.

  2. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  3. Advanced biomedical image analysis

    CERN Document Server

    Haidekker, Mark A

    2010-01-01

    "This book covers the four major areas of image processing: Image enhancement and restoration, image segmentation, image quantification and classification, and image visualization. Image registration, storage, and compression are also covered. The text focuses on recently developed image processing and analysis operators and covers topical research"--Provided by publisher.

  4. AVIS: analysis method for document coherence

    International Nuclear Information System (INIS)

    Henry, J.Y.; Elsensohn, O.

    1994-06-01

    The present document intends to give a short insight into AVIS, a method which makes it possible to verify the quality of technical documents. The paper includes the presentation of the applied approach based on the K.O.D. method, the definition of quality criteria for a technical document, as well as a description of the means of evaluating these criteria. (authors)

  5. Forensic document analysis using scanning microscopy

    Science.gov (United States)

    Shaffer, Douglas K.

    2009-05-01

    The authentication and identification of the source of a printed document(s) can be important in forensic investigations involving a wide range of fraudulent materials, including counterfeit currency, travel and identity documents, business and personal checks, money orders, prescription labels, travelers checks, medical records, financial documents and threatening correspondence. The physical and chemical characterization of document materials - including paper, writing inks and printed media - is becoming increasingly relevant for law enforcement agencies, with the availability of a wide variety of sophisticated commercial printers and copiers which are capable of producing fraudulent documents of extremely high print quality, rendering these difficult to distinguish from genuine documents. This paper describes various applications and analytical methodologies using scanning electron microscopy/energy dispersive (X-ray) spectroscopy (SEM/EDS) and related technologies for the characterization of fraudulent documents, and illustrates how their morphological and chemical profiles can be compared to (1) authenticate and (2) link forensic documents with a common source(s) in their production history.

  6. Rapid Exploitation and Analysis of Documents

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, D J; Andrzejewski, D; Stevens, K D; Anastasiu, D; Gao, B

    2011-11-28

    Analysts are overwhelmed with information. They have large archives of historical data, both structured and unstructured, and continuous streams of relevant messages and documents that they need to match to current tasks, digest, and incorporate into their analysis. The purpose of the READ project is to develop technologies to make it easier to catalog, classify, and locate relevant information. We approached this task from multiple angles. First, we tackle the issue of processing large quantities of information in reasonable time. Second, we provide mechanisms that allow users to customize their queries based on latent topics exposed from corpus statistics. Third, we assist users in organizing query results, adding localized expert structure over results. Fourth, we use word sense disambiguation techniques to increase the precision of matching user-generated keyword lists with terms and concepts in the corpus. Fifth, we enhance co-occurrence statistics with latent topic attribution, to aid entity relationship discovery. Finally, we quantitatively analyze the quality of three popular latent modeling techniques to examine under which circumstances each is useful.
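
    The abstract does not name a specific topic model; as one standard choice, per-document latent-topic attribution can be sketched with scikit-learn's LDA:

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        def topic_mixtures(docs, n_topics=20):
            # Bag-of-words counts, then a per-document topic distribution;
            # vocabulary size and topic count are illustrative settings.
            X = CountVectorizer(max_features=5000,
                                stop_words="english").fit_transform(docs)
            lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
            return lda.fit_transform(X)   # each row sums to ~1 over topics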

  7. Image processing of false identity documents for forensic intelligence.

    Science.gov (United States)

    Talbot-Wright, Benjamin; Baechler, Simon; Morelato, Marie; Ribaux, Olivier; Roux, Claude

    2016-06-01

    Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute in a wider policing and security context. Whilst the new avenue is certainly promising, relatively few attempts to incorporate models, methods and techniques into practical projects are reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes referred to here as the Transversal model adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing for the detection of a series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source was evaluated with respect to alternative known information and inferences drawn regarding respective crime systems. 439 documents seized by police and border guard authorities across 10 jurisdictions in Switzerland with known and unknown source level links formed the datasets for this study. Training sets were developed based on both known source level data, and visually supported relationships. Performance was evaluated through the use of intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity and demonstrates its ability to support forensic intelligence efforts. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Canister storage building design basis accident analysis documentation

    International Nuclear Information System (INIS)

    KOPELIC, S.D.

    1999-01-01

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report

  9. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    CROWE, R.D.

    1999-01-01

    This document provides the detailed accident analysis to support ''HNF-3553, Spent Nuclear Fuel Project Final Safety, Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report

  10. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    CROWE, R.D.; PIEPHO, M.G.

    2000-01-01

    This document provided the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report

  11. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CROWE, R.D.; PIEPHO, M.G.

    2000-03-23

    This document provided the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  12. Canister storage building design basis accident analysis documentation

    Energy Technology Data Exchange (ETDEWEB)

    KOPELIC, S.D.

    1999-02-25

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  13. Indian Language Document Analysis and Understanding

    Indian Academy of Sciences (India)

    …authoritative manner outlining all the issues involved. The next two papers deal with complete systems designed for processing printed text documents in a single language. The paper by Chaudhuri, Pal and Mitra, which is also an invited contribution, describes a system for recognition of printed Oriya script. The paper by ...

  14. Cold Vacuum Drying Facility Design Basis Accident Analysis Documentation

    International Nuclear Information System (INIS)

    PIEPHO, M.G.

    1999-01-01

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR

  15. A FRAMEWORK FOR DOCUMENT PRE-PROCESSING IN FORENSIC HANDWRITING ANALYSIS

    NARCIS (Netherlands)

    Franke, K.; Köppen, M.

    2004-01-01

    We propose an open layered framework, which might be adapted to fulfill sophisticated demands in forensic handwriting analysis. Due to the contradicting requirements of processing a huge amount of different document types as well as providing high quality processed images of singular document

  16. Cultural diversity: blind spot in medical curriculum documents, a document analysis.

    Science.gov (United States)

    Paternotte, Emma; Fokkema, Joanne P I; van Loon, Karsten A; van Dulmen, Sandra; Scheele, Fedde

    2014-08-22

    Cultural diversity among patients presents specific challenges to physicians. Therefore, cultural diversity training is needed in medical education. In cases where strategic curriculum documents form the basis of medical training, it is expected that the topic of cultural diversity is included in these documents, especially if these have been recently updated. The aim of this study was to assess the current formal status of cultural diversity training in the Netherlands, which is a multi-ethnic country with recently updated medical curriculum documents. In February and March 2013, a document analysis was performed of strategic curriculum documents for undergraduate and postgraduate medical education in the Netherlands. All text phrases that referred to cultural diversity were extracted from these documents. Subsequently, these phrases were sorted into objectives, training methods or evaluation tools to assess how they contributed to adequate curriculum design. Of a total of 52 documents, 33 documents contained phrases with information about cultural diversity training. Cultural diversity aspects were more prominently described in the curriculum documents for undergraduate education than in those for postgraduate education. The most specific information about cultural diversity was found in the blueprint for undergraduate medical education. In the postgraduate curriculum documents, attention to cultural diversity differed among specialties and was mainly superficial. Cultural diversity is an underrepresented topic in the Dutch documents that form the basis for actual medical training, although the documents have been updated recently. Attention to the topic is thus not guaranteed. This situation does not fit the demand of a multi-ethnic society for doctors with cultural diversity competences. Multi-ethnic countries should be critical of the content of the bases for their medical educational curricula.

  17. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CROWE, R.D.

    1999-09-09

    This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  18. Investigating scientific literacy documents with linguistic network analysis

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry; Dolin, Jens

    2009-01-01

    International discussions of scientific literacy (SL) are extensive and numerous sizeable documents on SL exist. Thus, comparing different conceptions of SL is methodologically challenging. We developed an analytical tool which couples the theory of complex networks with text analysis in order to obtain clear visual images of what is meant by SL expressed in written text. The raw text was first parsed into one-statement sentences. Then, a linguistic type network was created with nodes being the words used in SL texts, and a link between two words established if they were adjacent to each other in the one-statement sentences. Using the program Pajek, we drew a map of the text showing the number of times a concept appeared in the one-statement sentences, and the strength of links between words. Different SL texts were analysed in this way. The network description allowed for different calculations.

  19. Unsupervised Word Spotting in Historical Handwritten Document Images using Document-oriented Local Features.

    Science.gov (United States)

    Zagoris, Konstantinos; Pratikakis, Ioannis; Gatos, Basilis

    2017-05-03

    Word spotting strategies employed in historical handwritten documents face many challenges due to variation in the writing style and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented. It relies upon document-oriented local features that take into account information around representative keypoints, as well as a matching process that incorporates spatial context in a local proximity search without using any training data. Experimental results on four historical handwritten datasets for two different scenarios (segmentation-based and segmentation-free) using standard evaluation measures show the improved performance achieved by the proposed methodology.
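
    The gist of such a keypoint-plus-matching pipeline can be sketched in a few lines. The snippet below is a minimal illustration only: it uses generic ORB keypoints from OpenCV rather than the authors' document-oriented features, and the distance cutoff and score normalization are arbitrary assumptions.

        import cv2

        def spotting_score(query_word, page_region):
            """Crude query-by-example similarity between two grayscale images."""
            orb = cv2.ORB_create()
            kp_q, des_q = orb.detectAndCompute(query_word, None)
            kp_r, des_r = orb.detectAndCompute(page_region, None)
            if des_q is None or des_r is None:
                return 0.0
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_q, des_r)
            # Keep only close descriptor matches (Hamming distance threshold).
            good = [m for m in matches if m.distance < 40]
            return len(good) / max(len(kp_q), 1)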

  20. Document Analysis Algorithms and Machine Translation Research

    Science.gov (United States)

    Noel, Jacques

    1975-01-01

    Information science has in recent years developed an algorithmic procedure for content analysis of abstracts which includes setting up a metalanguage for a particular field of knowledge. Such a procedure may aid in solving problems of automatic text analysis in machine translation. (TL)

  1. Planning, Conducting, and Documenting Data Analysis for Program Improvement

    Science.gov (United States)

    Winer, Abby; Taylor, Cornelia; Derrington, Taletha; Lucas, Anne

    2015-01-01

    This 2015 document was developed to help technical assistance (TA) providers and state staff define and limit the scope of data analysis for program improvement efforts, including the State Systemic Improvement Plan (SSIP); develop a plan for data analysis; document alternative hypotheses and additional analyses as they are generated; and…

  2. Multispectral image restoration of historical documents based on LAAMs and mathematical morphology

    Science.gov (United States)

    Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo

    2014-09-01

    This research introduces an automatic technique designed for the digital restoration of the damaged parts in historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered from the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distributions of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region filling algorithm, based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and one real data set acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
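
    The region-filling step lends itself to a compact sketch. Below is a minimal, generic version of dilation-based hole filling with NumPy/SciPy: damaged pixels are filled shell by shell from their known neighbours. It stands in for the paper's full pipeline (the lattice-memory unmixing and abundance maps are not reproduced), and the 3x3 interpolation neighbourhood is an assumption.

        import numpy as np
        from scipy import ndimage

        def fill_by_dilation(image, damaged):
            """image: 2-D float array; damaged: boolean mask of hole pixels."""
            img = image.copy()
            known = ~damaged
            while not known.all():
                # Pixels bordering the known region form the next shell to fill.
                shell = ndimage.binary_dilation(known) & ~known
                if not shell.any():          # no reachable pixels left
                    break
                for r, c in zip(*np.nonzero(shell)):
                    win = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                    ok = known[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                    img[r, c] = win[ok].mean()   # interpolate from known neighbours
                known = known | shell
            return img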

  3. Spectrum analysis on quality requirements consideration in software design documents.

    Science.gov (United States)

    Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji

    2013-12-01

    Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.
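
    As a rough illustration of what a "spectrum" over quality requirements could look like, the sketch below maps a document to normalised frequencies of a few quality-related keywords and compares the requirements spectrum with the design spectrum. The keyword list and the whitespace tokenisation are invented placeholders, not the paper's actual technique.

        from collections import Counter

        QUALITY_TERMS = ["security", "performance", "usability", "reliability"]

        def spectrum(text):
            counts = Counter(text.lower().split())
            total = sum(counts[t] for t in QUALITY_TERMS) or 1
            return {t: counts[t] / total for t in QUALITY_TERMS}

        def spectrum_gap(requirements_text, design_text):
            s_req = spectrum(requirements_text)
            s_des = spectrum(design_text)
            # A large positive gap suggests a quality concern lost in design.
            return {t: s_req[t] - s_des[t] for t in QUALITY_TERMS}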

  4. Developing Methods of praxeology to Perform Document-analysis

    DEFF Research Database (Denmark)

    Frederiksen, Jesper

    2016-01-01

    This paper provides a contribution to the methodological development on praxeologic document analysis of neoliberal welfare state policies. Different institutions related to the Danish healthcare area transform international health policies, and these institutions produce a range of strategies. ...

  5. Efficient document-image super-resolution using convolutional ...

    Indian Academy of Sciences (India)

    Ram Krishna Pandey

    2018-03-06

    It is observed that our method is faster than natural-image-based SR [10, 17]. The input training image patches are of size 16 × 16. The training dataset is randomly shuffled to ensure that the model does not unnecessarily ...

  6. The role and design of screen images in software documentation.

    NARCIS (Netherlands)

    van der Meij, Hans

    2000-01-01

    Software documentation for the novice user typically must try to achieve at least three goals: to support basic knowledge and skills development; to prevent or support the handling of mistakes, and to support the joint handling of manual, input device and screen. This paper concentrates on the

  7. A Conceptual Model for Multidimensional Analysis of Documents

    Science.gov (United States)

    Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles

    Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of Internet, and the development of semi-structured data exchange format (such as XML), it is possible to consider entire fragments of data such as documents as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.

  8. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
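
    For readers without the Photoshop/FoveaPro toolchain, the core grain-statistics step can be approximated with open-source tools. The sketch below uses scikit-image under the simplifying assumptions that grains appear brighter than their boundaries and that Otsu thresholding separates them; it is not the handbook's calibrated procedure.

        import numpy as np
        from skimage import filters, measure, morphology

        def grain_stats(gray):
            """gray: 2-D grayscale array of a thin-section image."""
            grains = gray > filters.threshold_otsu(gray)
            grains = morphology.remove_small_objects(grains, min_size=64)
            labels = measure.label(grains)
            props = measure.regionprops(labels)
            areas = np.array([p.area for p in props])
            return {
                "n_grains": len(props),
                "mean_area_px": float(areas.mean()) if len(areas) else 0.0,
                "mean_eccentricity": float(np.mean([p.eccentricity for p in props]))
                                     if props else 0.0,
            }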

  9. Document image recognition and retrieval: where are we?

    Science.gov (United States)

    Garris, Michael D.

    1999-01-01

    This paper discusses survey data collected as a result of planning a project to evaluate document recognition and information retrieval technologies. In the process of establishing the project, a Request for Comment (RFC) was widely distributed throughout the document recognition and information retrieval research and development (R&D) communities, and based on the responses, the project was discontinued. The purpose of this paper is to present `real' data collected from the R&D communities in regards to a `real' project, so that we may all form our own conclusions about where we are, where we are heading, and how we are going to get there. Background on the project is provided and responses to the RFC are summarized.

  10. Technical requirements document for the waste flow analysis

    International Nuclear Information System (INIS)

    Shropshire, D.E.

    1996-05-01

    The purpose of this Technical Requirements Document is to define the top level customer requirements for the Waste Flow Analysis task. These requirements, once agreed upon with DOE, will be used to flow down subsequent development requirements to the model specifications. This document is intended to be a ''living document'' which will be modified over the course of the execution of this work element. Initial concurrence with the technical functional requirements from Environmental Management (EM)-50 is needed before the work plan can be developed.

  11. SNF fuel retrieval sub project safety analysis document

    Energy Technology Data Exchange (ETDEWEB)

    BERGMANN, D.W.

    1999-02-24

    This safety analysis is for the SNF Fuel Retrieval (FRS) Sub Project. The FRS equipment will be added to K West and K East Basins to facilitate retrieval, cleaning and repackaging the spent nuclear fuel into Multi-Canister Overpack baskets. The document includes a hazard evaluation, identifies bounding accidents, documents analyses of the accidents and establishes safety class or safety significant equipment to mitigate accidents as needed.

  12. Developing Methods of praxeology to Perform Document-analysis

    DEFF Research Database (Denmark)

    Frederiksen, Jesper

    2016-01-01

    This paper provides a contribution to the methodological development on praxeologic document analysis of neoliberal welfare state policies. Different institutions related to the Danish healthcare area transform international health policies, and these institutions produce a range of strategies. The affiliations of the different institutional and professional logics affect the strategies. Based on three empirical studies from welfare state documents on Inter-professional collaboration, Coherence in healthcare and Patient-safety by incident report, a summative description of the methodological work is given.

  13. Delve: A Data Set Retrieval and Document Analysis System

    KAUST Repository

    Akujuobi, Uchenna Thankgod

    2017-12-29

    Academic search engines (e.g., Google scholar or Microsoft academic) provide a medium for retrieving various information on scholarly documents. However, most of these popular scholarly search engines overlook the area of data set retrieval, which should provide information on relevant data sets used for academic research. Due to the increasing volume of publications, it has become a challenging task to locate suitable data sets on a particular research area for benchmarking or evaluations. We propose Delve, a web-based system for data set retrieval and document analysis. This system is different from other scholarly search engines as it provides a medium for both data set retrieval and real time visual exploration and analysis of data sets and documents.

  14. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the flocking-based algorithm does not require initial partitional seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.
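
    A toy version of the flocking idea can be written directly from the description above: each document becomes a "boid" on a 2-D plane, attracted to nearby boids carrying similar feature vectors and repelled by dissimilar ones. All parameters (radius, step size, similarity cutoff) are invented for illustration and are not the authors' tuned values.

        import numpy as np

        def flock_cluster(docs, steps=200, radius=0.2, lr=0.05, seed=0):
            """docs: (n, d) array of document feature vectors, e.g. tf-idf."""
            rng = np.random.default_rng(seed)
            n = len(docs)
            pos = rng.random((n, 2))                       # random initial layout
            unit = docs / (np.linalg.norm(docs, axis=1, keepdims=True) + 1e-12)
            sim = unit @ unit.T                            # cosine similarity
            for _ in range(steps):
                for i in range(n):
                    delta = pos - pos[i]
                    dist = np.linalg.norm(delta, axis=1)
                    near = (dist > 0) & (dist < radius)
                    if not near.any():
                        continue
                    # Similar neighbours attract, dissimilar neighbours repel.
                    w = np.where(sim[i, near] > 0.5, 1.0, -1.0)
                    pos[i] += lr * (w[:, None] * delta[near]).mean(axis=0)
            return pos    # points that end up close together form a cluster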

  15. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared in order to support the theoretical understanding.

  16. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides. ... The review further discusses reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques.

  17. 32 CFR 775.9 - Documentation and analysis.

    Science.gov (United States)

    2010-07-01

    ... analyses is: (i) From a broad program, plan, or policy environmental impact statement (not necessarily site... tier itself may have a significant impact on the quality of the human environment or when an impact... FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.9 Documentation and analysis. (a...

  18. Cold Vacuum Drying facility design basis accident analysis documentation

    Energy Technology Data Exchange (ETDEWEB)

    CROWE, R.D.

    2000-08-08

    This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report (FSAR), ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR. The calculations in this document address the design basis accidents (DBAs) selected for analysis in HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report'', Annex B, ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' The objective is to determine the quantity of radioactive particulate available for release at any point during processing at the Cold Vacuum Drying Facility (CVDF) and to use that quantity to determine the amount of radioactive material released during the DBAs. The radioactive material released is used to determine dose consequences to receptors at four locations, and the dose consequences are compared with the appropriate evaluation guidelines and release limits to ascertain the need for preventive and mitigative controls.

  19. [Medical museology the exhibition: The history of Rome medical faculty through images and documents].

    Science.gov (United States)

    Serarcangeli, Carla

    2004-01-01

    The Museum and Library of History of Medicine celebrated the 700th anniversary of the foundation of the University of Rome "La Sapienza" with an exhibition of images and documents recalling the history of the medical faculty. Dissecting tools and surgical instruments testify to the long history of anatomical and surgical studies and to the great worth of the teachers at Rome University. Documents, archival papers, books and pictures document the historical inheritance of the Medical School in Rome.

  20. Investigating scientific literacy documents with linguistic network analysis

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry; Dolin, Jens

    2009-01-01

    International discussions of scientific literacy (SL) are extensive and numerous sizeable documents on SL exist. Thus, comparing different conceptions of SL is methodologically challenging. We developed an analytical tool which couples the theory of complex networks with text analysis in order to perform different calculations on the network data. For example, a minimal description length approach partitioned the network into groups of words, which were then seen to represent different visions appearing in the discussion of SL. In short, the networks allow for quantitative analyses as well as a quick visual overview of SL documents.
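
    The construction is straightforward to reproduce. The sketch below builds such a word-adjacency network with networkx in place of Pajek (a substitution made for brevity), with edge weights counting adjacent co-occurrences within each one-statement sentence.

        import networkx as nx

        def adjacency_network(sentences):
            """sentences: iterable of one-statement sentences (strings)."""
            g = nx.Graph()
            for sentence in sentences:
                words = sentence.lower().split()
                for a, b in zip(words, words[1:]):       # adjacent word pairs
                    if g.has_edge(a, b):
                        g[a][b]["weight"] += 1
                    else:
                        g.add_edge(a, b, weight=1)
            return g

        g = adjacency_network(["scientific literacy matters",
                               "literacy matters for all citizens"])
        print(g["literacy"]["matters"]["weight"])        # -> 2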

  1. Documentation and analysis for packaging limited quantity ice chests

    International Nuclear Information System (INIS)

    Nguyen, P.M.

    1995-01-01

    The purpose of this Documentation and Analysis for Packaging (DAP) is to document that ice chests meet the intent of the International Air Transport Association (IATA) and the U.S. Department of Transportation (DOT) Code of Federal Regulations as strong, tight containers for the packaging of limited quantities for transport. This DAP also outlines the packaging method used to protect the sample bottles from breakage. Because the ice chests meet the DOT requirements, they can be used to ship LTD QTY on the Hanford Site

  2. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    In contrast to classical Fourier analysis, time–frequency analysis is concerned with localized Fourier transforms. Gabor analysis is an important branch of time–frequency analysis. Although significantly different, it shares with the wavelet transform methods the ability to describe the smoothness ... It characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data ...
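
    In image processing this localized transform is usually applied as a filter bank. The snippet below is a minimal illustration using scikit-image's gabor filter; the frequencies and orientations sampled are arbitrary choices, not values from the chapter.

        import numpy as np
        from skimage.filters import gabor

        def gabor_responses(image, frequencies=(0.1, 0.2, 0.4)):
            """Magnitude responses over a small frequency/orientation grid,
            a crude sampling of the image's phase-space content."""
            responses = {}
            for f in frequencies:
                for theta in (0.0, np.pi / 4, np.pi / 2):
                    real, imag = gabor(image, frequency=f, theta=theta)
                    responses[(f, theta)] = np.hypot(real, imag)
            return responses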

  3. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are addressed. ... A case study is included to differentiate between several types of plastics by using near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.
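
    The plastics case study boils down to unfolding the hyperspectral cube into a pixel-by-wavelength matrix and running PLS-DA. A minimal sketch with scikit-learn is shown below; running PLSRegression on one-hot class labels is one common way to realise PLS-DA, and the data shapes and number of latent variables are assumptions, not the tutorial's settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def plsda(cube, label_map, n_components=5):
            """cube: (rows, cols, bands) reflectance array;
            label_map: (rows, cols) integer class image (0..k-1)."""
            X = cube.reshape(-1, cube.shape[-1])       # pixels x wavelengths
            y = label_map.ravel()
            Y = np.eye(int(y.max()) + 1)[y]            # one-hot class matrix
            model = PLSRegression(n_components=n_components).fit(X, Y)
            pred = model.predict(X).argmax(axis=1)     # class with largest response
            return model, pred.reshape(label_map.shape)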

  4. LOCAL BINARIZATION FOR DOCUMENT IMAGES CAPTURED BY CAMERAS WITH DECISION TREE

    Directory of Open Access Journals (Sweden)

    Naser Jawas

    2012-07-01

    Character recognition in a document image captured by a digital camera requires a good binary image as the input for separating the text from the background. Global binarization methods do not provide such good separation because of the problem of uneven lighting in images captured by cameras. Local binarization methods overcome the problem but require a way to partition the large image into local windows properly. In this paper, we propose a local binarization method with dynamic image partitioning using an integral image and a decision tree for the binarization decision. The integral image is used to estimate the number of text lines in the document image. The number of lines is then used to divide the document into local windows. The decision tree makes a threshold decision for every local window. The results show that the proposed method can separate the text from the background better than global thresholding, with the best OCR result on the binarized image being 99.4%.
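
    The integral image makes local mean thresholds cheap to compute, which is the backbone of this kind of method. The sketch below implements a generic Bradley/Roth-style local mean binarization; the paper's line-count-driven window partitioning and decision-tree threshold selection are not reproduced, and the window size and ratio are illustrative defaults.

        import numpy as np

        def local_binarize(gray, window=25, ratio=0.85):
            """gray: 2-D grayscale array; returns a boolean text mask."""
            g = gray.astype(np.float64)
            # Integral image padded with a zero border for easy window sums.
            ii = np.pad(g.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
            h, w = g.shape
            r = window // 2
            text = np.zeros((h, w), dtype=bool)
            for y in range(h):
                y0, y1 = max(y - r, 0), min(y + r + 1, h)
                for x in range(w):
                    x0, x1 = max(x - r, 0), min(x + r + 1, w)
                    total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                    mean = total / ((y1 - y0) * (x1 - x0))
                    text[y, x] = g[y, x] < mean * ratio   # darker than local mean
            return text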

  5. Advances in oriental document analysis and recognition techniques

    CERN Document Server

    Lee, Seong-Whan

    1999-01-01

    In recent years, rapid progress has been made in computer processing of oriental languages, and the research developments in this area have resulted in tremendous changes in handwriting processing, printed oriental character recognition, document analysis and recognition, automatic input methodologies for oriental languages, etc. Advances in computer processing of oriental languages can also be seen in multimedia computing and the World Wide Web. Many of the results in those domains are presented in this book.

  6. A document image model and estimation algorithm for optimized JPEG decompression.

    Science.gov (United States)

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya; Fan, Zhigang

    2009-11-01

    The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG-encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding as well as to three other decoding schemes demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.
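
    As a very rough illustration of the first stage only, the sketch below classifies 8x8 JPEG blocks into text-like, picture, and background regions from their local contrast, so that different reconstruction models could then be applied per region. The variance thresholds are invented, and the MAP estimation itself is well beyond this snippet.

        import numpy as np

        def classify_blocks(gray, text_std=40.0, flat_std=5.0):
            """gray: 2-D grayscale array with dimensions divisible by 8."""
            h, w = gray.shape
            labels = np.empty((h // 8, w // 8), dtype="<U10")
            for by in range(h // 8):
                for bx in range(w // 8):
                    block = gray[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(float)
                    s = block.std()
                    if s > text_std:
                        labels[by, bx] = "text"        # strong edges: text/graphics
                    elif s > flat_std:
                        labels[by, bx] = "picture"
                    else:
                        labels[by, bx] = "background"
            return labels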

  7. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    Science.gov (United States)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since large-size documents are usually folded for preservation, creases will occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on both sides of a crease usually varies a lot. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, a candidate crease path can be obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in large-size document images.
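
    The final path search can be expressed concisely by treating the filtered shading result as a cost map. The sketch below assumes a crease-likelihood image (high where a vertical crease is suspected) has already been produced by the shading and filtering steps, and uses scikit-image's route_through_array for the minimum-cost, Dijkstra-style top-to-bottom path; the seed-column heuristic is an assumption.

        import numpy as np
        from skimage.graph import route_through_array

        def crease_path(likelihood):
            """likelihood: 2-D array in [0, 1], high along suspected creases."""
            cost = 1.0 - likelihood + 1e-6          # cheap to walk along the crease
            h, _ = cost.shape
            col = int(np.argmax(likelihood.sum(axis=0)))   # strongest column as seed
            path, total = route_through_array(cost, (0, col), (h - 1, col),
                                              fully_connected=True)
            return np.array(path), total            # (row, col) pixels of the crease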

  8. The use of fingerprints available on the web in false identity documents: Analysis from a forensic intelligence perspective.

    Science.gov (United States)

    Girelli, Carlos Magno Alves

    2016-05-01

    Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of a same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared between themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends about this kind of forgery in Brazil. Some image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal and tonal reversal. Discussion about lateral reversal detection is presented in this article, as well as a suggestion to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in three levels of detail by the ACE-V methodology, some visual features of the fingerprint images can be helpful to identify sources of forgeries and modus operandi, such as: limits and image contours, fails in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents.

  9. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  10. Document co-citation analysis to enhance transdisciplinary research.

    Science.gov (United States)

    Trujillo, Caleb M; Long, Tammy M

    2018-01-01

    Specialized and emerging fields of research infrequently cross disciplinary boundaries and would benefit from frameworks, methods, and materials informed by other fields. Document co-citation analysis, a method developed by bibliometric research, is demonstrated as a way to help identify key literature for cross-disciplinary ideas. To illustrate the method in a useful context, we mapped peer-recognized scholarship related to systems thinking. In addition, three procedures for validation of co-citation networks are proposed and implemented. This method may be useful for strategically selecting information that can build consilience about ideas and constructs that are relevant across a range of disciplines.
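
    At its core, document co-citation analysis counts how often two references appear together in the reference lists of citing papers. A minimal sketch (with made-up document IDs) is shown below; real studies add normalisation, thresholds, and clustering on top of these raw counts.

        from collections import Counter
        from itertools import combinations

        def cocitation_counts(reference_lists):
            """reference_lists: iterable of per-paper lists of reference IDs."""
            counts = Counter()
            for refs in reference_lists:
                for a, b in combinations(sorted(set(refs)), 2):
                    counts[(a, b)] += 1          # one co-citation per citing paper
            return counts

        papers = [["doc1", "doc2", "doc3"], ["doc1", "doc2"], ["doc2", "doc3"]]
        print(cocitation_counts(papers)[("doc1", "doc2")])   # -> 2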

  11. Document co-citation analysis to enhance transdisciplinary research

    Science.gov (United States)

    Trujillo, Caleb M.; Long, Tammy M.

    2018-01-01

    Specialized and emerging fields of research infrequently cross disciplinary boundaries and would benefit from frameworks, methods, and materials informed by other fields. Document co-citation analysis, a method developed by bibliometric research, is demonstrated as a way to help identify key literature for cross-disciplinary ideas. To illustrate the method in a useful context, we mapped peer-recognized scholarship related to systems thinking. In addition, three procedures for validation of co-citation networks are proposed and implemented. This method may be useful for strategically selecting information that can build consilience about ideas and constructs that are relevant across a range of disciplines. PMID:29308433

  12. Every document and picture tells a story: using internal corporate document reviews, semiotics, and content analysis to assess tobacco advertising.

    Science.gov (United States)

    Anderson, S J; Dewhirst, T; Ling, P M

    2006-06-01

    In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches.

  13. Planning Document for an NBSR Conversion Safety Analysis Report

    Energy Technology Data Exchange (ETDEWEB)

    Diamond D. J.; Baek J.; Hanson, A.L.; Cheng, L-Y.; Brown, N.; Cuadra, A.

    2013-09-25

    The NIST Center for Neutron Research (NCNR) is a reactor-laboratory complex providing the National Institute of Standards and Technology (NIST) and the nation with a world-class facility for the performance of neutron-based research. The heart of this facility is the National Bureau of Standards Reactor (NBSR). The NBSR is a heavy water moderated and cooled reactor operating at 20 MW. It is fueled with high-enriched uranium (HEU) fuel elements. A Global Threat Reduction Initiative (GTRI) program is underway to convert the reactor to low-enriched uranium (LEU) fuel. This program includes the qualification of the proposed fuel, uranium and molybdenum alloy foil clad in an aluminum alloy, and the development of the fabrication techniques. This report is a planning document for the conversion Safety Analysis Report (SAR) that would be submitted to, and approved by, the Nuclear Regulatory Commission (NRC) before the reactor could be converted. This report follows the recommended format and content from the NRC codified in NUREG-1537, “Guidelines for Preparing and Reviewing Applications for the Licensing of Non-power Reactors,” Chapter 18, “Highly Enriched to Low-Enriched Uranium Conversions.” The emphasis herein is on the SAR chapters that require significant changes as a result of conversion, primarily Chapter 4, Reactor Description, and Chapter 13, Safety Analysis. The document provides information on the proposed design for the LEU fuel elements and identifies what information is still missing. This document is intended to assist ongoing fuel development efforts, and to provide a platform for the development of the final conversion SAR. This report contributes directly to the reactor conversion pillar of the GTRI program, but also acts as a boundary condition for the fuel development and fuel fabrication pillars.

  14. Statistical Image Analysis of Longitudinal RAVENS Images

    Directory of Open Access Journals (Sweden)

    Seonjoo eLee

    2015-10-01

    Regional analysis of volumes examined in normalized space (RAVENS) images are transformations used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression.
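
    The three-component decomposition can be previewed with a much simpler per-voxel linear fit (not LFPCA itself): each voxel's trajectory across visits is split into a baseline intercept, a longitudinal slope, and visit-specific residuals. This sketch ignores the functional PCA machinery and any population-level modelling.

        import numpy as np

        def voxelwise_trends(images, times):
            """images: (n_visits, ...) array of registered images for one
            subject; times: (n_visits,) visit times. Returns per-voxel
            intercept (baseline) and slope (longitudinal change) maps."""
            n = images.shape[0]
            X = np.column_stack([np.ones(n), times])       # design matrix [1, t]
            Y = images.reshape(n, -1)                      # visits x voxels
            coef, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares fit
            intercept = coef[0].reshape(images.shape[1:])
            slope = coef[1].reshape(images.shape[1:])
            residual = (Y - X @ coef).reshape(images.shape)  # visit-specific deviation
            return intercept, slope, residual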

  15. A framework for biomedical figure segmentation towards image-based document retrieval.

    Science.gov (United States)

    Lopez, Luis D; Yu, Jingyi; Arighi, Cecilia; Tudor, Catalina O; Torii, Manabu; Huang, Hongzhan; Vijay-Shanker, K; Wu, Cathy

    2013-01-01

    The figures included in many of the biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information that is extracted from figures in classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (i.e., barcharts and linecharts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequences and protein sequences images) and they differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking into consideration the document layout allows us to correctly extract figures from the PDF document and associate their corresponding caption. We combine pixel

  16. Document boundary determination using structural and lexical analysis

    Science.gov (United States)

    Taghva, Kazem; Cartright, Marc-Allen

    2009-01-01

    The document boundary determination problem is the process of identifying individual documents in a stack of papers. In this paper, we report on a classification system for automation of this process. The system employs features based on document structure and lexical content. We also report on experimental results to support the effectiveness of this system.
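
    One way to make "structural and lexical features" concrete is sketched below: each page is mapped to a small feature vector that a standard classifier can then label as "first page of a new document" or "continuation". Every feature here is invented for illustration; the paper's actual feature set is richer.

        def page_features(page_text):
            """Map one page of OCR text to a tiny structural/lexical vector."""
            lines = page_text.splitlines()
            first = lines[0].strip() if lines else ""
            lowered = page_text.lower()
            return [
                1.0 if first.isupper() else 0.0,        # heading-like first line
                1.0 if "dear" in lowered else 0.0,      # letter salutation
                1.0 if "sincerely" in lowered else 0.0, # letter closing
                len(lines) / 60.0,                      # page fill ratio
            ]
        # A standard classifier (e.g. scikit-learn's DecisionTreeClassifier)
        # can then be trained on labelled pages using these vectors.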

  17. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
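
    The reranking stage itself is simple once per-image visual scores exist. The sketch below mixes the text score with the best visual score among a page's images; visual_score is a hypothetical stand-in for whatever image-query relevance model is available, and the mixing weight is an arbitrary assumption.

        def rerank(candidates, query, visual_score, alpha=0.7):
            """candidates: iterable of (page_id, text_score, images) tuples;
            visual_score(query, image) -> relevance in [0, 1]."""
            rescored = []
            for page_id, text_score, images in candidates:
                best = max((visual_score(query, img) for img in images), default=0.0)
                rescored.append((alpha * text_score + (1 - alpha) * best, page_id))
            rescored.sort(reverse=True)
            return [page_id for _, page_id in rescored]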

  18. Simplifying documentation while approaching site closure: integrated health and safety plans as documented safety analysis

    International Nuclear Information System (INIS)

    Brown, Tulanda

    2003-01-01

    At the Fernald Closure Project (FCP) near Cincinnati, Ohio, environmental restoration activities are supported by Documented Safety Analyses (DSAs) that combine the required project-specific Health and Safety Plans, Safety Basis Requirements (SBRs), and Process Requirements (PRs) into single Integrated Health and Safety Plans (I-HASPs). By isolating any remediation activities that deal with Enriched Restricted Materials, the SBRs and PRs assure that the hazard categories of former nuclear facilities undergoing remediation remain less than Nuclear. These integrated DSAs employ Integrated Safety Management methodology in support of simplified restoration and remediation activities that, so far, have resulted in the decontamination and demolition (D and D) of over 150 structures, including six major nuclear production plants. This paper presents the FCP method for maintaining safety basis documentation, using the D and D I-HASP as an example.

  19. The Institute of Public Administration's Document Center: From Paper to Electronic Records--A Full Image Government Documents Database.

    Science.gov (United States)

    Al-Zahrani, Rashed S.

    Since its establishment in 1960, the Institute of Public Administration (IPA) in Riyadh, Saudi Arabia has had responsibility for documenting Saudi administrative literature, the official publications of Saudi Arabia, and the literature of regional and international organizations through establishment of the Document Center in 1961. This paper…

  20. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis, and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis techniques they make possible. A number of both rigid and non-rigid medical image alignment algorithms, of equivalent and merely consistent anatomical structures respectively, are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented, demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments, with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why.

  1. "Cyt/Nuc," a Customizable and Documenting ImageJ Macro for Evaluation of Protein Distributions Between Cytosol and Nucleus.

    Science.gov (United States)

    Grune, Tilman; Kehm, Richard; Höhn, Annika; Jung, Tobias

    2018-01-10

    Large amounts of data from multi-channel, high resolution, fluorescence microscopic images require tools that provide easy, customizable, and reproducible high-throughput analysis. The freeware "ImageJ" has become one of the standard tools for scientific image analysis. Since ImageJ offers recording of "macros," even a complex multi-step process can be easily applied fully automated to large numbers of images, saving time and reducing subjective human evaluation. In this work, we present "Cyt/Nuc," an ImageJ macro able to recognize and to compare the nuclear and cytosolic areas of tissue samples, in order to investigate distributions of immunostained proteins between both compartments, while documenting the whole process of evaluation and pattern recognition in detail. As a practical example, the redistribution of the 20S proteasome, the main intracellular protease in mammalian cells, is investigated in NZO-mouse liver after feeding the animals different diets. A significant shift in proteasomal distribution between cytosol and nucleus in response to metabolic stress was revealed using "Cyt/Nuc" via automated quantification of thousands of nuclei within minutes. "Cyt/Nuc" is easy to use and highly customizable, matches the precision of careful manual evaluation and bears the potential for quick detection of any shift in intracellular protein distribution.
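
    The macro's central measurement translates readily outside ImageJ. The scikit-image sketch below (a Python stand-in, not the authors' macro) segments nuclei on a DNA-stain channel, treats the remaining stained tissue as cytosol, and compares mean protein-channel intensity between the two compartments; the thresholding scheme and size filter are illustrative assumptions.

        from skimage import filters, morphology

        def nuc_cyt_ratio(dna, protein):
            """dna, protein: 2-D arrays of the DNA and protein channels."""
            nuclei = dna > filters.threshold_otsu(dna)
            nuclei = morphology.remove_small_objects(nuclei, min_size=50)
            tissue = protein > filters.threshold_otsu(protein)
            cytosol = tissue & ~nuclei
            nuc_mean = protein[nuclei].mean()
            cyt_mean = protein[cytosol].mean()
            return nuc_mean / cyt_mean    # > 1 means nuclear enrichment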

  2. Critical discourse analysis of social justice in nursing's foundational documents.

    Science.gov (United States)

    Valderama-Wallace, Claire P

    2017-07-01

    Social inequities threaten the health of the global population. A superficial acknowledgement of social justice by nursing's foundational documents may limit the degree to which nurses view injustice as relevant to nursing practice and education. The purpose was to examine conceptualizations of social justice and connections to broader contexts in the most recent editions. Critical discourse analysis examines and uncovers dynamics related to power, language, and inequality within the American Nurses Association's Code of Ethics, Scope and Standards of Practice, and Social Policy Statement. This analysis found ongoing inconsistencies in conceptualizations of social justice. Although the Code of Ethics integrates concepts related to social justice far more than the other two, tension between professionalism and social change emerges. The discourse of professionalism renders interrelated cultural, social, economic, historical, and political contexts nearly invisible. Greater consistency would provide a clearer path for nurses to mobilize and engage in the courageous work necessary to address social injustice. These findings also call for an examination of how nurses can critique and use the power and privilege of professionalism to amplify the connection between social institutions and health equity in nursing education, practice, and policy development.

  3. Documented Safety Analysis for the B695 Segment

    International Nuclear Information System (INIS)

    Laycak, D.

    2008-01-01

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., 90Sr, 137Cs, or 3H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping

  4. Documented Safety Analysis for the B695 Segment

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., 90Sr, 137Cs, or 3H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building

  5. Integrated computer-aided forensic case analysis, presentation, and documentation based on multimodal 3D data.

    Science.gov (United States)

    Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin

    2018-03-23

    Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) are increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory. Their interpretation and understanding requires radiological knowledge. Findings in tomography data must not only be revealed, but should also be jointly studied with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts to reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings like jurists and laymen. The capability of the presented approach in the context of case analysis, its potential to speed up legal procedures and to ultimately enhance legal certainty is demonstrated by introducing a number of

  6. ANALYSIS OF FUNDUS IMAGES

    DEFF Research Database (Denmark)

    2000-01-01

    A method for classifying objects in an image as respective arterial or venous vessels, comprising: identifying pixels of the said modified image which are located on a line object, determining which of the said image points is associated with a crossing point or a bifurcation of the respective line object, wherein a crossing point is represented by an image point which is the intersection of four line segments, performing a matching operation on pairs of said line segments for each said crossing point, to determine the path of blood vessels in the image, thereby classifying the line objects in the original image into two arbitrary sets, and thereafter designating one of the sets as representing venous structure, the other of the sets as representing arterial structure, depending on one or more of the following criteria: (a) complexity of structure; (b) average density; (c) average width; (d) tortuosity

  7. Security analysis for biometric data in ID documents

    Science.gov (United States)

    Schimke, Sascha; Kiltz, Stefan; Vielhauer, Claus; Kalker, Ton

    2005-03-01

    In this paper we analyze chances and challenges with respect to the security of using biometrics in ID documents. We identify goals for ID documents, set by national and international authorities, and discuss the degree of security which is obtainable with the inclusion of biometrics in documents like passports. Starting from classical techniques for manual authentication of ID card holders, we expand our view towards automatic methods based on biometrics. We do so by reviewing different human biometric attributes by modality, as well as by discussing possible techniques for storing and handling the particular biometric data on the document. Further, we explore possible vulnerabilities of potential biometric passport systems. Based on the findings of that discussion we will expand upon two exemplary approaches for including digital biometric data in the context of ID documents and present potential risks and attack scenarios along with technical aspects such as capacity and robustness.

  8. Documented Safety Analysis for the Waste Storage Facilities March 2010

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D T

    2010-03-05

    This Documented Safety Analysis (DSA) for the Waste Storage Facilities was developed in accordance with 10 CFR 830, Subpart B, 'Safety Basis Requirements,' and utilizes the methodology outlined in DOE-STD-3009-94, Change Notice 3. The Waste Storage Facilities consist of Area 625 (A625) and the Decontamination and Waste Treatment Facility (DWTF) Storage Area portion of the DWTF complex. These two areas are combined into a single DSA, as their functions as storage for radioactive and hazardous waste are essentially identical. The B695 Segment of DWTF is addressed under a separate DSA. This DSA provides a description of the Waste Storage Facilities and the operations conducted therein; identification of hazards; analyses of the hazards, including inventories, bounding releases, consequences, and conclusions; and programmatic elements that describe the current capacity for safe operations. The mission of the Waste Storage Facilities is to safely handle, store, and treat hazardous waste, transuranic (TRU) waste, low-level waste (LLW), mixed waste, combined waste, nonhazardous industrial waste, and conditionally accepted waste generated at LLNL (as well as small amounts from other DOE facilities).

  9. Documented Safety Analysis for the Waste Storage Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-06-16

    This documented safety analysis (DSA) for the Waste Storage Facilities was developed in accordance with 10 CFR 830, Subpart B, 'Safety Basis Requirements', and utilizes the methodology outlined in DOE-STD-3009-94, Change Notice 3. The Waste Storage Facilities consist of Area 625 (A625) and the Decontamination and Waste Treatment Facility (DWTF) Storage Area portion of the DWTF complex. These two areas are combined into a single DSA, as their functions as storage for radioactive and hazardous waste are essentially identical. The B695 Segment of DWTF is addressed under a separate DSA. This DSA provides a description of the Waste Storage Facilities and the operations conducted therein; identification of hazards; analyses of the hazards, including inventories, bounding releases, consequences, and conclusions; and programmatic elements that describe the current capacity for safe operations. The mission of the Waste Storage Facilities is to safely handle, store, and treat hazardous waste, transuranic (TRU) waste, low-level waste (LLW), mixed waste, combined waste, nonhazardous industrial waste, and conditionally accepted waste generated at LLNL (as well as small amounts from other DOE facilities).

  10. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
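
    As a rough illustration of the geometric-correction step described above (a sketch, not the authors' implementation), the following Python/OpenCV snippet rectifies a document photo once the four page corners are known, e.g. from vanishing-point or contour detection. The file name, corner coordinates, and target page size are all hypothetical.

```python
# Hypothetical sketch: remove perspective distortion from a document photo
# given the four detected page corners (coordinates here are illustrative).
import cv2
import numpy as np

img = cv2.imread("document.jpg")             # assumed input photo
corners = np.float32([[120, 80],             # top-left
                      [980, 60],             # top-right
                      [1010, 1400],          # bottom-right
                      [90, 1420]])           # bottom-left
w, h = 850, 1100                             # target page size in pixels
target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Homography mapping the distorted page quadrilateral onto a rectangle
H = cv2.getPerspectiveTransform(corners, target)
rectified = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite("document_rectified.jpg", rectified)
```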

  11. Molecular imaging of banknote and questioned document using solvent-free gold nanoparticle-assisted laser desorption/ionization imaging mass spectrometry.

    Science.gov (United States)

    Tang, Ho-Wai; Wong, Melody Yee-Man; Chan, Sharon Lai-Fung; Che, Chi-Ming; Ng, Kwan-Ming

    2011-01-01

    Direct chemical analysis and molecular imaging of questioned documents in a non- or minimally destructive manner is important in forensic science. Here, we demonstrate that solvent-free gold-nanoparticle-assisted laser desorption/ionization mass spectrometry is a sensitive and minimally destructive method for direct detection and imaging of ink and visible and/or fluorescent dyes printed on banknotes or written on questioned documents. Argon ion sputtering of a gold foil allows homogeneous coating of a thin layer of gold nanoparticles on banknotes and checks in a dry state without delocalizing spatial distributions of the analytes. Upon N₂ laser irradiation of the gold nanoparticle-coated banknotes or checks, abundant ions are desorbed and detected. Recording the spatial distributions of the ions can reveal the molecular images of visible and fluorescent ink printed on banknotes and determine the printing order of different inks, which may be useful in differentiating real banknotes from fakes. The method can also be applied to identify forged parts in questioned documents, such as number/writing alteration on a check, by tracing different writing patterns that come from different pens.

  12. Modern methods of documentation for conservation - digital mapping in metigo® MAP, Software for documentation, mapping and quantity survey and analysis

    Science.gov (United States)

    Siedler, Gunnar; Vetter, Sebastian

    2015-04-01

    Several years of experience in heritage documentation have provided the background for developing methods of cartography and digital evaluation. The outcome is a 2D mapping software with integrated image rectification, developed over a period of more than 10 years, which became the state-of-the-art software for conservation and restoration projects, first in Germany and now elsewhere. If no mapping basis (image plan or CAD drawing) exists, users can create their own image plans using different types of rectification functions. Based on true-to-scale mappings, quantity surveys of areas and lines can be calculated automatically. Digital maps are used for the documentation and analysis of materials and damages, for planning of required action, and for calculation of costs. With the help of the project hierarchy, even large mapping projects with many sub-projects can be managed. The results of quantification can be exported to Excel spreadsheets for further processing. The combination of image processing and CAD functionality makes operation of the program user-friendly, both in the office and on site. metigo MAP was developed in close cooperation with conservators and restorers. Based on simple equipment consisting of a digital camera, a laser instrument for measuring distances or a total station, and a standard notebook, the mapping software is used in many restoration companies.

  13. USE OF IMAGE BASED MODELLING FOR DOCUMENTATION OF INTRICATELY SHAPED OBJECTS

    Directory of Open Access Journals (Sweden)

    M. Marčiš

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures that are complicated to measure. Examples include spiral staircases, timber roof trusses, historical furniture, and folk costumes, where the shape of the object, its dimensions, and the crowded environment make it nearly impossible to use traditional surveying or terrestrial laser scanning effectively. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements, and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the varied uses of image-based modelling in specific interior spaces and for specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  14. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are presented, and guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and shown how to adapt those strategies to their own case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.

  15. Hyperspectral image analysis. A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Amigo, José Manuel, E-mail: jmar@food.ku.dk [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Babamoradi, Hamid [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Elcoroaristizabal, Saioa [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Chemical and Environmental Engineering Department, School of Engineering, University of the Basque Country, Alameda de Urquijo s/n, E-48013 Bilbao (Spain)

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are presented, and guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and shown how to adapt those strategies to their own case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
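
    To make the tutorial's workflow concrete, here is a minimal, hypothetical PLS-DA sketch in Python with scikit-learn. Synthetic spectra stand in for real NIR hyperspectral pixels, and regressing one-hot class indicators with PLS is one common way to implement PLS-DA; the class structure, band count, and component count are all illustrative.

```python
# Minimal PLS-DA sketch: classify pixel spectra into material classes.
# Synthetic data stand in for real NIR hyperspectral measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per_class, n_bands, n_classes = 100, 200, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])          # spectra (one per row)
y = np.repeat(np.arange(n_classes), n_per_class)    # class labels
Y = np.eye(n_classes)[y]                            # one-hot responses

pls = PLSRegression(n_components=10)                # latent variables
pls.fit(X, Y)
y_hat = pls.predict(X).argmax(axis=1)               # class = largest response
print("training accuracy:", (y_hat == y).mean())
```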

  16. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purpose. Two main frameworks are  described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed.  Numerous applications, covering remote sensing images, biological and medical imaging, are detailed.  This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  17. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  18. Feasibility Study of Low-Cost Image-Based Heritage Documentation in Nepal

    Science.gov (United States)

    Dhonju, H. K.; Xiao, W.; Sarhosis, V.; Mills, J. P.; Wilkinson, S.; Wang, Z.; Thapa, L.; Panday, U. S.

    2017-02-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, and educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g. earthquakes, flooding and rainfall), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling of cultural heritage sites, and explores the practicality of using photogrammetry in Nepal. The full pipeline of 3D modelling for heritage documentation and conservation, including visualisation, reconstruction, and structural analysis, is proposed. In addition, crowdsourcing is discussed as a method of data collection of growing prominence.

  19. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  20. Image processing, analysis, measurement, and quality

    International Nuclear Information System (INIS)

    Hughes, G.W.; Mantey, P.E.; Rogowitz, B.E.

    1988-01-01

    Topics covered in these proceedings include: image acquisition, image processing and analysis, electronic vision, IR imaging, measurement and quality, spatial vision and spatial sampling, and contrast-detail curve measurement and analysis in radiological imaging

  1. Evidence of addiction by anesthesiologists as documented by hair analysis.

    Science.gov (United States)

    Kintz, P; Villain, M; Dumestre, V; Cirimele, V

    2005-10-04

    Chemical dependency is a disease that can affect all professions. Among health care professionals, anesthesiologists represent a specific group. Numerous factors have been proposed to explain the high incidence of drug abuse among anesthesiologists. These include: easy access to potent drugs, particularly narcotics; the highly addictive potential of agents with which they are in contact; and easy diversion of these agents, since only small doses will initially provide the effect desired by the abuser. Opioids are the drugs of choice for anesthesiologists, and among them fentanyl and sufentanil are the most commonly used. Alcohol is mostly abused by older anesthesiologists. Propofol, ketamine, thiopental and midazolam are also abused. In fact, all but quaternary ammonium drugs can be observed. Signs and symptoms of addiction in the hospital workplace include: unusual changes in behavior, desire to work alone, refusal of lunch relief or breaks, volunteering for extra cases and call, coming in early and leaving late, frequent restroom breaks, weight loss and pale skin, malpractice, behind on charts .... Toxicological investigations are difficult, as the drugs of interest are difficult to test for. In most cases, the half-lives of the compounds are short and the circulating concentrations low. It is, therefore, necessary to develop tandem mass spectrometry procedures to satisfy the criteria of identification and quantitation. In most cases, blood and/or urine analyses are not useful to document impairment, as these specimens are collected at inadequate moments. Hair analysis therefore appears to be the only way to evidence chronic exposure. Depending on the length of the hair shaft, it is possible to establish a historical record, associated with the pattern of drug use, considering a growth rate of about 1 cm/month. An original procedure was developed to test for fentanyl derivatives. After decontamination with methylene chloride, drugs are extracted from the hair by liquid

  2. Multispectral analysis of multimodal images

    International Nuclear Information System (INIS)

    Kvinnsland, Yngve; Brekke, Njaal; Taxt, Torfinn M.; Gruener, Renate

    2009-01-01

    An increasing number of multimodal images represents a valuable increase in available image information, but at the same time it complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM-algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN-algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images comprises the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that seem sensible. Discussion. The MSA software appears to be a valuable tool for image analysis when multimodal images are at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through, for example, histological sections
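
    One of the unsupervised routes described above (k-means over per-voxel feature vectors) is easy to sketch; the following is a loose illustration with synthetic arrays standing in for co-registered modalities, not the authors' software.

```python
# Sketch of unsupervised multispectral segmentation: stack co-registered
# modality maps (e.g. T1/T2, or perfusion/diffusion) per voxel and cluster
# the feature vectors with k-means. Arrays here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t1 = rng.random((64, 64))        # stand-ins for co-registered modalities
t2 = rng.random((64, 64))
features = np.stack([t1.ravel(), t2.ravel()], axis=1)  # one row per voxel

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(t1.shape)   # per-voxel tissue-class map
```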

  3. A Data Analysis of Naval Air Systems Command Funding Documents

    Science.gov (United States)

    2017-06-01

    value in excess of 146 billion dollars. NAVAIR uses the Navy Enterprise Resource Planning (ERP) system to process its financial transactions and, since its implementation, there has been an increase in the overall number of
    Keywords: enterprise resource planning (ERP), Naval Air Systems Command (NAVAIR), purchase requests (PR), funding documents, Economy Act Order, intragovernmental transfers

  4. Combining Linguistic and Spatial Information for Document Analysis

    NARCIS (Netherlands)

    Aiello, Marco; Monz, Christof; Todoran, Leon

    2000-01-01

    We present a framework for analyzing color documents of complex layout; no assumption is made about the layout. Our framework combines two different sources of information, textual and spatial, in a content-driven bottom-up approach. To analyze the text, shallow natural language processing

  5. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits; eight bits are summarised in one byte. A grey value can therefore take one of 256 (2⁸) values, between 0 and 255. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
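
    Two of the operations described above, shading correction by dividing out a background image and linear contrast stretching of the grey-value histogram, are simple enough to sketch in a few lines of Python/numpy; the demo arrays below are illustrative.

```python
# Sketch of shading correction (divide by a background "white" image)
# and contrast stretching (spread the histogram to the full 0..255 range).
import numpy as np

def shading_correct(image, background):
    """Correct non-uniform illumination by dividing the original image
    by the background image, pixel per pixel."""
    bg = np.maximum(background.astype(float), 1.0)  # avoid division by zero
    return image.astype(float) / bg

def stretch_contrast(image):
    """Expand the brightness scale over the full available 8-bit range."""
    lo, hi = float(image.min()), float(image.max())
    return ((image - lo) / (hi - lo) * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
raw = rng.integers(60, 120, size=(64, 64)).astype(np.uint8)   # dull image
flat = stretch_contrast(shading_correct(raw, np.full_like(raw, 200)))
```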

  6. Flightspeed Integral Image Analysis Toolkit

    Science.gov (United States)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance. It is also useful for object recognition by robots or other autonomous vehicles
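
    The "integral image" structure the abstract mentions is a standard technique; the following is a short Python/numpy sketch of the idea, not FIIAT's C API. Each entry holds the sum of all pixels above and to the left, so any rectangular sum costs four look-ups regardless of rectangle size.

```python
# Sketch of the integral-image caching structure and its O(1) rectangle sum.
import numpy as np

def integral_image(img):
    # Zero-padded cumulative sums; ii[r, c] = sum of img[:r, :c]
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] via four corner look-ups."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```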

  7. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  8. The cigarette pack as image: new evidence from tobacco industry documents.

    Science.gov (United States)

    Wakefield, M; Morley, C; Horan, J K; Cummings, K M

    2002-03-01

    To gain an understanding of the role of pack design in tobacco marketing. A search of tobacco company document sites using a list of specified search terms was undertaken during November 2000 to July 2001. Documents show that, especially in the context of tighter restrictions on conventional avenues for tobacco marketing, tobacco companies view cigarette packaging as an integral component of marketing strategy and a vehicle for (a) creating significant in-store presence at the point of purchase, and (b) communicating brand image. Market testing results indicate that such imagery is so strong as to influence smokers' taste ratings of the same cigarettes when packaged differently. Documents also reveal the careful balancing act that companies have employed in using pack design and colour to communicate the impression of lower tar or milder cigarettes, while preserving perceived taste and "satisfaction". Systematic and extensive research is carried out by tobacco companies to ensure that cigarette packaging appeals to selected target groups, including young adults and women. Cigarette pack design is an important communication device for cigarette brands and acts as an advertising medium. Many smokers are misled by pack design into thinking that cigarettes may be "safer". There is a need to consider regulation of cigarette packaging.

  9. Creating & using specimen images for collection documentation, research, teaching and outreach

    Science.gov (United States)

    Demouthe, J. F.

    2012-12-01

    In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.

  10. 3D Documentation of Archaeological Excavations Using Image-Based Point Cloud

    Directory of Open Access Journals (Sweden)

    Umut Ovalı

    2017-03-01

    Rapid progress in digital technology enables us to create three-dimensional models using digital images. The low cost, time efficiency and accurate results of this method raise the question of whether it can be an alternative to conventional documentation techniques, which generally are 2D orthogonal drawings. Accurate and detailed 3D models of archaeological features have potential for many other purposes besides geometric documentation. This study presents a recent image-based three-dimensional registration technique employed in 2013 at an ancient city in Turkey, using "Structure from Motion" (SfM) algorithms. Commercial software was applied to investigate whether this method can be used as an alternative to other techniques. Mesh models of sections of the excavation site were produced from point clouds generated from the digital photographs. Accuracy assessment of the produced model was carried out by comparing directly measured ground control point coordinates with those derived from the model. The results show that the accuracy is around 1.3 cm.
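
    The accuracy assessment described above amounts to a residual/RMSE computation over ground control points; a minimal Python sketch follows, with entirely illustrative coordinate values in place of the study's survey data.

```python
# Sketch of control-point accuracy assessment: compare directly measured
# coordinates with the corresponding points taken from the SfM model.
import numpy as np

measured = np.array([[10.00, 5.00, 2.00],
                     [12.50, 7.25, 2.10]])      # total-station coordinates
from_model = np.array([[10.01, 5.01, 1.99],
                       [12.49, 7.26, 2.12]])    # point-cloud coordinates

residuals = from_model - measured
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"3D RMSE: {rmse:.3f} m")
```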

  11. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book “Image and Video Processing”, second edition by Thomas Moeslund. The aim...... of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code......

  12. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    2011-01-01

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code......This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book “Image and Video Processing”, second edition by Thomas Moeslund. The aim......

  13. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided diagnosis; shape-based medical navigation; and benchmarking and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  14. Automatic segmentation of subfigure image panels for multimodal biomedical document retrieval

    Science.gov (United States)

    Cheng, Beibei; Antani, Sameer; Stanley, R. Joe; Thoma, George R.

    2011-01-01

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. The task of automatically finding the images in a scientific article that are most useful for determining relevance to a clinical situation is traditionally done using text and is quite challenging. We propose to improve this by associating image features from the entire image and from relevant regions of interest with biomedical concepts described in the figure caption or discussion in the article. However, images used in scientific article figures are often composed of multiple panels, where each sub-figure (panel) is referenced in the caption using alphanumeric labels, e.g. Figure 1(a), 2(c), etc. It is necessary to separate individual panels from a multi-panel figure as a first step toward automatic annotation of images. In this work we present methods that make our previously reported efforts more robust. Specifically, we address the limitation in segmenting figures that do not exhibit explicit inter-panel boundaries, e.g. illustrations, graphs, and charts. We present a novel hybrid clustering algorithm based on particle swarm optimization (PSO) with a fuzzy logic controller (FLC) to locate related figure components in such images. Results from our evaluation are very promising, with 93.64% panel detection accuracy for regular (non-illustration) figure images and 92.1% accuracy for illustration images. A computational complexity analysis also shows that PSO is an optimal approach with relatively low computation time. The accuracy of separating these two types of images is 98.11% and is achieved using a decision tree.
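
    For context, the simple baseline that the authors go beyond, splitting at explicit white inter-panel gaps via a projection profile, can be sketched in a few lines of Python; this is an illustration of that baseline only, not the paper's PSO/FLC method, and all thresholds are illustrative.

```python
# Baseline panel splitting: find fully-white column gaps in a grayscale
# figure and return the (start, end) column range of each panel.
import numpy as np

def split_columns(gray, white=250):
    is_gap = (gray >= white).all(axis=0)       # columns that are all white
    panels, start = [], None
    for col, gap in enumerate(is_gap):
        if not gap and start is None:
            start = col                        # a panel begins
        if gap and start is not None:
            panels.append((start, col))        # the panel ends at the gap
            start = None
    if start is not None:
        panels.append((start, gray.shape[1]))
    return panels

img = np.full((40, 100), 255, dtype=np.uint8)
img[5:35, 10:40] = 0                           # panel 1
img[5:35, 55:90] = 0                           # panel 2
print(split_columns(img))                      # -> [(10, 40), (55, 90)]
```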

  15. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, a system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described [fr

  16. DNA Analysis and Document Examination: The Impact of Each Technique on Respective Analyses.

    Science.gov (United States)

    Parsons, Lauren; Sharfe, Gordon; Vintiner, Sue

    2016-01-01

    Threatening letters, counterfeit documents, and anonymous notes are commonly encountered in criminal situations. Such handwritten documents may carry DNA transferred from the writer's hands and lower arms when these areas come into contact with the document. As any DNA transferred is likely to be at a low level, sensitive low copy number (LCN) DNA analysis can be employed for testing document exhibits. In this study, we determine the locations on the document that are most commonly touched during writing and handling and compare DNA recovery from these sites. We describe the impact of DNA sampling on subsequent document examination techniques, including the ESDA®, and likewise the effect of the ESDA® and two other document examination techniques on subsequent DNA analysis. The findings from this study suggest that DNA results can be obtained through targeted sampling of document evidence, but that care is required when ordering these examination strategies. © 2015 American Academy of Forensic Sciences.

  17. Promotion of physical activity in the European region: content analysis of 27 national policy documents

    DEFF Research Database (Denmark)

    Daugbjerg, Signe B; Kahlmeier, Sonja; Racioppi, Francesca

    2009-01-01

    search methods, 49 national policy documents on physical activity promotion were identified. An analysis grid covering key features was developed for the analysis of the 27 documents published in English. RESULTS: Analysis showed that many general recommendations for policy developments are being...... a noticeable development of national policy documents on physical activity promotion. Following principles for policy development more closely could increase the effectiveness of their preparation and implementation further....

  18. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of extracting quantitative information from radiographed objects.

  19. Image analysis for DNA sequencing

    International Nuclear Information System (INIS)

    Palaniappan, K.; Huang, T.S.

    1991-01-01

    This paper reports that there is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, such as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis in order to reduce the length of time required to obtain sequence data and reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information
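
    The band-location step described above is essentially peak detection on a one-dimensional lane profile; here is a loose Python sketch using scipy, with a synthetic profile in place of real autoradiograph data and illustrative peak parameters.

```python
# Sketch of band detection: locate bands (one per nucleotide) as peaks
# in an aligned one-dimensional lane profile.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
x = np.arange(500)
profile = sum(np.exp(-0.5 * ((x - p) / 4.0) ** 2) for p in (60, 140, 260, 410))
profile += 0.05 * rng.random(x.size)            # background noise

# Bands must rise above background and be separated by a minimum distance
peaks, _ = find_peaks(profile, height=0.5, distance=20)
print("band positions:", peaks)
```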

  20. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science

    Science.gov (United States)

    Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu

    2016-01-01

    We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) for deciphering obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired the TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.
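
    Estimating a luminescence lifetime from gated measurements typically comes down to fitting an exponential decay to intensity versus delay time; the Python sketch below illustrates the idea with synthetic values (amplitude, lifetime, and noise level are all invented), not the paper's instrument data.

```python
# Sketch of lifetime estimation: fit a single exponential decay to
# luminescence intensity sampled at several gate delay times.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau):
    return a * np.exp(-t / tau)

t = np.linspace(0, 50e-6, 20)                   # delay times [s]
counts = decay(t, 1000.0, 8e-6) \
         + np.random.default_rng(3).normal(0, 5, t.size)

(a, tau), _ = curve_fit(decay, t, counts, p0=(counts[0], 10e-6))
print(f"estimated lifetime: {tau * 1e6:.2f} microseconds")
```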

  1. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    Science.gov (United States)

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  2. Documentation of SPECTROM-55: A finite element thermohydrogeological analysis program

    International Nuclear Information System (INIS)

    Osnes, J.D.; Ratigan, J.L.; Loken, M.C.; Parrish, D.K.

    1985-12-01

    SPECTROM-55 is a finite element computer program developed by RE/SPEC Inc. for analyses of coupled heat and fluid transfer through fully saturated porous media. The theoretical basis for the mathematical model, the implementation of the mathematical model into the computer code, the verification and validation efforts with the computer code, and the code support and continuing documentation are described in this document. The program is especially suited for analyses of the regional hydrogeology in the vicinity of a heat-generating nuclear waste repository. These applications typically involve forced and free convection in a ground-water flow system. The program provides transient or steady-state temperatures, pressures, and fluid velocities resulting from the application of a variety of initial and boundary conditions to bodies with complicated shapes. The boundary conditions include constant heat and fluid fluxes, convective heat transfer, constant temperature, and constant pressure. Initial temperatures and pressures can be specified. Composite systems of anisotropic materials, such as geologic strata, can be defined in either planar or axisymmetric configurations. Constant or varying volumetric heat generation, such as decaying heat generation from radioactive waste, can be specified

  3. Documentation of SPECTROM-55: A finite element thermohydrogeological analysis program

    International Nuclear Information System (INIS)

    Osnes, J.D.; Ratigan, J.L.; Loken, M.C.; Parrish, D.K.

    1989-01-01

    SPECTROM-55 is a finite element computer program for analyses of coupled heat and fluid transfer through fully saturated porous media. The code is part of the SPECTROM (Special Purpose Engineering Codes for Thermal/Rock Mechanics) series of special-purpose finite element programs that address the many unique rock mechanics problems resulting from storage of radioactive waste in geologic formations. This document presents the theoretical basis for the mathematical model, the finite element formulation of the problem, and a description of the input data for the program, along with details about program support and continuing documentation. The program is especially suited for analyses of the regional hydrogeology in the vicinity of a heat-generating nuclear waste repository. These applications typically involve forced and free convection in a ground-water flow system. The program provides transient or steady-state temperatures, pressures, and fluid velocities resulting from the application of a variety of initial and boundary conditions to bodies with complicated shapes. The boundary conditions include constant heat and fluid fluxes, convective heat transfer, constant temperature, and constant pressure. Initial temperatures and pressures can be specified. Composite systems of anisotropic materials, such as geologic strata, can be defined in either planar or axisymmetric configurations. Constant or varying volumetric heat generation, such as decaying heat generation from radioactive waste, can be specified
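
    SPECTROM-55 itself is not reproduced here; as a generic illustration of the kind of system such finite element codes assemble and solve, the following Python sketch handles 1-D steady-state heat conduction with fixed-temperature boundary conditions (material values and boundary temperatures are invented).

```python
# Minimal 1-D steady-state heat-conduction finite element sketch:
# assemble 2-node element stiffness matrices and solve K T = F.
import numpy as np

n_el, L, k = 10, 1.0, 2.0          # elements, length [m], conductivity
h = L / n_el                       # element size
K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):              # assemble element stiffness (k/h)[1 -1; -1 1]
    ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke

F = np.zeros(n_el + 1)             # no internal heat generation
# Dirichlet BCs: T = 100 at x = 0, T = 20 at x = L (illustrative)
K[0, :], K[0, 0], F[0] = 0.0, 1.0, 100.0
K[-1, :], K[-1, -1], F[-1] = 0.0, 1.0, 20.0

T = np.linalg.solve(K, F)          # nodal temperatures (linear profile here)
```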

  4. Comparative Analysis of Document level Text Classification Algorithms using R

    Science.gov (United States)

    Syamala, Maganti; Nalini, N. J., Dr; Maguluri, Lakshamanaphaneendra; Ragupathy, R., Dr.

    2017-08-01

    Over the past few decades, tremendous volumes of data have become available on the Internet, in either structured or unstructured form. With this exponential growth of information, there is an emergent need for text classifiers. Text mining is an interdisciplinary field which draws on information retrieval, data mining, machine learning, statistics and computational linguistics. To handle this situation, a wide range of supervised learning algorithms has been introduced. Among these, K-Nearest Neighbor (KNN) is the simplest and an efficient classifier in the text classification family. But KNN suffers from imbalanced class distribution and noisy term features. To cope with this challenge, we use document-based centroid dimensionality reduction (CentroidDR) using R programming. By combining these two text classification techniques, KNN and centroid classifiers, we propose a scalable and effective flat classifier, called MCenKNN, which works substantially better than CenKNN.
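
    Although the paper works in R, the two classifier families it combines are easy to contrast in a short Python/scikit-learn sketch over a toy corpus; the documents, labels, and query below are illustrative, and this is not the paper's MCenKNN algorithm.

```python
# Toy contrast of plain kNN versus a centroid-based classifier
# over TF-IDF document vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

docs = ["the goalkeeper saved a late penalty",
        "midfield pressing decided the match",
        "the central bank raised interest rates",
        "markets fell on new inflation data"]
labels = ["sport", "sport", "finance", "finance"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                    # documents -> term vectors

knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
centroid = NearestCentroid().fit(X, labels)    # one prototype per class

query = vec.transform(["rates rose as inflation persisted"])
print(knn.predict(query), centroid.predict(query))
```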

  5. Multispectral Imaging Broadens Cellular Analysis

    Science.gov (United States)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  6. K West integrated water treatment system subproject safety analysis document

    International Nuclear Information System (INIS)

    SEMMENS, L.S.

    1999-01-01

    This Accident Analysis evaluates unmitigated accident scenarios, and identifies Safety Significant and Safety Class structures, systems, and components for the K West Integrated Water Treatment System

  7. K West integrated water treatment system subproject safety analysis document

    Energy Technology Data Exchange (ETDEWEB)

    SEMMENS, L.S.

    1999-02-24

    This Accident Analysis evaluates unmitigated accident scenarios, and identifies Safety Significant and Safety Class structures, systems, and components for the K West Integrated Water Treatment System.

  8. Moderate Resolution Imaging Spectroradiometer (MODIS) MOD21 Land Surface Temperature and Emissivity Algorithm Theoretical Basis Document

    Science.gov (United States)

    Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.

    2016-01-01

    This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32; and a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST product and emissivity for seven MODIS bands: 20, 22, 23, 29, 31-33.
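
    For context, the heritage generalized split-window retrieval referenced above is commonly written in a form similar to the following (a paraphrase from the split-window literature, not the MOD21 TES equations; the coefficients b_i are regression values depending on view angle and column water vapor):

$$
T_s = b_0 + \left(b_1 + b_2\,\frac{1-\bar\varepsilon}{\bar\varepsilon} + b_3\,\frac{\Delta\varepsilon}{\bar\varepsilon^{2}}\right)\frac{T_{31}+T_{32}}{2} + \left(b_4 + b_5\,\frac{1-\bar\varepsilon}{\bar\varepsilon} + b_6\,\frac{\Delta\varepsilon}{\bar\varepsilon^{2}}\right)\frac{T_{31}-T_{32}}{2}
$$

    where $T_{31}$ and $T_{32}$ are the band-31/32 brightness temperatures, $\bar\varepsilon = (\varepsilon_{31}+\varepsilon_{32})/2$, and $\Delta\varepsilon = \varepsilon_{31}-\varepsilon_{32}$.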

  9. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  10. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    Science.gov (United States)

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.

  11. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

    Background: With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results: In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.

  12. Communications data delivery system analysis : public workshop read-ahead document.

    Science.gov (United States)

    2012-04-09

    This document presents an overview of work conducted to date around development and analysis of communications data delivery systems for supporting transactions in the connected vehicle environment. It presents the results of technical analysis of ...

  13. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdelialil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail, with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing
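
    A minimal taste of wavelet multiresolution analysis in Python (assuming the optional PyWavelets package is installed): decompose an image into approximation and detail subbands over several scales, then reconstruct it exactly.

```python
# 2-D multiresolution decomposition and perfect reconstruction with pywt.
import numpy as np
import pywt

img = np.random.default_rng(4).random((128, 128))
coeffs = pywt.wavedec2(img, wavelet="db2", level=3)   # multiscale transform
rec = pywt.waverec2(coeffs, wavelet="db2")            # inverse transform
assert np.allclose(img, rec)                          # perfect reconstruction
```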

  14. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    We propose a novel approach for video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were carried out on a set of 242 video documents, and the results show the efficiency of our proposals.
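
    As a loose illustration of the idea behind such a matrix (a simplified sketch, not the paper's exact TRM definition), one can classify the pairwise temporal relation between segment intervals and tabulate the results; the segment spans and the reduced relation set below are invented.

```python
# Sketch of a pairwise temporal-relation table over event intervals.
def relation(a, b):
    """Relation of interval a = (start, end) to interval b."""
    if a[1] <= b[0]:
        return "before"
    if b[1] <= a[0]:
        return "after"
    if a[0] <= b[0] and b[1] <= a[1]:
        return "contains"
    if b[0] <= a[0] and a[1] <= b[1]:
        return "during"
    return "overlaps"

segments = [(0, 4), (3, 8), (9, 12)]     # e.g. speech/music/applause spans
trm = [[relation(a, b) for b in segments] for a in segments]
print(trm)
```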

  15. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    Science.gov (United States)

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  16. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  17. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution and release testing studies. The review covers the basic principles of the technology and summarizes the main applications in relation to intrinsic dissolution rate determination, excipient compatibility studies and in vitro release characterization of drug substances and vehicles intended for parenteral administration. UV imaging has potential for providing new insights to drug dissolution and release processes in formulation development by real-time monitoring of swelling, precipitation, diffusion and partitioning phenomena. Limitations of current instrumentation are discussed and a perspective to new...

  18. Autoradiography and automated image analysis

    International Nuclear Information System (INIS)

    Vardy, P.H.; Willard, A.G.

    1982-01-01

    Limitations of automated image analysis and solutions to the problems encountered are discussed. With transmitted light, unstained plastic sections with planar profiles should be used. Stains potentiate the signal, so that the television camera registers grains as falsely larger areas of low light intensity. Unfocussed grains in paraffin sections will not be seen by image analysers due to changes in darkness and size. With incident illumination, the use of crossed polars, oil objectives and an oil-filled light trap continuous with the base of the slide will reduce glare. However, this procedure so enormously attenuates the light reflected by silver grains that detection may be impossible. Autoradiographs should then be photographed and the negative images of silver grains on film analysed automatically using transmitted light

  19. Underground Test Area Subproject Phase I Data Analysis Task. Volume VIII - Risk Assessment Documentation Package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-12-01

    Volume VIII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the risk assessment documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  20. Underground Test Area Subproject Phase I Data Analysis Task. Volume VII - Tritium Transport Model Documentation Package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-12-01

    Volume VII of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the tritium transport model documentation. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.

  1. A Document Analysis of Teacher Evaluation Systems Specific to Physical Education

    Science.gov (United States)

    Norris, Jason M.; van der Mars, Hans; Kulinna, Pamela; Kwon, Jayoun; Amrein-Beardsley, Audrey

    2017-01-01

    Purpose: The purpose of this document analysis study was to examine current teacher evaluation systems, understand current practices, and determine whether the instrumentation is a valid measure of teaching quality as reflected in teacher behavior and effectiveness specific to physical education (PE). Method: An interpretive document analysis…

  2. CoIN: a network analysis for document triage.

    Science.gov (United States)

    Hsu, Yi-Yu; Kao, Hung-Yu

    2013-01-01

    In recent years, there has been a rapid increase in the number of medical articles: the number of articles in PubMed has grown exponentially, and with it the workload for biocurators. Under these circumstances, a system that can automatically determine in advance which articles have a higher priority for curation can effectively reduce the workload of biocurators. Determining how to effectively find the articles required by biocurators has therefore become an important task. In the triage task of BioCreative 2012, we proposed the Co-occurrence Interaction Nexus (CoIN) for learning and exploring relations in articles. We constructed a co-occurrence analysis system, which is applicable to PubMed articles and suitable for gene, chemical and disease queries. CoIN uses co-occurrence features and their network centralities to assess the influence of curatable articles from the Comparative Toxicogenomics Database. The experimental results show that our network-based approach combined with co-occurrence features can effectively classify curatable and non-curatable articles. CoIN also allows biocurators to survey the ranking lists for specific queries without reviewing meaningless information. At BioCreative 2012, CoIN achieved a 0.778 mean average precision in the triage task, thus finishing in second place among all participants. Database URL: http://ikmbio.csie.ncku.edu.tw/coin/home.php.
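
    As a rough illustration of the co-occurrence network idea behind CoIN, the following Python sketch (using networkx) builds an entity co-occurrence graph from hypothetical per-article annotations and ranks articles by the summed degree centrality of the entities they mention; CoIN's actual features, centrality measures and scoring are not specified in this abstract.

        import itertools
        import networkx as nx   # pip install networkx

        # Hypothetical per-article entity mentions (genes/chemicals/diseases).
        articles = {
            "PMID1": {"TP53", "MDM2", "doxorubicin"},
            "PMID2": {"TP53", "doxorubicin"},
            "PMID3": {"BRCA1", "MDM2"},
        }

        # Co-occurrence graph: entities are nodes, shared articles add edge weight.
        G = nx.Graph()
        for entities in articles.values():
            for u, v in itertools.combinations(sorted(entities), 2):
                w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
                G.add_edge(u, v, weight=w)

        centrality = nx.degree_centrality(G)

        # Rank articles by the summed centrality of their entities,
        # a crude stand-in for a curatability score.
        scores = {pmid: sum(centrality[e] for e in ents)
                  for pmid, ents in articles.items()}
        print(sorted(scores.items(), key=lambda kv: -kv[1]))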

  3. Model-based document categorization employing semantic pattern analysis and local structure clustering

    Science.gov (United States)

    Fume, Kosei; Ishitani, Yasuto

    2008-01-01

    We propose a document categorization method based on a document model that can be defined externally for each task, and that categorizes Web content or business documents into a target category according to their similarity to the model. The main feature of the proposed method is that it extracts semantics from an input document in two ways: the semantics of terms are extracted by semantic pattern analysis, while implicit meanings of document substructure are identified by a bottom-up text clustering technique focusing on the similarity of text-line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.

  4. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    In Part 1 the authors report their experiences with the Canon Ex1 Hi camcorder and the documentation possibilities offered by modern video technology. Application examples in legal medicine and criminalistics are described: autopsy, crime scene, reconstruction of crimes, etc. Online video documentation of microscopic sessions makes the discussion of findings easier. The use of video films for instruction has been well received. The possibilities of video documentation can be extended by digitizing (Part 2). Two frame grabbers are presented with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applicability of image libraries is discussed.

  5. Remote Sensing Digital Image Analysis

    Science.gov (United States)

    Richards, John A.; Jia, Xiuping

    Remote Sensing Digital Image Analysis provides the non-specialist with an introduction to quantitative evaluation of satellite and aircraft derived remotely sensed data. Each chapter covers the pros and cons of digital remotely sensed data, without detailed mathematical treatment of computer-based algorithms, but in a manner conducive to an understanding of their capabilities and limitations. Problems conclude each chapter. This fourth edition has been developed to reflect the changes that have occurred in this area over the past several years.

  6. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  7. Standardized cine-loop documentation in abdominal ultrasound facilitates offline image interpretation.

    Science.gov (United States)

    Dormagen, Johann Baptist; Gaarder, Mario; Drolsum, Anders

    2015-01-01

    One of the main disadvantages of conventional ultrasound is its operator dependency, which might impede the reproducibility of sonographic findings. A new approach with cine-loops and standardized scan protocols can overcome this drawback. Aim: to compare abdominal ultrasound findings of immediate bedside reading by the performing radiologist with offline reading by a non-performing radiologist, using standardized cine-loop sequences. Over a 6-month period, three radiologists performed 140 dynamic ultrasound organ-based examinations in 43 consecutive outpatients. Examination protocols were standardized and included predefined probe positions and sequences of short cine-loops of the liver, gallbladder, pancreas, kidneys, and urinary bladder, covering the organs completely in two planes. After the bedside examinations, the studies were reviewed and read out immediately by the performing radiologist. Image quality was rated from 1 (no diagnostic value) to 5 (excellent cine-loop quality). Offline reading was performed blinded, by a radiologist who had not performed the examination. Bedside and offline readings were compared with each other and with consensus results. In 140 examinations, consensus reading revealed 21 cases of renal disorders, 17 cases of liver and bile pathology, and four cases of bladder pathology. Overall inter-observer agreement was 0.73 (95% CI 0.61-0.91), with the lowest agreement for findings of the urinary bladder (0.36) and the highest agreement in liver examinations (0.90). Disagreements between the two readings were seen in nine kidney, three bladder, one pancreas, one bile system, and one liver examination, giving a total mismatch rate of 11%. Nearly all cases of mismatch were of minor clinical significance. The median image quality was 3 (range, 2-5), with most examinations deemed a quality of 3. Compared to consensus reading, overall accuracy was 96% for bedside reading and 94% for offline reading. Standardized cine

  8. Organ donation in the ICU: A document analysis of institutional policies, protocols, and order sets.

    Science.gov (United States)

    Oczkowski, Simon J W; Centofanti, John E; Durepos, Pamela; Arseneau, Erika; Kelecevic, Julija; Cook, Deborah J; Meade, Maureen O

    2018-04-01

    To better understand how local policies influence organ donation rates. We conducted a document analysis of our ICU organ donation policies, protocols and order sets. We used a systematic search of our institution's policy library to identify documents related to organ donation. We used Mindnode software to create a publication timeline, basic statistics to describe document characteristics, and qualitative content analysis to extract document themes. Documents were retrieved from Hamilton Health Sciences, an academic hospital system with a high volume of organ donation, from database inception to October 2015. We retrieved 12 active organ donation documents, including six protocols, two policies, two order sets, and two unclassified documents, a majority (75%) after the introduction of donation after circulatory death in 2006. Four major themes emerged: organ donation process, quality of care, patient and family-centred care, and the role of the institution. These themes indicate areas where documented institutional standards may be beneficial. Further research is necessary to determine the relationship of local policies, protocols, and order sets to actual organ donation practices, and to identify barriers and facilitators to improving donation rates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  10. Language Ideology or Language Practice? An Analysis of Language Policy Documents at Swedish Universities

    Science.gov (United States)

    Björkman, Beyza

    2014-01-01

    This article presents an analysis and interpretation of language policy documents from eight Swedish universities with regard to intertextuality, authorship and content analysis of the notions of language practices and English as a lingua franca (ELF). The analysis is then linked to Spolsky's framework of language policy, namely language…

  11. Public health human resources: a comparative analysis of policy documents in two Canadian provinces

    Science.gov (United States)

    2014-01-01

    Background Amidst concerns regarding the capacity of the public health system to respond rapidly and appropriately to threats such as pandemics and terrorism, along with changing population health needs, governments have focused on strengthening public health systems. A key factor in a robust public health system is its workforce. As part of a nationally funded study of public health renewal in Canada, a policy analysis was conducted to compare public health human resources-relevant documents in two Canadian provinces, British Columbia (BC) and Ontario (ON), as they each implement public health renewal activities. Methods A content analysis of policy and planning documents from government and public health-related organizations was conducted by a research team comprised of academics and government decision-makers. Documents published between 2003 and 2011 were accessed (BC = 27; ON = 20); documents were either publicly available or internal to government and excerpted with permission. Documentary texts were deductively coded using a coding template developed by the researchers based on key health human resources concepts derived from two national policy documents. Results Documents in both provinces highlighted the importance of public health human resources planning and policies; this was particularly evident in early post-SARS documents. Key thematic areas of public health human resources identified were: education, training, and competencies; capacity; supply; intersectoral collaboration; leadership; public health planning context; and priority populations. Policy documents in both provinces discussed the importance of an educated, competent public health workforce with the appropriate skills and competencies for the effective and efficient delivery of public health services. Conclusion This policy analysis identified progressive work on public health human resources policy and planning with early documents providing an inventory of issues to be

  12. Analysis of a risk prevention document using dependability techniques: a first step towards an effectiveness model

    Science.gov (United States)

    Ferrer, Laetitia; Curt, Corinne; Tacnet, Jean-Marc

    2018-04-01

    Major hazard prevention is a main challenge given that it is specifically based on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law specifies little concerning their content; one can therefore question the impact on the general population of the way the document is actually created. The purpose of our work is thus to propose an analytical methodology to evaluate the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). The DICRIM has to be produced by mayors and addressed to the public to provide information on major hazards affecting their municipalities. An analysis of the document's legal compliance is carried out through the identification of regulatory detection elements. These are applied to a database of 30 DICRIMs. This analysis leads to a discussion of points such as the usefulness of the missing elements. External and internal function analysis permits the identification of the form and content requirements, and the service and technical functions, of the document and its components (here, its sections). These results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define failures and to identify detection elements. This permits the evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used to build, in future work, a decision-support model for the municipality (or specialised consulting firms) in charge of drawing up such documents.

  13. Analysis and evaluation of magnetism of black toners on documents printed by electrophotographic systems.

    Science.gov (United States)

    Biedermann, A; Bozza, S; Taroni, F; Fürbach, M; Li, B; Mazzella, W D

    2016-10-01

    This paper reports on a study to assess the potential of magnetism measurements, using a proprietary magnetic analysis system, for the routine analysis of toners on documents printed by black-and-white electrophotographic systems. Magnetic properties of black toners on documents printed by a number of different devices were measured and compared. Our results indicate that the analysis of magnetism is complementary to traditional methods for analysing black toners, such as FTIR. Further, we find that the analysis of magnetism is realistically applicable in closed-set cases, that is, when the number of potential printing devices can be clearly defined. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Morphological segmentation for sagittal plane image analysis.

    Science.gov (United States)

    Bezerra, F N; Paula, I C; Medeiros, F S; Ushizima, D M; Cintra, L S

    2010-01-01

    This paper introduces a morphological image segmentation method that applies the watershed transform with markers to scale-space smoothed images, thereby providing images for clinical monitoring and analysis of patients. The database comprises sagittal-plane images, taken with a digital camera, of patients undergoing Global Postural Reeducation (GPR) physiotherapy treatment. Orthopaedic specialists can use these segmented images to diagnose posture problems, assess the evolution of physiotherapy treatment and thus reduce diagnostic errors due to subjective analysis.
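
    A generic marker-controlled watershed of the kind described here can be sketched with scikit-image and SciPy as follows; the Gaussian smoothing scale, Otsu thresholding and distance-transform markers are common default choices rather than the paper's exact pipeline.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import filters, measure, segmentation
        from skimage.feature import peak_local_max

        def marker_watershed(image, sigma=2.0, min_distance=10):
            """Marker-controlled watershed on a scale-space (Gaussian) smoothed image."""
            smoothed = filters.gaussian(image, sigma=sigma)         # scale-space smoothing
            binary = smoothed > filters.threshold_otsu(smoothed)    # rough foreground mask
            distance = ndi.distance_transform_edt(binary)           # distance to background
            coords = peak_local_max(distance, min_distance=min_distance,
                                    labels=measure.label(binary))   # one seed per object
            mask = np.zeros(distance.shape, dtype=bool)
            mask[tuple(coords.T)] = True
            markers, _ = ndi.label(mask)
            return segmentation.watershed(-distance, markers, mask=binary)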

  15. Electronically signed documents in health care--analysis and assessment of data formats and transformation.

    Science.gov (United States)

    Hollerbach, A; Brandner, R; Bess, A; Schmücker, R; Bergh, B

    2005-01-01

    Our objectives were to analyze and assess data formats for their suitability for conclusive and secure long-term archiving and to develop a concept for legally secure transformation of electronically signed documents that are not available in data formats appropriate for long-term archiving. On the basis of literature review and Internet searches we developed general evaluation criteria to assess data formats with regard to their suitability for conclusive and secure long-term archiving. The assessment of data formats refers to format specifications and available literature. For the analyses of the transformation of signed documents we analyzed legal requirements on the basis of laws and ordinances as well as technical requirements by means of literature reviews, Internet searches and technical specifications. The following evaluation criteria are suited for this kind of assessment of data formats: transparency and standardization, stability, presentation and security. According to our assessment the following data formats are most suitable for conclusive and secure long-term archiving: PDF for formatted and unstructured text documents, XML for markup languages, TIFF for images in general, DICOM for medical images and S/MIME for the storage of e-mail. To transform electronically signed documents we propose an elementary procedure and universal basic model in form of an XML schema definition that includes the necessary legal and technical information. If electronic documents are to replace paper-based documents in patient records, they have to conform to the criteria for secure long-term archiving. The analyzed data formats are to be extended by mechanisms to guarantee the long-term security of electronic signatures. To transform large quantities of documents in a legally secure way, our basic model has to be extended for automated procedures.

  16. The critical path method to analyze and modify OR-workflow: integration of an image documentation system.

    Science.gov (United States)

    Endress, A; Aydeniz, B; Wallwiener, D; Kurek, R

    2006-01-01

    Intraoperative image documentation is becoming more and more important for quality management in medicine, in terms of forensic documentation, research and teaching. Up to now, no software-based OR image documentation system fits satisfactorily into an OR workflow. The objective of this study is to show system integration in a clinical workflow transparently, in order to evaluate demands on future system developments. An example OR workflow is presented for the department of obstetrics and gynecology at the University of Tuebingen (Germany). Twelve representative gynecologic laparoscopic surgeries were analyzed using the critical path method (CPM). CPM network diagrams are shown for the actual laparoscopic workflow and for a workflow including an OR image documentation system. Under the constraint that the total time of the actual workflow must not increase, the maximum permissible system operation time can be calculated for each period of time. Before surgery the maximum system operation time is x_max = 7.3 minutes. After surgery it has to be assumed that system operation will increase total workflow time. Using the CPM to analyze requirements for system integration in a medical workflow has not been investigated before. It is an appropriate method to transparently show integration possibilities and to define workflow-based requirements for the development process of new systems.
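
    The forward pass of the critical path method used in this study can be sketched in a few lines of Python; the OR tasks and durations below are hypothetical stand-ins, and the 7-minute imaging-system task merely echoes the order of magnitude of the x_max reported above.

        def critical_path(tasks):
            """Critical Path Method on a DAG.

            tasks: {name: (duration, [predecessors])}
            Returns earliest finish times and one critical path."""
            earliest = {}
            def finish(t):
                if t not in earliest:
                    dur, preds = tasks[t]
                    earliest[t] = dur + max((finish(p) for p in preds), default=0)
                return earliest[t]
            for t in tasks:
                finish(t)
            # Walk back from the latest-finishing task along binding predecessors.
            path = [max(earliest, key=earliest.get)]
            while tasks[path[-1]][1]:
                path.append(max(tasks[path[-1]][1], key=lambda p: earliest[p]))
            return earliest, list(reversed(path))

        # Hypothetical slice of an OR workflow (durations in minutes).
        tasks = {
            "prep_patient":  (15, []),
            "setup_imaging": (7,  []),          # image documentation system
            "anaesthesia":   (20, ["prep_patient"]),
            "surgery":       (90, ["anaesthesia", "setup_imaging"]),
            "documentation": (5,  ["surgery"]),
        }
        earliest, path = critical_path(tasks)
        print(earliest["documentation"], path)
        # -> 130 ['prep_patient', 'anaesthesia', 'surgery', 'documentation']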

  17. Adapting Spectral Co-clustering to Documents and Terms Using Latent Semantic Analysis

    Science.gov (United States)

    Park, Laurence A. F.; Leckie, Christopher A.; Ramamohanarao, Kotagiri; Bezdek, James C.

    Spectral co-clustering is a generic method for computing co-clusters of relational data, such as sets of documents and their terms. Latent semantic analysis is a method of document and term smoothing that can assist in the information retrieval process. In this article we examine the process behind spectral co-clustering for documents and terms, and compare it to latent semantic analysis. We show that both spectral co-clustering and LSA follow the same process, using different normalisation schemes and metrics. By combining the properties of the two co-clustering methods, we obtain an improved co-clustering method for document-term relational data that provides a 33.0% increase in cluster quality.
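
    A minimal sketch of spectral co-clustering in the style of the standard bipartite document-term algorithm is given below: normalise the document-term matrix, take its singular vectors, and run k-means on the stacked, scaled embeddings; the toy matrix is hypothetical and the article's exact normalisation and metric variants are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans

        def spectral_cocluster(A, k):
            """Spectral co-clustering of a document-term count matrix A."""
            d1 = np.sqrt(A.sum(axis=1)) + 1e-12             # document degrees
            d2 = np.sqrt(A.sum(axis=0)) + 1e-12             # term degrees
            An = A / np.outer(d1, d2)                       # An = D1^{-1/2} A D2^{-1/2}
            U, _, Vt = np.linalg.svd(An, full_matrices=False)
            l = int(np.ceil(np.log2(k)))                    # singular vectors used
            Z = np.vstack([U[:, 1:l + 1] / d1[:, None],     # scaled document embedding
                           Vt.T[:, 1:l + 1] / d2[:, None]]) # scaled term embedding
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
            return labels[:A.shape[0]], labels[A.shape[0]:]

        # Two obvious document-term blocks; hypothetical toy data.
        A = np.array([[3, 2, 0, 0],
                      [4, 1, 0, 0],
                      [0, 0, 2, 5],
                      [0, 0, 3, 4.]])
        docs, terms = spectral_cocluster(A, k=2)
        print(docs, terms)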

  18. The practical implementation of integrated safety management for nuclear safety analysis and fire hazards analysis documentation

    International Nuclear Information System (INIS)

    COLLOPY, M.T.

    1999-01-01

    To support the integrated safety management system approach of having a uniform and consistent process, a method has been suggested by the U.S. Department of Energy at Richland and the Project Hanford Procedures for cases when fire hazard analyses and safety analyses are required. This process provides a common-basis approach to the development of the fire hazard analysis and the safety analysis. It permits the preparers of both documents to participate jointly in the development of the hazard analysis process. This paper presents this method of implementing the integrated safety management approach in the development of the fire hazard analysis and the safety analysis, providing consistency of assumptions, consequences, design considerations, and other controls necessary to protect workers, the public, and the environment

  19. Essential issues in the design of shared document/image libraries

    Science.gov (United States)

    Gladney, Henry M.; Mantey, Patrick E.

    1990-08-01

    We consider what is needed to create electronic document libraries which mimic physical collections of books, papers, and other media. The quantitative measures of merit for personal workstations-cost, speed, size of volatile and persistent storage-will improve by at least an order of magnitude in the next decade. Every professional worker will be able to afford a very powerful machine, but databases and libraries are not really economical and useful unless they are shared. We therefore see a two-tier world emerging, in which custodians of information make it available to network-attached workstations. A client-server model is the natural description of this world. In collaboration with several state governments, we have considered what would be needed to replace paper-based record management for a dozen different applications. We find that a professional worker can anticipate most data needs and that (s)he is interested in each clump of data for a period of days to months. We further find that only a small fraction of any collection will be used in any period. Given expected bandwidths, data sizes, search times and costs, and other such parameters, an effective strategy to support user interaction is to bring large clumps from their sources, to transform them into convenient representations, and only then start whatever investigation is intended. A system-managed hierarchy of caches and archives is indicated. Each library is a combination of a catalog and a collection, and each stored item has a primary instance which is the standard by which the correctness of any copy is judged. Catalog records mostly refer to 1 to 3 stored items. Weighted by the number of bytes to be stored, immutable data dominate collections. These characteristics affect how consistency, currency, and access control of replicas distributed in the network should be managed. We present the large features of a design for network document/image library services. A prototype is being built for

  20. Experimental analysis on classification of unmanned aerial vehicle images using the probabilistic latent semantic analysis

    Science.gov (United States)

    Yi, Wenbin; Tang, Hong

    2009-10-01

    In this paper, we present a novel algorithm to classify UAV images through image annotation, a semi-supervised method. During the annotation process, we first divide the whole image into blocks of different sizes and generate suitable visual words, which are the K-means clustering centers or, for small image blocks, simply the pixels themselves. Then, given a set of image blocks for each semantic concept as training data, learning is based on Probabilistic Latent Semantic Analysis (PLSA). The probability distributions of visual words in every document can be learned through the PLSA model. The labeling of every document (image block) is done by computing the similarity of its feature distribution to the distributions of the training documents with the Kullback-Leibler (K-L) divergence. Finally, the classification of the UAV images is done by combining all the image blocks at every block size. The UAV images used in our experiments were acquired during the Sichuan earthquake in 2008. The results show that smaller block sizes yield better classification results.
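
    The labeling step described above, assigning a block to the class whose visual-word distribution is closest under the Kullback-Leibler divergence, can be sketched as follows; the PLSA training itself is omitted and the histograms are hypothetical.

        import numpy as np

        def kl(p, q, eps=1e-12):
            """Kullback-Leibler divergence D(p || q) between two histograms."""
            p = p / p.sum()
            q = q / q.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        def classify_block(block_hist, class_hists):
            """Assign an image block to the class whose visual-word
            distribution is closest in the K-L sense."""
            return min(class_hists, key=lambda c: kl(block_hist, class_hists[c]))

        # Hypothetical visual-word histograms over a 5-word codebook.
        class_hists = {
            "building": np.array([40.0, 5, 3, 1, 1]),
            "rubble":   np.array([5.0, 30, 25, 10, 5]),
        }
        print(classify_block(np.array([6.0, 28, 20, 12, 4]), class_hists))  # rubble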

  1. A secret-sharing-based method for authentication of grayscale document images via the use of the PNG image with a data repair capability.

    Science.gov (United States)

    Lee, Che-Wei; Tsai, Wen-Hsiang

    2012-01-01

    A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
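
    The share arithmetic at the core of the method, Shamir's (k, n) threshold scheme over a prime field, can be sketched as below; the authentication-signal generation, alpha-channel embedding and mapping of share values near 255 described above are omitted, and the small prime is for illustration only.

        import random

        PRIME = 257   # small field for illustration; real schemes use a larger prime

        def make_shares(secret, k, n):
            """Shamir (k, n) threshold shares of an integer secret mod PRIME."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
            poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
            return [(x, poly(x)) for x in range(1, n + 1)]

        def recover(shares):
            """Lagrange interpolation at x = 0 recovers the secret from k shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % PRIME
                        den = den * (xi - xj) % PRIME
                secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
            return secret

        shares = make_shares(123, k=2, n=4)
        print(recover(shares[:2]))   # -> 123, from any two of the four shares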

  2. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    Science.gov (United States)

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  3. Anima: Modular workflow system for comprehensive image data analysis

    Directory of Open Access Journals (Sweden)

    Ville eRantanen

    2014-07-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and preprocessing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at http://www.anduril.org/anima

  4. In-air PIXE set-up for automatic analysis of historical document inks

    Science.gov (United States)

    Budnar, Miloš; Simčič, Jure; Rupnik, Zdravko; Uršič, Mitja; Pelicon, Primož; Kolar, Jana; Strlič, Matija

    2004-06-01

    Iron gall inks were among the writing materials most widely used in historical documents of Western civilization. Due to the corrosive character of the ink, the documents are in danger of being seriously, and in some cases irreversibly, changed. The elemental composition of the inks is important information for taking adequate conservation action [Project InkCor, http://www.infosrvr.nuk.uni-lj.si/jana/Inkcor/index.htm, and references within]. Here, in-air PIXE analysis offers an indispensable tool due to its sensitivity and almost non-destructive character. An experimental approach developed for precise and automatic analysis of documents at the Jožef Stefan Institute Tandetron accelerator is presented. The selected documents were mounted, one at a time, on the positioning board and the chosen ink spots on the sample were irradiated by 1.7 MeV protons. Data acquisition on the selected ink spots is done automatically following the measuring pattern determined prior to the measurement. The chemical elements identified in the documents ranged from Si to Pb; among them, the significant iron gall ink components Fe, S, K, Cu, Zn, Co, Mn and Ni were determined with a precision of ±10%. The measurements were done non-destructively and no visible damage was observed on the irradiated documents.

  5. Image analysis in industrial radiography

    International Nuclear Information System (INIS)

    Lavayssiere, B.

    1993-01-01

    Non-destructive testing in nuclear power plants remains a major EDF objective for the coming decades. To facilitate diagnosis, the expert must be provided with elaborate decision-making aids: contrasted images, noise-free signals, pertinent parameters, ''meaningful'' images. In the field of industrial radiography, offering inspectors a portable system for digitization and subsequent processing of radiographs (ENTRAIGUES) is an improvement in the inspection of primary circuit nozzles. Three major directions were followed: - improvement of images and localization of flaws (2D approach); techniques such as Markov modelling were evaluated and tested, - development of a system which can be transported on site, for digitization, processing and subsequent archiving of inspection radiographs, known as ENTRAIGUES, - development of a program to aid in the analysis of digitized radiographs (''bread-board'' version), offering an ergonomic interface and push-button processing, which is the software component of ENTRAIGUES and uses sophisticated methods: contrast enhancement, background flattening, segmentation. Another objective is to reconstruct a three-dimensional volume on the basis of a few radiographs taken at different incidences, and to estimate the flaw orientation within the piece under study. This information makes sense to experts with regard to the deterioration rate of the flaw; the equipment concerned includes the formed bends in the primary coolant nozzles. This reconstruction problem is ill-posed, and a solution can be obtained by introducing a priori information on the solution. The first step of our algorithm is a classical iterative reconstruction method of the A.R.T. type (Algebraic Reconstruction Techniques), which provides a rough three-dimensional reconstruction of the zone containing the flaw. Then, on this reconstructed zone, we apply a Bayesian restoration method introducing a Markov Random Field (MRF) model. Conclusive results have been obtained. (author)
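
    The A.R.T. step mentioned above is essentially the Kaczmarz iteration: each measured ray sum defines a hyperplane, and the current estimate is repeatedly relaxed toward these hyperplanes. A minimal Python sketch with a hypothetical 2x2 'volume' and three rays:

        import numpy as np

        def art(A, b, iters=200, relax=0.5):
            """Algebraic Reconstruction Technique (Kaczmarz iteration).

            Each projection equation a_i . x = b_i pulls the estimate toward
            its hyperplane; cycling through all rays converges to a solution
            consistent with the measured projections."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(iters):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # Tiny 2x2 "volume" seen by three rays (two rows, one column).
        A = np.array([[1, 1, 0, 0],    # ray through top row
                      [0, 0, 1, 1],    # ray through bottom row
                      [1, 0, 1, 0.]])  # ray through left column
        true = np.array([1.0, 2.0, 3.0, 4.0])
        x = art(A, A @ true)
        print(np.round(x, 2))   # a solution consistent with the three ray sums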

  6. Energy analysis handbook. CAC document 214. [Combining process analysis with input-output analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bullard, C. W.; Penner, P. S.; Pilati, D. A.

    1976-10-01

    Methods are presented for calculating the energy required, directly and indirectly, to produce all types of goods and services. Procedures for combining process analysis with input-output analysis are described. This enables the analyst to focus data acquisition cost-effectively, and to achieve a specified degree of accuracy in the results. The report presents sample calculations and provides the tables and charts needed to perform most energy cost calculations, including the cost of systems for producing or conserving energy.

  7. KAFE - A Flexible Image Analysis Tool

    Science.gov (United States)

    Burkutean, Sandra

    2017-11-01

    We present KAFE, the Keywords of Astronomical FITS-Images Explorer - a web-based FITS image post-processing and analysis tool designed to be applicable in the radio to sub-mm wavelength domain. KAFE was developed to enrich selected FITS files with metadata based on a uniform image analysis approach, as well as to provide advanced image diagnostic plots. It is ideally suited for data mining purposes and multi-wavelength/multi-instrument data samples that require uniform data diagnostic criteria.

  8. Detector systems for imaging neutron activation analysis

    International Nuclear Information System (INIS)

    Dewaraja, Y.K.; Fleming, R.F.

    1994-01-01

    This paper compares the performance of two imaging detector systems for the new technique of Imaging Neutron Activation Analysis (Imaging NAA). The first system is based on secondary electron imaging, and the second employs a position-sensitive charged-particle detector for direct localization of beta particles. The secondary electron imaging system has demonstrated a position resolution of 20 μm. The position-sensitive beta detector has the potential for higher efficiencies, with resolution being a trade-off. Results presented show the feasibility of the two imaging methods for different applications of Imaging NAA

  9. Violence against the aged: a documentary analysis

    Directory of Open Access Journals (Sweden)

    Jacy Aurélia Vieira de Souza

    2007-06-01

    The objective of this study was to analyze data on violence and mistreatment of the elderly recorded in official documents in Fortaleza-CE, Brazil. This retrospective documentary study was carried out at a reference institution in Ceará that officially receives reports of violence against the elderly. Data were collected in the first half of 2005. Of the 424 documents analyzed, 284 (67%) were identified as abandonment of the elderly. As for the aggressor, 207 (49%) were children of the victim. Among the types of violence, 161 cases (38%) were negligence, followed by misappropriation of retirement pensions, 114 (27%); verbal aggression, 79 (19%); and physical aggression, 68 (16%). These events were registered through complaints, mainly to the Alô-Idoso service, 306 (77%). The findings confirm the importance of services dedicated to this issue; however, it is essential that public policies address the social role of the elderly and prioritize the care and protection of this segment of the population within their families and institutions.

  10. A POCS Algorithm Based on Text Features for the Reconstruction of Document Images at Super-Resolution

    Directory of Open Access Journals (Sweden)

    Fengmei Liang

    2016-09-01

    In order to address the uncertainty of existing noise models and the complexity and changeability of the edges and textures of low-resolution document images, this paper presents a projection onto convex sets (POCS) algorithm based on text features. The proposed method preserves edge details and smooths the noise in text images by adding text features as constraints to the original POCS algorithm and converting the fixed threshold to an adaptive one. In this paper, the optimized scale-invariant feature transform (SIFT) algorithm was used for the registration of continuous frames, and finally the image was reconstructed under the improved POCS theoretical framework. Experimental results showed that the algorithm can significantly smooth the noise and eliminate noise caused by the shadows of lines. The lines of the reconstructed text are smoother and the stroke contours of the reconstructed text are clearer, largely eliminating text-edge jitter and enhancing the resolution of the document image text.
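
    One building block of any POCS super-resolution scheme is the data-consistency projection: the high-resolution estimate is corrected wherever its simulated low-resolution observation strays outside a threshold band. The Python sketch below assumes a simple box-average observation model and a fixed threshold delta, whereas the paper's contribution is precisely an adaptive, text-feature-driven threshold; frame registration (SIFT) is omitted.

        import numpy as np

        def project_data_consistency(hr, lr_obs, scale=2, delta=4.0):
            """One POCS projection: force the downsampled HR estimate to lie
            within +/- delta of the observed LR frame (box-average model)."""
            h, w = lr_obs.shape
            hr = hr.astype(float).copy()
            for i in range(h):
                for j in range(w):
                    block = hr[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                    r = lr_obs[i, j] - block.mean()    # residual for this LR pixel
                    if abs(r) > delta:                 # estimate is outside the set
                        corr = r - np.sign(r) * delta  # smallest correction back in
                        block += corr                  # spread equally over the block
            return hr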

  11. Active research fields in anesthesia: a document co-citation analysis of the anesthetic literature.

    Science.gov (United States)

    Jankovic, Milan P; Kaufmann, Mark; Kindler, Christoph H

    2008-05-01

    The expansion of science has resulted in an increased information flow and an exponentially growing number of connections between knowledge in different research fields. In this study, we used methods of scientometric analysis to obtain a conceptual network that forms the structure of active scientific research fields in anesthesia. We extracted from the Web of Science (Institute for Scientific Information) all original articles (n = 3275), including their references (n = 79,972), that appeared in 2003 in all 23 journals listed in the Institute for Scientific Information Journal Citation Reports under the subject heading "Anesthesiology." After identification of highly cited references (≥5 citations), pairs of co-cited references were created and grouped into uniformly structured clusters of documents using a single-linkage, variable-level clustering method. In addition, for each such cluster of documents, we identified corresponding front papers published in 2003, each of which co-cited at least two documents of the cluster core. Active anesthetic research fields were then named by examining the titles of the documents in both the established clusters and their corresponding front papers. These research fields were sorted according to the proportion of recent documents in their cluster core (immediacy index) and further analyzed. Forty-six current anesthetic research fields were identified. The research field named "ProSeal laryngeal mask airway" showed the highest immediacy index (100%), whereas the research fields "Experimental models of neuropathic pain" and "Volatile anesthetic-induced cardioprotection" exhibited the highest level of co-citation strength (level 9). The research field with the largest cluster core, containing 12 homogeneous papers, was "Postoperative nausea and vomiting." The journal Anesthesia & Analgesia published the most front papers, while Anesthesiology published most of the fundamental documents used as references in the front papers

  12. Documentation and Analysis of Children's Experience: An Ongoing Collegial Activity for Early Childhood Professionals

    Science.gov (United States)

    Picchio, Mariacristina; Giovannini, Donatella; Mayer, Susanna; Musatti, Tullia

    2012-01-01

    Systematic documentation and analysis of educational practice can be a powerful tool for continuous support to the professionalism of early childhood education practitioners. This paper discusses data from a three-year action-research initiative carried out by a research agency in collaboration with a network of Italian municipal "nido"…

  13. Spent nuclear fuel project - criteria document spent nuclear fuel final safety analysis report

    International Nuclear Information System (INIS)

    MORGAN, R.G.

    1999-01-01

    The criteria document provides the criteria and planning guidance for developing the Spent Nuclear Fuel (SNF) Final Safety Analysis Report (FSAR). This FSAR will support the US Department of Energy, Richland Operations Office decision to authorize the procurement, installation, installation acceptance testing, startup, and operation of the SNF Project facilities (K Basins, Cold Vacuum Drying Facility, and Canister Storage Building)

  14. The Role of Business Agreements in Defining Textbook Affordability and Digital Materials: A Document Analysis

    Science.gov (United States)

    Raible, John; deNoyelles, Aimee

    2015-01-01

    Adopting digital materials such as eTextbooks and e-coursepacks is a potential strategy to address textbook affordability in the United States. However, university business relationships with bookstore vendors implicitly structure which instructional resources are available and in what manner. In this study, a document analysis was conducted on…

  15. CAED Document Repository

    Science.gov (United States)

    Compliance Assurance and Enforcement Division Document Repository (CAEDDOCRESP) provides internal and external access to Inspection Records, Enforcement Actions, and National Environmental Policy Act (NEPA) documents for all CAED staff. The repository will also include supporting documents, images, etc.

  16. An Implementation of Document Image Reconstruction System on a Smart Device Using a 1D Histogram Calibration Algorithm

    Directory of Open Access Journals (Sweden)

    Lifeng Zhang

    2014-01-01

    In recent years, smart devices equipped with imaging functions have spread widely among consumers. It is very convenient for people to record information using these devices. For example, people can photograph one page of a book in a library, or capture an interesting piece of news on a bulletin board while walking down the street. But sometimes a single shot of the full area cannot provide sufficient resolution for OCR software or for human visual recognition. Therefore, people may prefer to take several partial character images of readable size and then stitch them together in an efficient way. In this study, we propose a print document acquisition method using a device with a video camera. A one-dimensional histogram-based self-calibration algorithm is developed for calibration. Because the computational cost is low, it can be installed on a smartphone. Simulation results show that the calibration and stitching are performed well.

  17. Comparisons of images simultaneously documented by digital subtraction coronary arteriography and cine coronary arteriography

    International Nuclear Information System (INIS)

    Kimura, Koji; Takamiya, Makoto; Yamamoto, Kazuo; Ohta, Mitsushige; Naito, Hiroaki

    1988-01-01

    Using an angiography apparatus capable of simultaneously processing digital subtraction angiograms and cine angiograms, the diagnostic capabilities of both methods for the coronary arteries (DSCAG and Cine-CAG) were compared. Twenty stenotic lesions of the coronary arteries of 11 patients were evaluated using both modalities. The severity of stenosis with DSCAG (512x512x8-bit matrix) was semiautomatically measured on the cathode ray tube (CRT), using images that were either the same size as, or 10 times larger than, the Cine-CAG images enlarged on the screen of a Vanguard cine projector. The negative and positive hard copies of DSCAG images were also compared with those of Cine-CAG. The correlation coefficients of the severity of stenosis by DSCAG and Cine-CAG were as follows: (1) same-size DSCAG images on the CRT versus Cine-CAG, 0.95; (2) 10-times-enlarged DSCAG images on the CRT versus Cine-CAG, 0.96; and (3) same-size DSCAG images on negative and positive hard copies versus Cine-CAG, 0.97. The semiautomatically measured values of 10-times-enlarged DSCAG images on the CRT and the manually measured values of same-size negative and positive DSCAG hard copies correlated closely with the values measured using Cine-CAG. When the liver was superimposed in the long-axis projection, the diagnostic capabilities of DSCAG and Cine-CAG were also compared. The materials included 10 left coronary arteriograms and 11 right coronary arteriograms. Diagnostically, DSCAG was more useful than Cine-CAG in the long-axis projection. (author)

  18. Content analysis to detect high stress in oral interviews and text documents

    Science.gov (United States)

    Thirumalainambi, Rajkumar (Inventor); Jorgensen, Charles C. (Inventor)

    2012-01-01

    A system of interrogation to estimate whether a subject of interrogation is likely experiencing high stress, emotional volatility and/or internal conflict in the subject's responses to an interviewer's questions. The system applies one or more of four procedures, a first statistical analysis, a second statistical analysis, a third analysis and a heat map analysis, to identify one or more documents containing the subject's responses for which further examination is recommended. Words in the documents are characterized in terms of dimensions representing different classes of emotions and states of mind, in which the subject's responses that manifest high stress, emotional volatility and/or internal conflict are identified. A heat map visually displays the dimensions manifested by the subject's responses in different colors, textures, geometric shapes or other visually distinguishable indicia.

  19. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  20. FEASIBILITY STUDY OF LOW-COST IMAGE-BASED HERITAGE DOCUMENTATION IN NEPAL

    OpenAIRE

    H. K. Dhonju; W. Xiao; V. Sarhosis; J. P. Mills; S. Wilkinson; Z. Wang; L. Thapa; U. S. Panday

    2017-01-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g. earthquakes, flooding, rainfall, etc.), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling of cultural heritage sites, and explores the practicality o...

  1. Analysis of laser and inkjet prints using spectroscopic methods for forensic identification of questioned documents

    OpenAIRE

    Gál, Lukáš; Belovičová, Michaela; Oravec, Michal; Palková, Miroslava; Čeppan, Michal

    2013-01-01

    The spectral properties of laser and inkjet prints in the UV-VIS-NIR and IR regions were studied for the purposes of forensic document analysis. The procedures for measuring and processing the spectra of printed documents, using fibre-optic reflectance spectroscopy in the UV-VIS and NIR regions and FTIR-ATR with diamond/ZnSe and germanium crystals, were optimized. It was found that the shapes of the spectra of various black laser prints and inkjet prints generally differ in the spectral regions...

  2. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2017-05-10

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d' Optica Aplicada i Processament d' Imatge, Departament d' Optica i Optometria Univesitat Politecnica de Catalunya (Spain)

    2011-01-01

    Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique to compensate for non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.
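
    A common way to compensate non-uniform luminosity, and one plausible reading of the enhancement step described above, is to divide the image by a heavily smoothed estimate of its background. This is a generic sketch, not the authors' exact technique; the sigma value is an arbitrary assumption.

        import numpy as np
        from scipy import ndimage

        def compensate_illumination(image, sigma=50):
            """Divide out a heavily smoothed background estimate to even out
            non-uniform luminosity, then rescale to [0, 1]."""
            background = ndimage.gaussian_filter(image.astype(np.float64), sigma)
            corrected = image / (background + 1e-8)
            corrected -= corrected.min()
            return corrected / (corrected.max() + 1e-8)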

  4. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    Science.gov (United States)

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain non-invasively the overall documentation of a crime scene, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are particularly interesting are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using purpose-developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of a scene. At this stage, a simplified approach treating the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some of the limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
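
    The simplified straight-line BPA model mentioned above can be expressed as a least-squares intersection of trajectories. The sketch below is only an illustration of that geometric idea; the stain positions and back-projected directions are synthetic, and real BPA workflows involve considerably more care.

        import numpy as np

        def area_of_origin(points, directions):
            """Least-squares point closest to a set of straight-line
            trajectories (stain position + back-projected direction)."""
            A, b = [], []
            for p, d in zip(points, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(len(p)) - np.outer(d, d)   # projector orthogonal to d
                A.append(P)
                b.append(P @ p)
            x, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
            return x

        pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
        dirs = np.array([[1.0, 1.0, 1.0], [-1.0, 1.0, 1.0]])
        print(area_of_origin(pts, dirs))   # both lines pass through ~[1, 1, 1]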

  5. The Analysis of Heterogeneous Text Documents with the Help of the Computer Program NUD*IST

    Directory of Open Access Journals (Sweden)

    Christine Plaß

    2000-12-01

    Full Text Available On the basis of a current research project we discuss the use of the computer program NUD*IST for the analysis and archiving of qualitative documents. Our project examines the social evaluation of spectacular criminal offenses and we identify, digitize and analyze documents from the entire 20th century. Since public and scientific discourses are examined, the data of the project are extraordinarily heterogeneous: scientific publications, court records, newspaper reports, and administrative documents. We want to show how to transfer general questions into a systematic categorization with the assistance of NUD*IST. Apart from the functions, possibilities and limitations of the application of NUD*IST, concrete work procedures and difficulties encountered are described. URN: urn:nbn:de:0114-fqs0003211

  6. Analysis of Documents Published in Scopus Database on Foreign Language Learning through Mobile Learning: A Content Analysis

    Science.gov (United States)

    Uzunboylu, Huseyin; Genc, Zeynep

    2017-01-01

    The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and content analysis, both qualitative research methods. Through the search conducted on the Scopus database with the key words "mobile learning and foreign language…

  7. Forensic intelligence applied to questioned document analysis: A model and its application against organized crime.

    Science.gov (United States)

    De Alcaraz-Fossoul, Josep; Roberts, Katherine A

    2017-07-01

    The capability of forensic sciences to fight crime, especially against organized criminal groups, becomes relevant in the recent economic downturn and the war on terrorism. In view of these societal challenges, the methods of combating crime should experience critical changes in order to improve the effectiveness and efficiency of the current resources available. It is obvious that authorities have serious difficulties combating criminal groups of transnational nature. These are characterized as well structured organizations with international connections, abundant financial resources and comprised of members with significant and diverse expertise. One common practice among organized criminal groups is the use of forged documents that allow for the commission of illegal cross-border activities. Law enforcement can target these movements to identify counterfeits and establish links between these groups. Information on document falsification can become relevant to generate forensic intelligence and to design new strategies against criminal activities of this nature and magnitude. This article discusses a methodology for improving the development of forensic intelligence in the discipline of questioned document analysis. More specifically, it focuses on document forgeries and falsification types used by criminal groups. It also describes the structure of international criminal organizations that use document counterfeits as means to conduct unlawful activities. The model presented is partially based on practical applications of the system that have resulted in satisfactory outcomes in our laboratory. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  8. Analysis of Standards and Specific Documentation about Equipment of Dimensional Metrology

    Science.gov (United States)

    Martin, M. J.; Flores, I.; Sebastian, M. A.

    2009-11-01

    Currently the certification of quality systems and accreditation of laboratories for metrology and testing are activities of great interest within the framework of advanced production systems. In this context, the availability of standardized documents that are complete and efficient, as well as of specific documents edited by agencies with experience in this field, is especially important for obtaining better results and lower costs. This work seeks to establish the foundations for evaluating a documentation system for Dimensional Metrology equipment. No integrated and complete system exists at the international level, so the Spanish case is analyzed as an example for the general study. In this paper we consider three types of instruments commonly used in the field of Dimensional Metrology (vernier calliper, micrometer calliper and mechanical dial gauge) and analyze the contents of the UNE standards that affect them directly, together with the two collections of documents produced and edited by the Centro Español de Metrología (CEM): "calibration procedures" and "use manuals." Based on the results of this analysis, the metrological content of these documents is discussed and recommendations for their use and improvement are proposed.

  9. Promotion of physical activity in the European region: content analysis of 27 national policy documents

    DEFF Research Database (Denmark)

    Daugbjerg, Signe B; Kahlmeier, Sonja; Racioppi, Francesca

    2009-01-01

    BACKGROUND: Over the past years there has been increasing interest in physical activity promotion and the development of appropriate policy. So far, there has been no comprehensive overview of the activities taking place in Europe in this area of public health policy. METHODS: Using different search methods, 49 national policy documents on physical activity promotion were identified. An analysis grid covering key features was developed for the analysis of the 27 documents published in English. RESULTS: Analysis showed that many general recommendations for policy developments are being followed, for example: general goals were formulated, an implementation plan was included, a timeframe and a responsible body for the implementation was often specified. However, limited evidence for intersectoral collaboration was found. Quantified goals for physical activity were the exception...

  10. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    Multivariate image analysis (MIA), one of the successful chemometric applications, is now used widely in different areas of science and industry. Introduced in the late 80s, it became very popular with hyperspectral imaging, where MIA is one of the most efficient tools for exploratory analysis... and classification. MIA considers all image pixels as objects and their color values (or spectrum in the case of hyperspectral images) as variables. So it gives data matrices with hundreds of thousands of samples in the case of laboratory-scale images, and even more for aerial photos, where the number of pixels could... Searching for and analyzing patterns on these plots and the original image allows interactive analysis, extraction of some hidden information, building a supervised classification model, and much more. In the present work several alternative methods to original principal component analysis (PCA) for building the projection...
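
    The core MIA step, reshaping an image so pixels become rows and channels become columns and then projecting with PCA, can be sketched as follows. The random image stands in for real hyperspectral data; the score-plot histogram is the classical MIA visualization, not this record's alternative methods.

        import numpy as np
        from sklearn.decomposition import PCA

        # Treat every pixel as an object and every channel as a variable.
        rng = np.random.default_rng(0)
        image = rng.random((128, 128, 10))            # e.g. 10 spectral channels
        X = image.reshape(-1, image.shape[-1])        # (n_pixels, n_channels)

        scores = PCA(n_components=2).fit_transform(X)
        # A 2D histogram of the scores is the classical MIA score plot;
        # dense regions correspond to pixel classes in the original image.
        H, xedges, yedges = np.histogram2d(scores[:, 0], scores[:, 1], bins=128)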

  11. From imaging to reimbursement: what the pediatric radiologist needs to know about health care payers, documentation, coding and billing.

    Science.gov (United States)

    Chung, Chul Y; Alson, Mark D; Duszak, Richard; Degnan, Andrew J

    2018-03-19

    Medical coding and billing processes in the United States are complex, cumbersome and poorly understood by radiologists. Despite the direct implications of radiology documentation on reimbursement, trainees and practicing radiologists typically receive limited relevant training. This article summarizes the payer structure including the state-based Children's Health Insurance Programs, discusses the essential processes by which radiologists request and receive reimbursement, details the mechanisms of coding diagnoses using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes and imaging services using Current Procedural Terminology (CPT) and Healthcare Common Procedure Coding System (HCPCS) codes, and explores reimbursement and coding-related issues specific to pediatric radiology. Appropriate documentation, informed by knowledge of coding, billing and reimbursement fundamentals, facilitates appropriate payment for clinically relevant services provided by pediatric radiologists.

  12. SUSHI: an exquisite recipe for fully documented, reproducible and reusable NGS data analysis.

    Science.gov (United States)

    Hatakeyama, Masaomi; Opitz, Lennart; Russo, Giancarlo; Qi, Weihong; Schlapbach, Ralph; Rehrauer, Hubert

    2016-06-02

    Next generation sequencing (NGS) produces massive datasets consisting of billions of reads and up to thousands of samples. Subsequent bioinformatic analysis is typically done with the help of open source tools, where each application performs a single step towards the final result. This situation leaves the bioinformaticians with the tasks to combine the tools, manage the data files and meta-information, document the analysis, and ensure reproducibility. We present SUSHI, an agile data analysis framework that relieves bioinformaticians from the administrative challenges of their data analysis. SUSHI lets users build reproducible data analysis workflows from individual applications and manages the input data, the parameters, meta-information with user-driven semantics, and the job scripts. As distinguishing features, SUSHI provides an expert command line interface as well as a convenient web interface to run bioinformatics tools. SUSHI datasets are self-contained and self-documented on the file system. This makes them fully reproducible and ready to be shared. With the associated meta-information being formatted as plain text tables, the datasets can be readily further analyzed and interpreted outside SUSHI. SUSHI provides an exquisite recipe for analysing NGS data. By following the SUSHI recipe, SUSHI makes data analysis straightforward and takes care of documentation and administration tasks. Thus, the user can fully dedicate his time to the analysis itself. SUSHI is suitable for use by bioinformaticians as well as life science researchers. It is targeted for, but by no means constrained to, NGS data analysis. Our SUSHI instance is in productive use and has served as data analysis interface for more than 1000 data analysis projects. SUSHI source code as well as a demo server are freely available.

  13. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with contents describing them best. In most cases accurate textual descriptions of the ROIs can be found from figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm based on dynamic time warping (DTW) clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). Then a rule-based matching algorithm finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground truth textual ROI data is used.
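
    A minimal implementation of the classic DTW distance underlying the pointer-clustering step might look like the following. The 1-D feature sequences are placeholders for whatever contour descriptors the authors actually extract from recognized pointers.

        import numpy as np

        def dtw_distance(a, b):
            """Classic dynamic-time-warping distance between two 1-D
            feature sequences (e.g. contour descriptors of a pointer)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Pointers with near-zero pairwise DTW distance would fall into the
        # same visual group before caption matching.
        print(dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 3]))   # small
        print(dtw_distance([0, 1, 2, 3], [3, 2, 1, 0]))      # large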

  14. Geometric error analysis for shuttle imaging spectrometer experiment

    Science.gov (United States)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  15. [The hierarchical clustering analysis of hyperspectral image based on probabilistic latent semantic analysis].

    Science.gov (United States)

    Yi, Wen-Bin; Shen, Li; Qi, Yin-Feng; Tang, Hong

    2011-09-01

    The paper introduces Probabilistic Latent Semantic Analysis (PLSA) to image clustering and proposes an effective clustering algorithm for hyperspectral images that uses the semantic information from PLSA. Firstly, the ISODATA algorithm is used to obtain an initial clustering of the hyperspectral image, and the clusters of this initial result are treated as the visual words of PLSA. Secondly, an object-oriented image segmentation algorithm is used to partition the hyperspectral image, and segments with relatively pure pixels are regarded as documents in PLSA. Thirdly, a variety of identification methods for estimating the best number of cluster centers are combined to determine the number of latent semantic topics. Then the conditional distributions of visual words in topics and the mixtures of topics in different documents are estimated using PLSA. Finally, the conditional probabilities of the latent semantic topics are distinguished using a statistical pattern recognition method, the topic type for each visual word in each document is assigned, and the clustering result of the hyperspectral image is thereby achieved. Experimental results show that the clusters of the proposed algorithm are better than those of K-MEANS and ISODATA in terms of object-oriented properties, and that the clustering result is closer to the real spatial distribution of the surface.
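
    The PLSA estimation at the heart of this method can be sketched as a small EM loop over a segment-by-visual-word count matrix. This is textbook PLSA, not the authors' exact implementation, and the initialization and iteration count are arbitrary.

        import numpy as np

        def plsa(counts, n_topics, n_iter=50, seed=0):
            """Minimal PLSA via EM. counts: (n_docs, n_words) matrix of
            visual-word occurrences per segment ('document')."""
            rng = np.random.default_rng(seed)
            n_docs, n_words = counts.shape
            p_z_d = rng.random((n_docs, n_topics))             # P(z|d)
            p_w_z = rng.random((n_topics, n_words))            # P(w|z)
            p_z_d /= p_z_d.sum(1, keepdims=True)
            p_w_z /= p_w_z.sum(1, keepdims=True)
            for _ in range(n_iter):
                # E-step: P(z|d,w) for every document/word pair.
                joint = p_z_d[:, :, None] * p_w_z[None, :, :]  # (d, z, w)
                post = joint / (joint.sum(1, keepdims=True) + 1e-12)
                # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
                expected = counts[:, None, :] * post           # (d, z, w)
                p_w_z = expected.sum(0)
                p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
                p_z_d = expected.sum(2)
                p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
            return p_z_d, p_w_z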

  16. A Document-Driven Method for Certifying Scientific Computing Software for Use in Nuclear Safety Analysis

    Directory of Open Access Journals (Sweden)

    W. Spencer Smith

    2016-04-01

    Full Text Available This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  17. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    International Nuclear Information System (INIS)

    Smith, W. Spencer; Koothoor, Mimitha

    2016-01-01

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification

  18. The medical analysis of child sexual abuse images.

    Science.gov (United States)

    Cooper, Sharon W

    2011-11-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses, methods used in working with this form of contraband, and recommendations that analysts document their findings in a format that allows for verbal descriptions of the images so that the content will be reflected in legal proceedings should there exist an aversion to visual review. Child sexual abuse images are a digital crime scene, and analysis requires a careful approach to assure that all victims may be identified.

  19. In situ analysis of historical documents through a portable system of XRF

    International Nuclear Information System (INIS)

    Ruvalcaba S, J.L.; Gonzalez T, C.

    2005-01-01

    From the analysis of documents and ancient books, the chronology of documents, the use of materials (paper, parchment, inks, pigments) and deterioration, among other aspects, may be determined. Usually it is difficult to bring the object to the laboratory for analysis and it is not possible to sample (even small portions). Due to the importance of document characterization, it is necessary to carry out a diagnostic analysis at the library in order to establish the general nature of the materials (organic or inorganic), the main composition of inks and pigments, and actual and possible deterioration. From this point of view, X-ray fluorescence analysis (XRF) with a portable system may be used for quick non-destructive elemental composition determinations. An XRF system was specially developed at the Physics Institute (UNAM) for these purposes and it may be used out of the laboratory in libraries and museums. In this work, our XRF methodology is described and a study of inks in manuscripts from the 15th and 16th centuries belonging to the National Anthropology and History Library is presented. (Author)

  20. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
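
    The paper's exact DSP algorithms are not reproduced here, but gray-world white balance is one common low-complexity choice for the white-balance stage and illustrates the flavour of such pipelines.

        import numpy as np

        def gray_world_white_balance(rgb):
            """Gray-world assumption: scale each channel so its mean matches
            the overall mean. A generic sketch, not the paper's algorithm."""
            rgb = rgb.astype(np.float64)
            means = rgb.reshape(-1, 3).mean(axis=0)      # per-channel means
            gains = means.mean() / (means + 1e-8)        # channel gain factors
            return np.clip(rgb * gains, 0, 255).astype(np.uint8)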

  1. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics, focusing on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  2. DOCUMENTATION OF HISTORICAL UNDERGROUND OBJECT IN SKORKOV VILLAGE WITH SELECTED MEASURING METHODS, DATA ANALYSIS AND VISUALIZATION

    Directory of Open Access Journals (Sweden)

    A. Dlesk

    2016-06-01

    Full Text Available The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km from Prague. The area is known as a former military camp from the Thirty Years' War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m below the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. A 3D model of the object was then created and combined with cadastral data, an orthophoto, historical maps and a digital surface model produced photogrammetrically using a remotely piloted aircraft system. Measurements were also made with ground-penetrating radar; the data were analyzed and the results compared with the real state of the site. All the data have been combined and visualized in one 3D model. Finally, the advantages and disadvantages of the measuring methods used are discussed. The tested methodology has also been used for documentation of other historical objects in this area. This project was created as part of research at the EuroGV s.r.o. company led by Ing. Karel Vach, CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from the Czech Technical University in Prague and Miloš Gavenda, the renovator.

  3. 3D Imaging for Museum Artefacts: A Portable Test Object for Heritage and Museum Documentation of Small Objects

    Science.gov (United States)

    Hess, M.; Robson, S.

    2012-07-01

    3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines, in either the museum or engineering domain, suited to scientific evaluation, understanding and tendering for 3D colour digital data. This paper firstly explains the rationale for, and requirements of, 3D digital documentation in museums. Secondly it describes the design process, development and use of a new portable test object suited to sensor evaluation and the provision of user acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data processing workflow with specific reference to conservation and curation.

  4. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Science.gov (United States)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings currently in a poor state of conservation, and the provision of metrics to quantify the deformations and damage.

  5. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Directory of Open Access Journals (Sweden)

    D. Abate

    2014-06-01

    Full Text Available The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings currently in a poor state of conservation, and the provision of metrics to quantify the deformations and damage.

  6. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    Science.gov (United States)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they have become a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. As such, the

  7. Global Nursing Issues and Development: Analysis of World Health Organization Documents.

    Science.gov (United States)

    Wong, Frances Kam Yuet; Liu, Huaping; Wang, Hui; Anderson, Debra; Seib, Charrlotte; Molasiotis, Alex

    2015-11-01

    To analyze World Health Organization (WHO) documents to identify global nursing issues and development. Qualitative content analysis. Documents published by the six WHO regions between 2007 and 2012 and with key words related to nurse/midwife or nursing/midwifery were included. Themes, categories, and subcategories were derived. The final coding reached 80% agreement among three independent coders, and the final coding for the discrepant coding was reached by consensus. Thirty-two documents from the regions of Europe (n = 19), the Americas (n = 6), the Western Pacific (n = 4), Africa (n = 1), the Eastern Mediterranean (n = 1), and Southeast Asia (n = 1) were examined. A total of 385 units of analysis dispersed in 31 subcategories under four themes were derived. The four themes derived (number of units of analysis, %) were Management & Leadership (206, 53.5), Practice (75, 19.5), Education (70, 18.2), and Research (34, 8.8). The key nursing issues of concern at the global level are workforce, the impacts of nursing in health care, professional status, and education of nurses. International alliances can help advance nursing, but the visibility of nursing in the WHO needs to be strengthened. Organizational leadership is important in order to optimize the use of nursing competence in practice and inform policy makers regarding the value of nursing to promote people's health. © 2015 Sigma Theta Tau International.

  8. The gigapixel image concept for graphic SEM documentation. Applications in archeological use-wear studies.

    Science.gov (United States)

    Vergès, Josep M; Morales, Juan I

    2014-10-01

    In this paper, we propose a specific procedure to create gigapixel-like images from SEM (scanning electron microscope) micrographs. This methodology allows intensive SEM observations to be made in those disciplines that require large surfaces to be analyzed at different scales once the SEM sessions have been completed (e.g., stone tool use-wear studies). This is also a very useful resource for academic purposes or as a support for collaborative studies, thus reducing the number of live observation sessions and the associated expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
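
    One way to assemble overlapping micrographs into a single large mosaic, assuming OpenCV's scan-mode stitcher is acceptable for planar SEM tiles, is sketched below; the paper's own procedure may differ, and the tile filenames are hypothetical.

        import cv2

        def stitch_micrographs(paths, out_path="gigapixel_mosaic.png"):
            """Stitch overlapping micrographs into one mosaic using OpenCV's
            scan mode (affine model, suited to flat scanned surfaces)."""
            images = [cv2.imread(p) for p in paths]
            stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
            status, mosaic = stitcher.stitch(images)
            if status != cv2.Stitcher_OK:
                raise RuntimeError(f"stitching failed with status {status}")
            cv2.imwrite(out_path, mosaic)
            return mosaic

        # Hypothetical usage with placeholder tile filenames:
        # stitch_micrographs(["tile_00.png", "tile_01.png", "tile_02.png"])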

  9. Data management, documentation and analysis systems in radiation oncology: a multi-institutional survey

    International Nuclear Information System (INIS)

    Kessel, Kerstin A.; Combs, Stephanie E.

    2015-01-01

    Recently, information availability has become more elaborate and widespread, and treatment decisions are based on a multitude of factors. Gathering relevant data, also referred to as Big Data, is therefore critical for reaching the best patient care and enhancing interdisciplinary and clinical research. Combining patient data from all involved systems is essential to prepare unstructured data for analyses. This demands special coordination in data management. Our study aims to characterize current developments in German-speaking hospital departments and practices. We successfully conducted the survey with the members of the Deutsche Gesellschaft für Radioonkologie (DEGRO). A questionnaire was developed consisting of 17 questions related to data management, documentation and clinical trial analyses, reflecting clinical topics such as basic patient information, imaging, and follow-up information, as well as the connection of documentation tools with radiooncological treatment planning machines. A total of 44 institutions completed the online survey (university hospitals n = 17, hospitals n = 13, practices/institutes n = 14). University hospitals, community hospitals and private practices are equally equipped concerning IT infrastructure for clinical use. However, private practices have a low interest in research work. All respondents stated that the biggest obstacles to introducing a documentation system into their unit lie in funding and in support from the central IT departments. Only 27 % (12/44) of the persons responsible are specialists in documentation and data management. Our study gives an understanding of the challenges and solutions we need to be looking at for medical data storage. In the future, inter-departmental cross-links will enable the radiation oncology community to generate large-scale analyses. The online version of this article (doi:10.1186/s13014-015-0543-0) contains supplementary material, which is available to authorized users.

  10. Image analysis of PV module electroluminescence

    Science.gov (United States)

    Lai, T.; Ramirez, C.; Potter, B. G.; Simmons-Potter, K.

    2017-08-01

    Electroluminescence imaging can be used as a non-invasive method to spatially assess performance degradation in photovoltaic (PV) modules. Cells, or regions of cells, that do not produce an infra-red luminescence signal under electrical excitation indicate potential damage in the module. In this study, an Andor iKon-M camera and an image acquisition tool provided by Andor have been utilized to obtain electroluminescent images of a full-sized multicrystalline PV module at regular intervals throughout an accelerated lifecycle test (ALC) performed in a large-scale environmental degradation chamber. Computer aided digital image analysis methods were then used to automate degradation assessment in the modules. Initial preprocessing of the images was designed to remove both background noise and barrel distortion in the image data. Image areas were then mapped so that changes in luminescent intensity across both individual cells and the full module could be identified. Two primary techniques for image analysis were subsequently investigated. In the first case, pixel intensity distributions were evaluated over each individual PV cell and changes to the intensities of the cells over the course of an ALC test were evaluated. In the second approach, intensity line scans of each of the cells in a PV module were performed and variations in line scan data were identified during the module ALC test. In this report, both the image acquisition and preprocessing technique and the contribution of each image analysis approach to an assessment of degradation behavior will be discussed.
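
    The line-scan approach described above can be sketched as follows, assuming the electroluminescence image has already been undistorted and cropped so that cells form a regular grid; the 6 x 10 grid is an arbitrary example, not the module layout used in the study.

        import numpy as np

        def cell_line_scans(el_image, grid=(6, 10)):
            """Mean intensity line scan across each cell of a module image
            laid out as a regular (rows, cols) grid."""
            rows, cols = grid
            h, w = el_image.shape
            ch, cw = h // rows, w // cols
            scans = np.empty((rows, cols, cw))
            for r in range(rows):
                for c in range(cols):
                    cell = el_image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                    scans[r, c] = cell.mean(axis=0)   # average over cell height
            return scans   # compare scans across successive ALC stages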

  11. [Photography as analysis document, body and medicine: theory, method and criticism--the experience of Museo Nacional de Medicina Enrique Laval].

    Science.gov (United States)

    Robinson, César Leyton; Caballero, Andrés Díaz

    2007-01-01

    This article is an experimental methodological reflection on the use of medical images as documents for constructing the history of medicine. The method is based on guidelines or analysis topics that cover different ways of viewing documents, from aesthetic, technical, social and political theories to historical and medical thinking. Some exercises are also included that enhance the proposal for the reader: rediscovering the worlds in society that harbor these medical photographic archives so as to obtain a new theoretical approach to the construction of the history of medical science.

  12. Analysis of multi-dimensional confocal images

    International Nuclear Information System (INIS)

    Samarabandu, J.K.; Acharya, R.; Edirisinghe, C.D.; Cheng, P.C.; Lin, T.H.

    1991-01-01

    In this paper, a confocal image understanding system is developed that uses the blackboard model of problem solving to achieve computerized identification and characterization of confocal fluorescent images (serial optical sections). The system is capable of identifying a large percentage of structures (e.g. cell nuclei) in the presence of background noise and non-specific staining of cellular structures. The blackboard architecture provides a convenient framework within which a combination of image processing techniques can be applied to successively refine the input image. The system is organized to find the surfaces of highly visible structures first, using simple image processing techniques, and then to adjust and fill in the missing areas of these object surfaces using external knowledge and a number of more complex image processing techniques when necessary. As a result, the image analysis system is capable of automatically obtaining morphometric parameters such as the surface area, volume and position of structures of interest.

  13. Analysis and classification of oncology activities on the way to workflow based single source documentation in clinical information systems.

    Science.gov (United States)

    Wagner, Stefan; Beckmann, Matthias W; Wullich, Bernd; Seggewies, Christof; Ries, Markus; Bürkle, Thomas; Prokosch, Hans-Ulrich

    2015-12-22

    Today, cancer documentation is still a tedious task involving many different information systems, even within a single institution, and it is rarely supported by appropriate documentation workflows. In a comprehensive 14-step analysis we compiled diagnostic and therapeutic pathways for 13 cancer entities using a mixed approach of document analysis, workflow analysis, expert interviews, workflow modelling and feedback loops. These pathways were stepwise classified and categorized to create a final set of grouped pathways and workflows, including electronic documentation forms. A total of 73 workflows for the 13 entities, based on 82 paper documentation forms in addition to computer-based documentation systems, were compiled in a 724-page document comprising 130 figures, 94 tables and 23 tumour classifications as well as 12 follow-up tables. Stepwise classification made it possible to derive grouped diagnostic and therapeutic pathways for three major classes: solid entities with surgical therapy, solid entities with surgical and additional therapeutic activities, and non-solid entities. For these classes it was possible to deduce common documentation workflows to support workflow-guided single-source documentation. Clinical documentation activities within a Comprehensive Cancer Center can likely be realized in a set of three documentation workflows with conditional branching in a modern workflow-supporting clinical information system.

  14. rBEFdata: documenting data exchange and analysis for a collaborative data management platform.

    Science.gov (United States)

    Pfaff, Claas-Thido; König-Ries, Birgitta; Lang, Anne C; Ratcliffe, Sophia; Wirth, Christian; Man, Xingxing; Nadrowski, Karin

    2015-07-01

    We are witnessing a growing gap separating primary research data from derived data products presented as knowledge in publications. Although journals today more often require the underlying data products used to derive the results as a prerequisite for publication, the important link to the primary data is lost. However, documenting the postprocessing steps that link the primary data with derived data products has the potential to increase the accuracy and reproducibility of scientific findings significantly. Here, we introduce the rBEFdata R package as a companion to the collaborative data management platform BEFdata. The R package provides programmatic access to features of the platform. It allows searching for data and integrates the search with external thesauri to improve data discovery. It allows downloading and importing data and metadata into R for analysis. A batched download is available as well, which works along the paper proposal mechanism implemented by BEFdata. This feature of BEFdata allows grouping of primary data and metadata and streamlines discussions and collaborations revolving around a certain research idea. The upload functionality of the R package, in combination with the paper proposal mechanism of the portal, allows derived data products and scripts to be attached directly from R, thus addressing major aspects of documenting data postprocessing. We present the core features of the rBEFdata R package along an ecological analysis example and further discuss the potential of postprocessing documentation for linking primary data with derived data products and knowledge.

  15. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more of the nature of plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analysis methods based on geometric optics and physical optics are also shown in simulations. (paper)

  16. Guidance document for preparing water sampling and analysis plans for UMTRA Project sites. Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    A water sampling and analysis plan (WSAP) is prepared for each Uranium Mill Tailings Remedial Action (UMTRA) Project site to provide the rationale for routine ground water sampling at disposal sites and former processing sites. The WSAP identifies and justifies the sampling locations, analytical parameters, detection limits, and sampling frequency for the routine ground water monitoring stations at each site. This guidance document has been prepared by the Technical Assistance Contractor (TAC) for the US Department of Energy (DOE). Its purpose is to provide a consistent technical approach for sampling and monitoring activities performed under the WSAP and to provide a consistent format for the WSAP documents. It is designed for use by the TAC in preparing WSAPs and by the DOE, US Nuclear Regulatory Commission, state and tribal agencies, other regulatory agencies, and the public in evaluating the content of WSAPs.

  17. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book "Image and Video Processing", second edition, by Thomas Moeslund. The aim of the ...

  18. Understanding Factors Contributing to Inappropriate Critical Care: A Mixed-Methods Analysis of Medical Record Documentation.

    Science.gov (United States)

    Neville, Thanh H; Tarn, Derjung M; Yamamoto, Myrtle; Garber, Bryan J; Wenger, Neil S

    2017-11-01

    Factors leading to inappropriate critical care, that is, treatment that should not be provided because it does not offer the patient meaningful benefit, have not been rigorously characterized. We explored medical record documentation about patients who received inappropriate critical care and those who received appropriate critical care to examine factors associated with the provision of inappropriate treatment. Medical records were abstracted from 123 patients who were assessed as receiving inappropriate treatment and 66 patients who were assessed as receiving appropriate treatment but died within six months of intensive care unit (ICU) admission. We used mixed methods, combining qualitative analysis of medical record documentation with multivariable analysis, to examine the relationship between patient and communication factors and the receipt of inappropriate treatment, and we present these within a conceptual model. The setting was one academic health system. Medical records revealed 21 themes pertaining to prognosis and factors influencing treatment aggressiveness. Four themes were independently associated with patients receiving inappropriate treatment according to physicians. When decision making was not guided by physicians (odds ratio [OR] 3.76, confidence interval [95% CI] 1.21-11.70) or was delayed by patient/family (OR 4.52, 95% CI 1.69-12.04), patients were more likely to receive inappropriate treatment. Documented communication about goals of care (OR 0.29, 95% CI 0.10-0.84) and patient's preferences driving decision making (OR 0.02, 95% CI 0.00-0.27) were associated with lower odds of receiving inappropriate treatment. Medical record documentation suggests that inappropriate treatment occurs in the setting of communication and decision-making patterns that may be amenable to intervention.

  19. A Robust Actin Filaments Image Analysis Framework.

    Directory of Open Access Journals (Sweden)

    Mitchel Alioscha-Perez

    2016-08-01

    Full Text Available The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part; (ii) on the 'cartoon' image, we apply a multi-scale line detector, coupled with (iii) a quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in
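
    A rough approximation of the first two steps of this pipeline, substituting total-variation denoising for the cartoon/texture split and a standard multi-scale ridge (tubeness) filter for the authors' line detector, could look like this; the merging step and the authors' exact parameters are not reproduced.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle
        from skimage.filters import sato

        def extract_filament_map(image, weight=0.1):
            """Cartoon/texture split via total-variation denoising, then a
            multi-scale ridge filter on the cartoon part."""
            cartoon = denoise_tv_chambolle(image, weight=weight)   # structures
            ridges = sato(cartoon, sigmas=range(1, 4), black_ridges=False)
            return ridges > ridges.mean() + 2 * ridges.std()       # filament mask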

  20. Machine learning applications in cell image analysis.

    Science.gov (United States)

    Kan, Andrey

    2017-07-01

    Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with typical tasks of segmenting and tracking individual cells and modelling reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook on ML, followed by basic concepts and definitions required for understanding the examples. The article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions.

  1. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns to measure the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle patterns. Analysis of fringe patterns for engineering applications is limited to qualitative measurement; further analysis leading to quantitative data therefore involves a series of image processing mechanisms. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure, locating fatigue zones, etc., all of which require image processing. In order to perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with a kernel size ranging from 2 x 2 to 7 x 7 pixels and also applying an equalizer in the image processing. (Author)
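
    A minimal sketch of the noise-minimisation and fringe-maximisation steps described above, using a median filter in the stated kernel-size range and a simple percentile contrast stretch standing in for the equalizer; the specific kernel and percentiles are assumptions.

        import numpy as np
        from scipy import ndimage

        def enhance_fringes(fringe, kernel=5):
            """Median-filter speckle noise (kernel sizes of 2x2 to 7x7 are
            typical), then stretch contrast to emphasise the fringe pattern."""
            smoothed = ndimage.median_filter(fringe.astype(np.float64),
                                             size=kernel)
            lo, hi = np.percentile(smoothed, (2, 98))      # robust limits
            return np.clip((smoothed - lo) / (hi - lo + 1e-8), 0.0, 1.0)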

  2. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 49, č. 2 (2011), s. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords: face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  3. Structural analysis in medical imaging

    International Nuclear Information System (INIS)

    Dellepiane, S.; Serpico, S.B.; Venzano, L.; Vernazza, G.

    1987-01-01

    The conventional techniques in Pattern Recognition (PR) have been greatly improved by the introduction of Artificial Intelligence (AI) approaches, in particular for knowledge representation, inference mechanism and control structure. The purpose of this paper is to describe an image understanding system, based on the integrated approach (AI - PR), developed in the author's Department to interpret Nuclear Magnetic Resonance (NMR) images. The system is characterized by a heterarchical control structure and a blackboard model for the global data-base. The major aspects of the system are pointed out, with particular reference to segmentation, knowledge representation and error recovery (backtracking). The eye slices obtained in the case of two patients have been analyzed and the related results are discussed

  4. Corporate social responsibility and access to policy élites: an analysis of tobacco industry documents.

    Science.gov (United States)

    Fooks, Gary J; Gilmore, Anna B; Smith, Katherine E; Collin, Jeff; Holden, Chris; Lee, Kelley

    2011-08-01

    Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised. Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents, released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific; depending on factors such as policy élites' understanding of the credibility of companies as a reliable source of information. The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policy makers to be made more aware of what companies hope to achieve through CSR.

  5. Corporate social responsibility and access to policy élites: an analysis of tobacco industry documents.

    Directory of Open Access Journals (Sweden)

    Gary J Fooks

    2011-08-01

    Full Text Available Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised. Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents, released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific, depending on factors such as policy élites' understanding of the credibility of companies as a reliable source of information. The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policymakers to be made more aware of what companies hope to achieve through CSR.

  6. Corporate Social Responsibility and Access to Policy Élites: An Analysis of Tobacco Industry Documents

    Science.gov (United States)

    Fooks, Gary J.; Gilmore, Anna B.; Smith, Katherine E.; Collin, Jeff; Holden, Chris; Lee, Kelley

    2011-01-01

    Background Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised. Methods and Findings Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents, released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific, depending on factors such as policy élites' understanding of the credibility of companies as a reliable source of information. Conclusions The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policymakers to be made more aware of what companies hope to achieve through CSR. Please see later in the article for the Editors' Summary.

  7. Term Analysis – Improving the Quality of Learning and Application Documents in Engineering Design

    Directory of Open Access Journals (Sweden)

    S. Weiss

    2006-01-01

    Full Text Available Conceptual homogeneity is one determinant of the quality of text documents. A concept remains the same even if the words used (termini) change [1, 2]. In other words, termini can vary while the concept retains the same meaning. Human beings are able to handle concepts and termini because of their semantic network, which is able to connect termini to the actual context and thus identify the adequate meaning of the termini. Problems can arise when humans have to learn new content and correspondingly new concepts. Since the content is basically imparted by text via particular termini, it is a challenge to establish the right concept from the text with the termini. A term might be known, but have a different meaning [3, 4]. Therefore, it is very important to build up the correct understanding of concepts within a text. This is only possible when concepts are explained by the right termini, within an adequate context, and above all, homogeneously. So, when setting up or using text documents for teaching or application, it is essential to provide concept homogeneity. Understandably, the quality of documents is, ceteris paribus, reciprocally proportional to variations of termini. Therefore, an analysis of variations of termini could form a basis for specific improvement of conceptual homogeneity. Consequently, an exposition of variations of termini as control and improvement parameters is carried out in this investigation. This paper describes the functionality and the benefits of a tool called TermAnalysis. TermAnalysis is a software tool developed

  8. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. The proposed methods can also be applied to packed malware samples by using the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons needed for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
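
    As a rough illustration of the visualization idea, the Python sketch below hashes each opcode to a pixel position and an RGB value, then scores two image matrices by the overlap of identically colored pixels. The hashing scheme, matrix size and similarity measure are illustrative assumptions, not the authors' exact method.

      import hashlib
      import numpy as np

      def opcode_matrix(opcodes, size=64):
          # Paint each opcode at a hash-derived (x, y) with a hash-derived color.
          img = np.zeros((size, size, 3), dtype=np.uint8)
          for op in opcodes:
              h = hashlib.md5(op.encode()).digest()
              x, y = h[0] % size, h[1] % size
              img[x, y] = list(h[2:5])          # three hash bytes as RGB
          return img

      def similarity(a, b):
          # Fraction of painted pixels that match exactly in both matrices.
          painted = a.any(axis=2) | b.any(axis=2)
          same = (a == b).all(axis=2) & a.any(axis=2)
          return same.sum() / max(painted.sum(), 1)

      m1 = opcode_matrix(["mov", "push", "call", "ret"])
      m2 = opcode_matrix(["mov", "push", "jmp", "ret"])
      print(round(similarity(m1, m2), 2))       # shared opcodes raise the score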

  9. Application programming interface document for the modernized Transient Reactor Analysis Code (TRAC-M)

    International Nuclear Information System (INIS)

    Mahaffy, J.; Boyack, B.E.; Steinke, R.G.

    1998-05-01

    The objective of this document is to ease the task of adding new system components to the Transient Reactor Analysis Code (TRAC) or altering old ones. Sufficient information is provided to permit replacement or modification of physical models and correlations. Within TRAC, information is passed at two levels. At the upper level, information is passed by system-wide and component-specific data modules at and above the level of component subroutines. At the lower level, information is passed through a combination of module-based data structures and argument lists. This document describes the basic mechanics involved in the flow of information within the code. The discussion of interfaces in the body of this document has been kept to a general level to highlight key considerations. The appendices cover instructions for obtaining a detailed list of variables used to communicate in each subprogram, definitions and locations of key variables, and proposed improvements to intercomponent interfaces that are not available in the first level of code modernization

  10. Internationalization Impact on PhD Training Policy in Russia: Insights from The Comparative Document Analysis

    Directory of Open Access Journals (Sweden)

    Oksana Chigisheva

    2017-09-01

    Full Text Available The relevance of the study is due to the need for an objective picture of the transformation of Russian third-level tertiary education driven by internationalization issues and global trends in education. The article provides an analytical comparative review of the official documents related to the main phases of education reform in Russia and focuses on the system of PhD training, which has undergone significant reorganization in recent years. A series of alterations introduced into the theory and practice of postgraduate education in Russia are traced in regulatory documents and interpreted in terms of the growing demand for internationalization. Possible implications for the further development of research human potential in Russia are discussed. The method of comparative document analysis offers detailed insight into the subject. The findings of the study contribute to the understanding of current challenges facing the system of doctoral studies in Russia and lead to certain conclusions on the transformation of educational policy in relation to PhD training under the influence of the internationalization agenda.

  11. Obtaining informed consent for genomics research in Africa: analysis of H3Africa consent documents.

    Science.gov (United States)

    Munung, Nchangwi Syntia; Marshall, Patricia; Campbell, Megan; Littler, Katherine; Masiye, Francis; Ouwe-Missi-Oukem-Boyer, Odile; Seeley, Janet; Stein, D J; Tindana, Paulina; de Vries, Jantina

    2016-02-01

    The rise in genomic and biobanking research worldwide has led to the development of different informed consent models for use in such research. This study analyses consent documents used by investigators in the H3Africa (Human Heredity and Health in Africa) Consortium. A qualitative method for text analysis was used to analyse consent documents used in the collection of samples and data in H3Africa projects. Thematic domains included type of consent model, explanations of genetics/genomics, data sharing and feedback of test results. Informed consent documents for 13 of the 19 H3Africa projects were analysed. Seven projects used broad consent, five projects used tiered consent and one used specific consent. Genetics was mostly explained in terms of inherited characteristics, heredity and health, genes and disease causation, or disease susceptibility. Only one project made provisions for the feedback of individual genetic results. H3Africa research makes use of three consent models: specific, tiered and broad consent. We outlined different strategies used by H3Africa investigators to explain concepts in genomics to potential research participants. To further ensure that the decision to participate in genomic research is informed and meaningful, we recommend that innovative approaches to the informed consent process be developed, preferably in consultation with research participants, research ethics committees and researchers in Africa.

  12. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, compression of dynamic range, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied. Also some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied for face and character recognition. (author)
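
    For concreteness, here is a minimal NumPy sketch of two of the enhancement techniques listed above, negative imaging and percentile-based contrast stretching; the original 'IMAGE GALLERY' tool was written in Visual Basic, so this is a re-expression under our own assumptions, not the project's code.

      import numpy as np

      def negative(img):
          # Negative imaging: invert an 8-bit grayscale image.
          return 255 - img

      def contrast_stretch(img, low=2, high=98):
          # Linearly map the low..high percentile range onto [0, 255].
          lo, hi = np.percentile(img, (low, high))
          out = (img.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
          return np.clip(out, 0, 255).astype(np.uint8)

      img = np.random.randint(60, 180, (128, 128), dtype=np.uint8)  # low-contrast test image
      print(negative(img).mean(), contrast_stretch(img).std() > img.std())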

  13. Topological image texture analysis for quality assessment

    Science.gov (United States)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy and helps detect image tampering for forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the Local Binary Pattern (LBP) as a texture feature descriptor. For any image we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g. the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images that are affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
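
    The pipeline can be approximated in a few lines: compute uniform LBP codes, binarize the pixels carrying selected codes, and count connected components as a proxy for the 0-dimensional homology invariant mentioned above. The bin selection and parameters below are assumptions for illustration, not the authors' settings (requires scikit-image and SciPy).

      import numpy as np
      from scipy.ndimage import label
      from skimage.feature import local_binary_pattern

      def components_per_bin(gray, bins=(0, 1, 2), P=8, R=1.0):
          # Uniform LBP codes, then connected-component counts per selected bin.
          lbp = local_binary_pattern(gray, P, R, method="uniform")
          counts = {}
          for b in bins:
              mask = (lbp == b)          # pixels carrying this LBP code
              _, n = label(mask)         # number of connected components
              counts[b] = n
          return counts

      gray = np.random.rand(64, 64)      # stand-in for a face image
      print(components_per_bin(gray))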

  14. New insights in forensic chemistry: NIR/Chemometrics analysis of toners for questioned documents examination.

    Science.gov (United States)

    Materazzi, Stefano; Risoluti, Roberta; Pinci, Sara; Saverio Romolo, Francesco

    2017-11-01

    Near-infrared spectroscopy (NIRs) coupled to chemometrics was investigated for the first time as a new tool for the analysis of black toners, to evaluate its application in forensic cases. Ten black toners from four manufacturers were included in this study and the acquired spectra were compared in order to differentiate the toners. Multivariate statistical analysis based on Principal Component Analysis (PCA) was used to develop a model for the comparison of toners in questioned documents. Results demonstrated the capability of the NIR/chemometrics approach to correctly identify toners when printed on different papers, and showed it to be unaffected by the printing process. This study has shown that NIRs can be considered a useful, fast, non-destructive tool for the characterisation of toners in forensic casework.
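
    A hedged sketch of the chemometric step follows: PCA score coordinates computed from NIR-like spectra, on which grouping by manufacturer would be inspected visually. The data here is synthetic; the study used measured spectra of ten real toners.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      base = rng.normal(size=(4, 500))       # 4 "manufacturers", 500 wavelengths
      spectra = np.repeat(base, 10, axis=0) + 0.05 * rng.normal(size=(40, 500))

      scores = PCA(n_components=2).fit_transform(spectra)
      print(scores.shape)                    # (40, 2) score-plot coordinates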

  15. Analysis of the recent international documents toward inclusive education of children with disabilities

    Directory of Open Access Journals (Sweden)

    Tabatabaie Minou

    2012-01-01

    Full Text Available Analysis of various international documents clearly suggests that international documents have provided a significant motivation to efforts undertaken at the national level regarding the education of children with disabilities. The UN Convention on the Rights of the Child imposed a requirement for radical changes to traditional approaches to provision made for children with disabilities. One year later, the 1990 World Conference on Education for All focused attention on a much broader range of children with disabilities who may be excluded from or marginalized within education systems. Its development has involved a series of stages during which education systems have explored different ways of responding to children with disabilities and others who experience difficulties in learning. This conference declared that inclusive education is regarded as the only means to achieve the goal of "Education for All". This trend was reaffirmed by subsequent international documents. And finally, according to Article 24 of the Convention on the Rights of Persons with Disabilities, disabled persons should be able to access general tertiary education, vocational training, adult education and lifelong learning without discrimination and on an equal basis with others through reasonable accommodation of their disabilities. All of these documents played an important role in bringing attention to children with disabilities, especially to education as a vehicle for integration and empowerment. This research examines the new international trends occurring regarding the education of children with disabilities and finally concludes that the new trends show a movement from special education to inclusive education, moving from seclusion to inclusion, and provide that solutions must focus on prevention, cure and steps to make these children as normal as possible. In this regard, States must ensure the full realization of all human rights and fundamental freedoms for all disabled people, on an

  16. ITER final design report, cost review and safety analysis (FDR) and relevant documents

    International Nuclear Information System (INIS)

    1999-01-01

    This volume contains the fourth major milestone report and documents associated with its acceptance, review and approval. This ITER Final Design Report, Cost Review and Safety Analysis was presented to the ITER Council at its 13th meeting in February 1998 and was approved at its extraordinary meeting on 25 June 1998. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics and schedule and cost estimates are given

  17. Detecting the Spur Marks of Ink-Jet Printed Documents Using a Multiband Scanner in NIR Mode and Image Restoration

    Science.gov (United States)

    Furukawa, Takeshi

    Ink-jet printers are frequently used in crimes such as counterfeiting bank notes, driving licenses, and identification cards. Police investigators have asked us to identify the makers or brands of ink-jet printers from counterfeits. To meet such demands, document examiners classify ink-jet printers according to spur marks, which are made by the spur gears located in front of the print heads for paper feed. However, spur marks are extremely faint, so it is difficult to detect them. In this study, we propose a new method for detecting spur marks using a multiband scanner in near infrared (NIR) mode and estimation of the point spread function (PSF). To estimate the PSF we used the cepstrum, which is the inverse Fourier transform of the logarithm of the spectrum. The proposed method provided clear images of the spur marks.
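
    Since the abstract defines the cepstrum explicitly, a direct NumPy rendering is possible; the random patch data and the use of the power spectrum (rather than the amplitude spectrum) are our assumptions.

      import numpy as np

      def cepstrum_2d(patch):
          # Inverse Fourier transform of the log power spectrum.
          power = np.abs(np.fft.fft2(patch)) ** 2
          return np.real(np.fft.ifft2(np.log(power + 1e-12)))

      patch = np.random.rand(64, 64)   # stand-in for a scanned NIR image patch
      c = cepstrum_2d(patch)
      print(c.shape)                   # peaks off the origin hint at periodic blur structure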

  18. High-speed cross-sectional imaging of valuable documents using common-path swept-source optical coherence tomography.

    Science.gov (United States)

    Fujiwara, Kazuo; Matoba, Osamu

    2011-12-01

    A common-path swept-source optical coherence tomography (SS-OCT) is a promising scheme for implementing a high-speed and stable OCT system. We investigate the capability of a common-path SS-OCT system to perform cross-sectional imaging of valuable documents translated at high speed, in order to check their security features. The influence of transport speeds of up to 2000 mm/s on the depth resolution and the signal intensity is experimentally evaluated using a SS-OCT system equipped with a swept source at a center wavelength of 1335 nm and with a sweep repetition rate of 50 kHz. The degradation of the measured signal is in good agreement with theory.

  19. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  20. Adversarial Stain Transfer for Histopathology Image Analysis.

    Science.gov (United States)

    Bentaieb, Aicha; Hamarneh, Ghassan

    2018-03-01

    It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result of not only a variety of factors involved in the preparation of tissue slides but also in the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns data set-specific staining properties and image-specific color transformations as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three data sets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.

  1. From Documentation Images to Restauration Support Tools: a Path Following the Neptune Fountain in Bologna Design Process

    Science.gov (United States)

    Apollonio, F. I.; Ballabeni, M.; Bertacchi, S.; Fallavollita, F.; Foschi, R.; Gaiani, M.

    2017-05-01

    The sixteenth-century Fountain of Neptune is one of Bologna's most renowned landmarks. During the recent restoration of the monumental sculpture group, which consists of precious marbles and highly refined bronzes with water jets, a photographic campaign was carried out exclusively to document the current state of preservation of the complex. Nevertheless, the high-quality imagery was put to a different use, namely to create a 3D digital model accurate in shape and color by means of automated photogrammetric techniques and a robust customized pipeline. This 3D model was used as a basic tool to support many different activities of the restoration site. The paper describes the 3D model construction technique used and the most important applications in which it served as a support tool for restoration: (i) reliable documentation of the actual state; (ii) surface cleaning analysis; (iii) new water system and jets; (iv) new lighting design simulation; (v) support for preliminary analysis and design studies related to hardly accessible areas; (vi) structural analysis; (vii) base for filling gaps or missing elements through 3D printing; (viii) high-quality visualization and rendering; and (ix) support for data modelling and semantic-based diagrams.

  2. FROM DOCUMENTATION IMAGES TO RESTAURATION SUPPORT TOOLS: A PATH FOLLOWING THE NEPTUNE FOUNTAIN IN BOLOGNA DESIGN PROCESS

    Directory of Open Access Journals (Sweden)

    F. I. Apollonio

    2017-05-01

    Full Text Available The sixteenth-century Fountain of Neptune is one of Bologna's most renowned landmarks. During the recent restoration of the monumental sculpture group, which consists of precious marbles and highly refined bronzes with water jets, a photographic campaign was carried out exclusively to document the current state of preservation of the complex. Nevertheless, the high-quality imagery was put to a different use, namely to create a 3D digital model accurate in shape and color by means of automated photogrammetric techniques and a robust customized pipeline. This 3D model was used as a basic tool to support many different activities of the restoration site. The paper describes the 3D model construction technique used and the most important applications in which it served as a support tool for restoration: (i) reliable documentation of the actual state; (ii) surface cleaning analysis; (iii) new water system and jets; (iv) new lighting design simulation; (v) support for preliminary analysis and design studies related to hardly accessible areas; (vi) structural analysis; (vii) base for filling gaps or missing elements through 3D printing; (viii) high-quality visualization and rendering; and (ix) support for data modelling and semantic-based diagrams.

  3. Multispectral dual isotope and NMR image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vannier, M.W.; Beihn, R.M.; Butterfield, R.L.; De Land, F.H.

    1985-05-01

    Dual isotope scintigraphy and nuclear magnetic resonance imaging produce image data that is intrinsically multispectral. That is, multiple images of the same anatomic region are generated with different gray scale distributions and morphologic content that is largely redundant. Image processing technology, originally developed by NASA for satellite imaging, is available for multispectral analysis. These methods have been applied to provide tissue characterization. Tissue-specific information encoded in the gray scale data from dual isotope and NMR studies may be extracted using multispectral pattern recognition methods. The authors used table lookup minimum distance, maximum likelihood and cluster analysis techniques with data sets from Ga-67 / Tc-99m, I-131 labeled antibodies / Tc-99m, Tc-99m perfusion / Xe-133 ventilation, and NMR studies. The results show that tissue-characteristic signatures exist in dual isotope and NMR imaging, and that these spectral signatures are identifiable using multispectral image analysis, providing tissue classification maps with scatter diagrams that facilitate interpretation and assist in elucidating subtle changes.
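
    Of the techniques listed, the minimum-distance rule is the simplest to state: assign each multispectral pixel to the nearest class-mean signature. The sketch below uses made-up two-band "tissue" signatures purely as placeholders.

      import numpy as np

      def min_distance_classify(pixels, class_means):
          # pixels: (N, bands); class_means: (K, bands) -> (N,) class labels.
          d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
          return d.argmin(axis=1)

      means = np.array([[10.0, 50.0], [40.0, 20.0]])   # two hypothetical tissue signatures
      px = np.array([[12.0, 48.0], [38.0, 25.0], [11.0, 52.0]])
      print(min_distance_classify(px, means))          # -> [0 1 0]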

  4. Data development technical support document for the aircraft crash risk analysis methodology (ACRAM) standard

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, C.Y.; Glaser, R.E.; Mensing, R.W.; Lin, T.; Haley, T.A.; Barto, A.B.; Stutzke, M.A.

    1996-08-01

    The Aircraft Crash Risk Analysis Methodology (ACRAM) Panel has been formed by the US Department of Energy Office of Defense Programs (DOE/DP) for the purpose of developing a standard methodology for determining the risk from aircraft crashes onto DOE ground facilities. In order to accomplish this goal, the ACRAM panel has been divided into four teams: the data development team, the model evaluation team, the structural analysis team, and the consequence team. Each team, consisting of at least one member of the ACRAM Panel plus additional DOE and DOE contractor personnel, specializes in the development of the methodology assigned to that team. This report documents the work performed by the data development team and provides the technical basis for the data used by the ACRAM Standard for determining the aircraft crash frequency. This report should be used to provide the generic data needed to calculate the aircraft crash frequency into the facility under consideration as part of the process for determining the aircraft crash risk to ground facilities as given by the DOE Standard Aircraft Crash Risk Assessment Methodology (ACRAM). Some broad guidance is presented on how to obtain the needed site-specific and facility-specific data, but this data is not provided by this document.

  5. Final safety analysis report for the Galileo Mission: Volume 1, Reference design document

    Energy Technology Data Exchange (ETDEWEB)

    1988-05-01

    The Galileo mission uses nuclear power sources called Radioisotope Thermoelectric Generators (RTGs) to provide the spacecraft's primary electrical power. Because these generators contain nuclear material, a Safety Analysis Report (SAR) is required. A preliminary SAR and an updated SAR were previously issued that provided an evolving status report on the safety analysis. As a result of the Challenger accident, the launch dates for both Galileo and Ulysses missions were later rescheduled for November 1989 and October 1990, respectively. The decision was made by agreement between the DOE and the NASA to have a revised safety evaluation and report (FSAR) prepared on the basis of these revised vehicle accidents and environments. The results of this latest revised safety evaluation are presented in this document (Galileo FSAR). Volume I, this document, provides the background design information required to understand the analyses presented in Volumes II and III. It contains descriptions of the RTGs, the Galileo spacecraft, the Space Shuttle, the Inertial Upper Stage (IUS), the trajectory and flight characteristics including flight contingency modes, and the launch site. There are two appendices in Volume I which provide detailed material properties for the RTG.

  6. Data development technical support document for the aircraft crash risk analysis methodology (ACRAM) standard

    International Nuclear Information System (INIS)

    Kimura, C.Y.; Glaser, R.E.; Mensing, R.W.; Lin, T.; Haley, T.A.; Barto, A.B.; Stutzke, M.A.

    1996-01-01

    The Aircraft Crash Risk Analysis Methodology (ACRAM) Panel has been formed by the US Department of Energy Office of Defense Programs (DOE/DP) for the purpose of developing a standard methodology for determining the risk from aircraft crashes onto DOE ground facilities. In order to accomplish this goal, the ACRAM panel has been divided into four teams: the data development team, the model evaluation team, the structural analysis team, and the consequence team. Each team, consisting of at least one member of the ACRAM Panel plus additional DOE and DOE contractor personnel, specializes in the development of the methodology assigned to that team. This report documents the work performed by the data development team and provides the technical basis for the data used by the ACRAM Standard for determining the aircraft crash frequency. This report should be used to provide the generic data needed to calculate the aircraft crash frequency into the facility under consideration as part of the process for determining the aircraft crash risk to ground facilities as given by the DOE Standard Aircraft Crash Risk Assessment Methodology (ACRAM). Some broad guidance is presented on how to obtain the needed site-specific and facility-specific data, but this data is not provided by this document.

  7. Deep Learning in Medical Image Analysis

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2016-01-01

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially in the form of deep learning, have made a big leap in helping to identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features mostly designed on the basis of domain-specific knowledge, lies at the core of these advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement. PMID:28301734

  8. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work to date on traffic analysis and control, and presents an approach to regulating traffic using image processing and MATLAB. The concept compares captured images with reference images of the street in order to determine a traffic level percentage and set the timing of the traffic signal accordingly, thereby reducing stoppage at traffic lights. It proposes to solve real-life scenarios in the streets by enriching traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB and used as the basis for calculating the traffic on the roads. The results are computed in order to adjust the traffic light timings on a particular street, in line with other similar proposals but with the added value of solving a real, large instance.
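
    One plausible reading of the comparison step is simple frame differencing against an empty-street reference; the threshold and the occupancy measure below are illustrative choices, not the paper's calibration.

      import numpy as np

      def traffic_percentage(frame, empty_ref, thresh=30):
          # Percentage of pixels that differ noticeably from the empty street.
          diff = np.abs(frame.astype(int) - empty_ref.astype(int))
          return 100.0 * (diff > thresh).mean()

      ref = np.full((120, 160), 100, dtype=np.uint8)   # empty street
      frame = ref.copy()
      frame[40:80, 50:110] = 180                       # a "vehicle"
      print(f"{traffic_percentage(frame, ref):.1f}% of pixels occupied")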

  9. Digital image analysis of NDT radiographs

    International Nuclear Information System (INIS)

    Graeme, W.A. Jr.; Eizember, A.C.; Douglass, J.

    1989-01-01

    Prior to the introduction of Charge Coupled Device (CCD) detectors the majority of image analysis performed on NDT radiographic images was done visually in the analog domain. While some film digitization was being performed, the process was often unable to capture all the usable information on the radiograph or was too time consuming. CCD technology now provides a method to digitize radiographic film images without losing the useful information captured in the original radiograph in a timely process. Incorporating that technology into a complete digital radiographic workstation allows analog radiographic information to be processed, providing additional information to the radiographer. Once in the digital domain, that data can be stored, and fused with radioscopic and other forms of digital data. The result is more productive analysis and management of radiographic inspection data. The principal function of the NDT Scan IV digital radiography system is the digitization, enhancement and storage of radiographic images

  10. Development of Image Analysis Software of MAXI

    Science.gov (United States)

    Eguchi, S.; Ueda, Y.; Hiroi, K.; Isobe, N.; Sugizaki, M.; Suzuki, M.; Tomida, H.; Maxi Team

    2010-12-01

    Monitor of All-sky X-ray Image (MAXI) is an X-ray all-sky monitor, attached to the Japanese experiment module Kibo on the International Space Station. The main scientific goals of the MAXI mission include the discovery of X-ray novae followed by prompt alerts to the community (Negoro et al., in this conference), and production of X-ray all-sky maps and new source catalogs with unprecedented sensitivities. To extract the best capabilities of the MAXI mission, we are working on the development of detailed image analysis tools. We utilize maximum likelihood fitting to a projected sky image, where we take account of the complicated detector responses, such as the background and point spread functions (PSFs). The modeling of PSFs, which strongly depend on the orbit and attitude of MAXI, is a key element in the image analysis. In this paper, we present the status of our software development.

  11. Design Criteria For Networked Image Analysis System

    Science.gov (United States)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image database management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, database techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  12. Multilocus genetic analysis of brain images

    Directory of Open Access Journals (Sweden)

    Derrek Paul Hibar

    2011-10-01

    Full Text Available The quest to identify genes that influence disease is now being extended to find genes that affect biological markers of disease, or endophenotypes. Brain images, in particular, provide exquisitely detailed measures of anatomy, function, and connectivity in the living human brain, and have identified characteristic features of psychiatric and neurological disorders. The emerging field of imaging genomics is discovering important genetic variants associated with brain structure and function, which in turn influence disease risk and fundamental cognitive processes. Statistical approaches for testing genetic associations are not straightforward to apply to brain images because brain imaging phenotypes are generally high dimensional and spatially complex. Neuroimaging phenotypes comprise three dimensional maps across many points in the brain, fiber tracts, shape-based analysis, and connectivity matrices, or networks. These complex data types require new methods for data reduction and joint consideration of the image and the genome. Image-wide, genome-wide searches are now feasible, but they can be greatly empowered by sparse regression or hierarchical clustering methods that isolate promising features, boosting statistical power. Here we review the evolution of statistical approaches to assess genetic influences on the brain. We outline the current state of multivariate statistics in imaging genomics, and future directions, including meta-analysis. We emphasize the power of novel multivariate approaches to discover reliable genetic influences with small effect sizes.

  13. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridge

  14. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    National Research Council Canada - National Science Library

    Liang, Jianming

    2001-01-01

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence...

  15. Document Image Understanding - 1997

    National Research Council Canada - National Science Library

    Doermann, David S

    1998-01-01

    .... Each reference is classified by major topic. Areas covered include, but are not limited to, preprocessing, models and representations, on-line recognition, off-line recognition, graphics recognition and interpretation, page processing, post...

  16. Aneuploid polyclonality in image analysis.

    Science.gov (United States)

    Alderisio, M; Ribotta, G; Giarnieri, E; Midulla, C; Ferranti, S; Narilli, P; Nofroni, I; Vecchione, A

    1996-05-01

    Solid tumors such as colorectal adenocarcinomas consist of biologically diverse cell subpopulations. The nuclear DNA content of tumor cells in colorectal carcinomas may be studied with different techniques of intranuclear DNA quantification. In the current study, the DNA ploidy of samples obtained from 68 patients with colorectal carcinoma (age ranging from 46 to 86 years, mean age 66 years), treated with radical surgery between 1992 and 1995, was analyzed. DNA ploidy was assessed using a CAS 200 image analyzer and was evaluated on neoplastic tissue and undamaged healthy mucosa obtained from the edges of the surgical resection. Approximately 150-300 cells were analyzed for each sample. The aim of this study was to evaluate the prognostic significance of the polyclonal cases correlated with lymph node infiltration and disease-free survival. The pathological stage according to the TNM classification was compared to ploidy: an increase in multiple stemlines was observed in stage III cases, i.e., a progression towards aneuploidy and multiple stemlines was significantly associated with lymphatic metastasis (p<0.0003). Concerning distant metastasis, we found a correlation between stage IV and polyclonality. A significant correlation was observed between disease-free survival and aneuploid and polyclonal cases (p<0.0053). In polyclonal cases a ninefold greater relapse risk compared to the non-polyclonal cases was observed (p<0.0004). In two cases, the adenocarcinoma of the sigmoid colon was polyclonal and its hepatic metastasis contained the predominant aneuploid clone with the same cytometric characteristics (DNA index) as the original lesion.

  17. How do hospitals deal with euthanasia requests in Flanders (Belgium)? A content analysis of policy documents.

    Science.gov (United States)

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Denier, Yvonne; Schotsmans, Paul; Gastmans, Chris

    2008-05-01

    To describe the form and content of ethics policies on euthanasia in Flemish hospitals and the possible influence of religious affiliation on policy content. Content analysis of policy documents. Forty-two documents were analyzed. All policies contained procedures; 57% included the position paper on which the hospital's stance on euthanasia was based. All policies described their hospital's stance on euthanasia in competent terminally ill patients (n=42); 10 and 4 policies, respectively, did not describe their stance in incompetent terminally and non-terminally ill patients. Catholic hospitals restrictively applied the euthanasia law with palliative procedures and interdisciplinary deliberations. The policies described several phases of the euthanasia care process--confrontation with euthanasia request (93%), decision-making process (95%), care process in cases of no-euthanasia decision (38%), preparation and performance of euthanasia (79%), and aftercare (81%)--as well as involvement of caregivers, patients, and relatives; ethical issues; support for caregivers; reporting; and practical examples of professional attitudes and communication skills. Euthanasia policies go beyond summarizing the euthanasia law by addressing the importance of the euthanasia care process, in which palliative care and interdisciplinary cooperation are important factors. Euthanasia policies provide tangible guidance for physicians and nurses on handling euthanasia requests.

  18. Applying a sociolinguistic model to the analysis of informed consent documents.

    Science.gov (United States)

    Granero-Molina, José; Fernández-Sola, Cayetano; Aguilera-Manrique, Gabriel

    2009-11-01

    Information on the risks and benefits related to surgical procedures is essential for patients in order to obtain their informed consent. Some disciplines, such as sociolinguistics, offer insights that are helpful for patient-professional communication in both written and oral consent. Communication difficulties become more acute when patients make decisions through an informed consent document because they may sign this with a lack of understanding and information, and consequently feel deprived of their freedom to make their choice about different treatments or surgery. This article discusses findings from documentary analysis using the sociolinguistic SPEAKING model, which was applied to the general and specific informed consent documents required for laparoscopic surgery of the bile duct at Torrecárdenas Hospital, Almería, Spain. The objective of this procedure was to identify flaws when information was provided, together with its readability, its voluntary basis, and patients' consent. The results suggest potential linguistic communication difficulties, different languages being used, cultural clashes, asymmetry of communication between professionals and patients, assignment of rights on the part of patients, and overprotection of professionals and institutions.

  19. Reading the Music and Understanding the Therapeutic Process: Documentation, Analysis and Interpretation of Improvisational Music Therapy

    Directory of Open Access Journals (Sweden)

    Deborah Parker

    2011-01-01

    Full Text Available This article is concerned primarily with the challenges of presenting clinical material from improvisational music therapy. My aim is to propose a model for the transcription of music therapy material, or “musicotherapeutic objects” (comparable to Bion’s “psychoanalytic objects”), which preserves the integrated “gestalt” of the musical experience as far as possible, whilst also supporting detailed analysis and interpretation. Unwilling to resort to the use of visual documentation, but aware that many important indicators in music therapy are non-sounding, I propose a richly annotated score, where traditional music notation is integrated with graphic and verbal additions in order to document non-sounding events. This model is illustrated within the context of a clinical case with a high-functioning autistic woman. The four transcriptions, together with the original audio tracks, present significant moments during the course of music therapy, attesting to the development of the dyadic relationship, with reference to John Bowlby’s concept of a “secure base” as the most appropriate dynamic environment for therapy.

  20. Evaluation of Rapid Stain IDentification (RSID™ Reader System for Analysis and Documentation of RSID™ Tests

    Directory of Open Access Journals (Sweden)

    Pravatchai W. Boonlayangoor

    2013-08-01

    Full Text Available The ability to detect the presence of body fluids is a crucial first step in documenting and processing forensic evidence. The Rapid Stain IDentification (RSID™) tests for blood, saliva, semen and urine are lateral flow immunochromatographic strip tests specifically designed for forensic use. Like most lateral flow strips, the membrane components of the test are enclosed in a molded plastic cassette with a sample well and an observation window. No specialized equipment is required to use these tests or to score the results seen in the observation window; however, the utility of these tests can be enhanced if an electronic record of the test results can be obtained, preferably by a small hand-held device that could be used in the field under low light conditions. Such a device should also be able to “read” the lateral flow strips and accurately record the results of the test as either positive, i.e., the body fluid was detected, or negative, i.e., the body fluid was not detected. Here we describe the RSID™ Reader System, a ruggedized strip test reader unit that allows analysis and documentation of RSID™ lateral flow strip tests using pre-configured settings, and show that the RSID™ Reader can accurately and reproducibly report and record correct results from RSID™ blood, saliva, semen, and urine tests.

  1. Automated regional behavioral analysis for human brain images.

    Science.gov (United States)

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.
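
    The ROI test described above reduces to comparing an observed fraction of foci with a uniform-expectation fraction; a standard binomial z-score does this, though the paper's exact statistic may differ from this sketch.

      import math

      def roi_zscore(n_in_roi, n_total, expected_fraction):
          # z-score of the observed in-ROI fraction against a uniform expectation.
          p_obs = n_in_roi / n_total
          se = math.sqrt(expected_fraction * (1 - expected_fraction) / n_total)
          return (p_obs - expected_fraction) / se

      # e.g. 120 of 2000 foci for a behavior fall in an ROI covering 3% of the brain
      print(round(roi_zscore(120, 2000, 0.03), 1))   # ~7.9, well above the 3.0 cutoff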

  2. Market Analysis and Consumer Impacts Source Document. Part III. Consumer Behavior and Attitudes Toward Fuel Efficient Vehicles

    Science.gov (United States)

    1980-12-01

    This source document on motor vehicle market analysis and consumer impacts consists of three parts. Part III consists of studies and reviews on: consumer awareness of fuel efficiency issues; consumer acceptance of fuel efficient vehicles; car size ch...

  3. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
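
    The first two stages have standard off-the-shelf analogues; the sketch below uses OpenCV's Canny detector and probabilistic Hough transform as stand-ins for the paper's custom edge chaining and top-down line detection, on a synthetic page with one storyboard border.

      import cv2
      import numpy as np

      page = np.full((400, 300), 255, dtype=np.uint8)
      cv2.rectangle(page, (20, 20), (140, 180), 0, 2)   # a "storyboard" border

      edges = cv2.Canny(page, 50, 150)                  # edge map
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                              minLineLength=50, maxLineGap=5)
      print(0 if lines is None else len(lines), "line segments")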

  4. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine the lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-dipalmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable

  5. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. The pellets were divided into two groups: one with pellets coated using synthetic astaxanthin in fish oil and the other with pellets coated ... products with optimal use of pigment and minimum amount of waste.

  6. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    Energy Technology Data Exchange (ETDEWEB)

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  7. A document analysis of drowning prevention education resources in the United States.

    Science.gov (United States)

    Katchmarchi, Adam Bradley; Taliaferro, Andrea R; Kipfer, Hannah Joy

    2018-03-01

    There have been long-standing calls to better educate the public at large on risks of drowning; yet limited evaluation has taken place on current resources in circulation. The purpose of this qualitative research is to develop an understanding of the content in currently circulated drowning prevention resources in the United States. Data points (n = 451) consisting of specific content within 25 different drowning prevention educational resources were analyzed using document analysis methods; a grounded theory approach was employed to allow for categorical development and indexing of the data. Results revealed six emerging categories, including safety precautions (n = 152), supervision (n = 109), preventing access (n = 57), safety equipment (n = 46), emergency procedures (n = 46), and aquatic education (n = 41). Results provide an initial insight into the composition of drowning prevention resources in the United States and provide a foundation for future research.

  8. Canister storage building (CSB) safety analysis report phase 3: Safety analysis documentation supporting CSB construction

    International Nuclear Information System (INIS)

    Garvin, L.J.

    1997-01-01

    The Canister Storage Building (CSB) will be constructed in the 200 East Area of the U.S. Department of Energy (DOE) Hanford Site. The CSB will be used to stage and store spent nuclear fuel (SNF) removed from the Hanford Site K Basins. The objective of this chapter is to describe the characteristics of the site on which the CSB will be located. This description will support the hazard analysis and accident analyses in Chapter 3.0. The purpose of this report is to provide an evaluation of the CSB design criteria, the design's compliance with the applicable criteria, and the basis for authorization to proceed with construction of the CSB.

  9. Fourier analysis: from cloaking to imaging

    International Nuclear Information System (INIS)

    Wu, Kedi; Ping Wang, Guo; Cheng, Qiluan

    2016-01-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to analyse invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers. (review)

  10. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to analyse invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  11. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  12. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality, presented in two research directions. Initially...

  13. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Full Text Available Manual offline analysis, of a scanning electron microscopy (SEM image, is a time consuming process and requires continuous human intervention and efforts. This paper presents an image processing based method for automated offline analyses of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at the noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state of the art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM. The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD calculations, with the ultimate goal of measuring the parameters, like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited a good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm.

  14. Data Analysis Strategies in Medical Imaging.

    Science.gov (United States)

    Parmar, Chintan; Barry, Joseph D; Hosny, Ahmed; Quackenbush, John; Aerts, Hugo Jwl

    2018-03-26

    Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. Sophistication of artificial intelligence (AI) has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, there are critical challenges associated with the analysis of medical imaging data. While some of these challenges are specific to the imaging field, many others like reproducibility and batch effects are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality, but will also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources. Copyright ©2018, American Association for Cancer Research.

  15. Risk D and D Rapid Prototype: Scenario Documentation and Analysis Tool

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Seiple, Timothy E.

    2009-01-01

    This report describes the process and methodology associated with a rapid prototype tool for integrating project risk analysis and health and safety (H and S) risk analysis for decontamination and decommissioning (D and D) projects. The objective of the D and D Risk Management Evaluation and Work Sequencing Standardization Project under DOE EM-23 is to recommend or develop practical risk-management tools for the decommissioning of nuclear facilities. PNNL has responsibility under this project for recommending or developing computer-based tools that facilitate the evaluation of risks in order to optimize the sequencing of D and D work. PNNL's approach is to adapt, augment, and integrate existing resources rather than to develop a new suite of tools. Methods for the evaluation of H and S risks associated with work in potentially hazardous environments are well established. Several approaches exist which, collectively, are referred to as process hazard analysis (PHA). A PHA generally involves the systematic identification of accidents, exposures, and other adverse events associated with a given process or work flow. This identification is usually achieved in a brainstorming environment or by other means of eliciting informed opinion. The likelihoods of adverse events (scenarios) and their associated consequence severities are estimated against pre-defined scales, from which risk indices are then calculated. A similar process is encoded in various project risk software products that facilitate the quantification of schedule and cost risks associated with adverse scenarios. However, risk models do not generally capture both project risk and H and S risk. The intent of the project reported here is to produce a tool that facilitates the elicitation, characterization, and documentation of both project risk and H and S risk based on defined sequences of D and D activities. By considering alternative D and D sequences, comparison of the predicted risks can ...
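
    As a hedged illustration of the PHA-style scoring described above (the scenario names and the 1-to-5 ordinal scales below are hypothetical, not taken from the PNNL tool), a multiplicative risk index can be computed and used for ranking as follows (Python):

      from dataclasses import dataclass

      @dataclass
      class Scenario:
          name: str
          likelihood: int  # ordinal score, e.g. 1 (rare) to 5 (frequent)
          severity: int    # ordinal score, e.g. 1 (negligible) to 5 (catastrophic)

      def risk_index(s):
          # A common PHA convention: risk index = likelihood x severity.
          return s.likelihood * s.severity

      scenarios = [Scenario("glovebox breach during dismantling", 2, 4),
                   Scenario("worker fall from scaffold", 3, 3)]
      for s in sorted(scenarios, key=risk_index, reverse=True):
          print(f"{s.name}: risk index {risk_index(s)}")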

  16. WEB NEWS DOCUMENTS CLUSTERING IN INDONESIAN LANGUAGE USING SINGULAR VALUE DECOMPOSITION-PRINCIPAL COMPONENT ANALYSIS (SVDPCA) AND ANT ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Arif Fadllullah

    2016-02-01

    Ant-based document clustering is a clustering method that measures text document similarity based on the shortest path between nodes (trial phase) and determines the optimal clusters from the sequence of document similarities (dividing phase). The processing time of the trial phase of Ant algorithms to build document vectors is very long because of the high-dimensional Document-Term Matrix (DTM). In this paper, we propose a document clustering method that optimizes dimension reduction using Singular Value Decomposition-Principal Component Analysis (SVDPCA) and Ant algorithms. SVDPCA reduces the size of the DTM dimensions by converting the freq-term of the conventional DTM to the score-pc of a Document-PC Matrix (DPCM). The Ant algorithms then create the document clustering using the vector space model based on the dimension reduction result of the DPCM. Experimental results on 506 news documents in Indonesian demonstrated that the proposed method worked well, optimizing dimension reduction by up to 99.7%. We could speed up the execution time of the trial phase efficiently and maintain clustering quality; the best F-measure achieved in the experiments was 0.88 (88%).
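
    As an illustration of the SVDPCA step described above, converting the frequency-based DTM into low-dimensional document score vectors (the DPCM) can be done with a thin SVD. The toy matrix and the choice of two components below are hypothetical:

      import numpy as np

      def svd_pca_reduce(dtm, n_components):
          """Project a document-term matrix onto its leading principal components."""
          centered = dtm - dtm.mean(axis=0)   # PCA requires column centering
          u, s, vt = np.linalg.svd(centered, full_matrices=False)
          # Rows of U*S are the per-document scores ("score-pc" vectors).
          return u[:, :n_components] * s[:n_components]

      # Hypothetical tiny DTM: 4 documents x 6 terms (term frequencies).
      dtm = np.array([[2., 0., 1., 0., 0., 3.],
                      [1., 0., 2., 0., 1., 2.],
                      [0., 3., 0., 2., 2., 0.],
                      [0., 2., 1., 3., 1., 0.]])
      dpcm = svd_pca_reduce(dtm, 2)   # each document becomes a 2-D vector

    The Ant clustering phase would then operate on these short document vectors instead of the original high-dimensional term space.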

  17. The Role of Development Agencies in Touristic Branding of Cities, A Document Analysis on Regional Plans

    Directory of Open Access Journals (Sweden)

    Emrah ÖZKUL

    2012-12-01

    The objective of the present research is to determine the role of development agencies in the branding of cities in their regions. The study investigates the agencies' role in identifying unknown tourist values and in determining and remedying deficiencies and opportunities, in accordance with each agency's goals and objectives. To achieve this goal, document analysis, a qualitative research method, was applied to the Regional Plans published by the Development Agencies. The data obtained were subjected to descriptive analysis and, in the case of some undefined concepts, to in-depth content analysis. Despite all the advantages Turkey possesses, many regions in Turkey have not been promoted sufficiently at the national and international level, and the tourism industry has consequently been overshadowed by the industrial and agricultural sectors. For this reason, development agencies, by determining the values of regional tourism, have undertaken the task of changing tourist consumers' perceptions through targeted projects in order to accomplish city branding. Thus, it was concluded that cities could become centers of attraction and brands for both investors and visitors.

  18. Final Safety Analysis Document for Building 693 Chemical Waste Storage Building at Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Salazar, R.J.; Lane, S.

    1992-02-01

    This Safety Analysis Document (SAD) for the Lawrence Livermore National Laboratory (LLNL) Building 693, Chemical Waste Storage Building (designated as the Building 693 Container Storage Unit in the Laboratory's RCRA Part B permit application), provides the necessary information and analyses to conclude that Building 693 can be operated at low risk without unduly endangering the safety of the building operating personnel or adversely affecting the public or the environment. This Building 693 SAD consists of eight sections and supporting appendices. Section 1 presents a summary of the facility designs and operations, and Section 2 summarizes the safety analysis method and results. Section 3 describes the site, the facility design, operations and management structure. Sections 4 and 5 present the safety analysis and operational safety requirements (OSRs). Section 6 reviews Hazardous Waste Management's (HWM) Quality Assurance (QA) program. Section 7 lists the references and background material used in the preparation of this report. Section 8 lists acronyms, abbreviations and symbols. The appendices contain supporting analyses, definitions, and descriptions that are referenced in the body of this report.

  19. Canister storage building (CSB) safety analysis report phase 3: Safety analysis documentation supporting CSB construction

    Energy Technology Data Exchange (ETDEWEB)

    Garvin, L.J.

    1997-04-28

    The Canister Storage Building (CSB) will be constructed in the 200 East Area of the U.S. Department of Energy (DOE) Hanford Site. The CSB will be used to stage and store spent nuclear fuel (SNF) removed from the Hanford Site K Basins. The objective of this chapter is to describe the characteristics of the site on which the CSB will be located. This description will support the hazard analysis and accident analyses in Chapter 3.0. The purpose of this report is to provide an evaluation of the CSB design criteria, the design's compliance with the applicable criteria, and the basis for authorization to proceed with construction of the CSB.

  20. Measuring toothbrush interproximal penetration using image analysis

    Science.gov (United States)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
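
    The core measurement (stained area removed inside masked interproximal regions) could be scripted as below; the grayscale convention, the fixed threshold and the function name are our assumptions, not the published protocol:

      import numpy as np

      def stain_removed_fraction(before, after, mask, threshold=0.5):
          """Fraction of stained area inside a region mask removed by brushing.

          `before` and `after` are grayscale images scaled to [0, 1];
          `mask` is a boolean array marking one interproximal region.
          """
          stained_before = (before < threshold) & mask   # dark pixels = stain
          stained_after = (after < threshold) & mask
          removed = stained_before & ~stained_after
          return removed.sum() / max(stained_before.sum(), 1)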

  1. A virtual laboratory for medical image analysis

    NARCIS (Netherlands)

    Olabarriaga, Sílvia D.; Glatard, Tristan; de Boer, Piter T.

    2010-01-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented

  2. Scanning transmission electron microscopy imaging and analysis

    CERN Document Server

    Pennycook, Stephen J

    2011-01-01

    Provides the first comprehensive treatment of the physics and applications of this mainstream technique for imaging and analysis at the atomic level. Presents applications of STEM in condensed matter physics, materials science, catalysis, and nanoscience. Suitable for graduate students learning microscopy, researchers wishing to utilize STEM, as well as for specialists in other areas of microscopy. Edited and written by leading researchers and practitioners.

  3. Using Image Analysis to Build Reading Comprehension

    Science.gov (United States)

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  4. Systematic documentation and analysis of human genetic variation using the microattribution approach

    Science.gov (United States)

    Giardine, Belinda; Borg, Joseph; Higgs, Douglas R.; Peterson, Kenneth R.; Maglott, Donna; Basak, A. Nazli; Clark, Barnaby; Faustino, Paula; Felice, Alex E.; Francina, Alain; Gallivan, Monica V. E.; Georgitsi, Marianthi; Gibbons, Richard J.; Giordano, Piero C.; Harteveld, Cornelis L.; Joly, Philippe; Kanavakis, Emmanuel; Kollia, Panagoula; Menzel, Stephan; Miller, Webb; Moradkhani, Kamran; Old, John; Papachatzopoulou, Adamantia; Papadakis, Manoussos N.; Papadopoulos, Petros; Pavlovic, Sonja; Philipsen, Sjaak; Radmilovic, Milena; Riemer, Cathy; Schrijver, Iris; Stojiljkovic, Maja; Thein, Swee Lay; Traeger-Synodinos, Jan; Tully, Ray; Wada, Takahito; Waye, John; Wiemann, Claudia; Zukic, Branka; Chui, David H. K.; Wajcman, Henri; Hardison, Ross C.; Patrinos, George P.

    2013-01-01

    We developed a series of interrelated locus-specific databases to store all published and unpublished genetic variation related to these disorders, and then implemented microattribution to encourage submission of unpublished observations of genetic variation to these public repositories. A total of 1,941 unique genetic variants in 37 genes, encoding globins (HBA2, HBA1, HBG2, HBG1, HBD, HBB) and other erythroid proteins (ALOX5AP, AQP9, ARG2, ASS1, ATRX, BCL11A, CNTNAP2, CSNK2A1, EPAS1, ERCC2, FLT1, GATA1, GPM6B, HAO2, HBS1L, KDR, KL, KLF1, MAP2K1, MAP3K5, MAP3K7, MYB, NOS1, NOS2, NOS3, NOX3, NUP133, PDE7B, SMAD3, SMAD6, and TOX) are currently documented in these databases with reciprocal attribution of microcitations to data contributors. Our project provides the first example of implementing microattribution to incentivise submission of all known genetic variation in a defined system. It has demonstrably increased the reporting of human variants and now provides a comprehensive online resource for systematically describing human genetic variation in the globin genes and other genes contributing to hemoglobinopathies and thalassemias. The large repository of previously reported data, together with more recent data, acquired by microattribution, demonstrates how the comprehensive documentation of human variation will provide key insights into normal biological processes and how these are perturbed in human genetic disease. Using the microattribution process set out here, datasets which took decades to accumulate for the globin genes could be assembled rapidly for other genes and disease systems. The principles established here for the globin gene system will serve as a model for other systems and the analysis of other common and/or complex human genetic diseases. PMID:21423179

  5. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics is important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
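
    As an illustration of the PSD-based stability check mentioned above, Welch's method applied to a per-frame flame luminosity trace reveals dominant oscillation frequencies. The frame rate and the synthetic signal below are assumptions for the sketch:

      import numpy as np
      from scipy.signal import welch

      # Hypothetical luminosity trace: mean pixel intensity per video frame.
      fs = 200.0                        # assumed frame rate in Hz
      t = np.arange(0, 10, 1 / fs)
      trace = 1.0 + 0.2 * np.sin(2 * np.pi * 35 * t) \
                  + 0.05 * np.random.randn(t.size)

      # Welch's method gives a smoothed power spectral density estimate.
      freqs, psd = welch(trace, fs=fs, nperseg=512)
      dominant = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
      print(f"dominant oscillation frequency: {dominant:.1f} Hz")

    A sharp, persistent peak in such a spectrum would be consistent with the thermoacoustic oscillations the abstract mentions as a source of instability.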

  6. Automated Aesthetic Analysis of Photographic Images.

    Science.gov (United States)

    Aydın, Tunç Ozan; Smolic, Aljoscha; Gross, Markus

    2015-01-01

    We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts at highly subjective aesthetic judgment problems, such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis of comparison between aesthetic properties of different photographs. To that end our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes that together form an "aesthetic signature" of an image. We show that aesthetic signatures can still be used to improve upon the current state-of-the-art in automatic aesthetic judgment, but they also enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and providing aesthetic feedback during multi-scale contrast manipulation.

  7. Local development strategies for inner areas in Italy. A comparative analysis based on plan documents

    Directory of Open Access Journals (Sweden)

    Gabriella Punziano

    2016-12-01

    Within the huge literature on local development policies produced across different disciplines, comparatively little attention has been paid to an element as relevant as economic, financial and social capital: the cognitive element, needed in strategic thinking and complexity management, the “collective brain” guiding the decision-making process. In this paper, we investigate what we consider a direct “proxy” for this variable, which is supposed to incorporate the “usable knowledge” assisting those making policy choices: language. Language shapes the way problems are conceived, fixes priorities and delimits the range of strategic options. More specifically, our research question asks which contextual factors are at stake in local development strategy design. The case studies were chosen among the pilot areas included in the Italian “National Strategy for Inner Areas”. Through a multidimensional content analysis of the plan documents available online, we explored the ways in which development strategies are locally interpreted. The techniques we used allowed us to make a comparative analysis, testing three effects that could have influenced local policy design: a geographical effect, a concept/policy transfer effect, and a framing effect. Broader, interesting reflections were drawn from the research findings on the locally embedded ability to design consistent and effective development strategies.

  8. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particle, there is in practice a nondestructive technique named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. A computed analysis of a film is possible using a facility with the following main components: a film illuminator, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of Carl Zeiss Jena type, which was adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, was developed at INR Pitesti which, together with the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  9. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system that processes Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  10. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward ...

  11. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    ... Microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying ... of strain measurements from TEM images, and investigate the stability of these measurements to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential. The simulated interface ...

  12. Study of TCP densification via image analysis

    International Nuclear Information System (INIS)

    Silva, R.C.; Alencastro, F.S.; Oliveira, R.N.; Soares, G.A.

    2011-01-01

    Among ceramic materials that mimic human bone, β-type tri-calcium phosphate (β-TCP) has shown appropriate chemical stability and a superior resorption rate compared to hydroxyapatite. In order to increase its mechanical strength, the material is sintered under controlled time and temperature conditions to obtain densification without phase change. In the present work, tablets were produced via uniaxial compression and then sintered at 1150°C for 2 h. Analysis via XRD and FTIR showed that the sintered tablets were composed only of β-TCP. The SEM images were used for quantification of grain size and volume fraction of pores via digital image analysis. The tablets showed a small pore fraction (between 0.67% and 6.38%) and a homogeneous grain size distribution (∼2 μm). Therefore, the analysis method seems viable for quantifying porosity and grain size. (author)
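
    A hedged sketch of the pore-fraction measurement described above (the global threshold and the assumption that pores image darker than grains are ours; the record does not specify the segmentation used):

      import numpy as np

      def pore_fraction(gray, threshold):
          """Area fraction of pores in a grayscale SEM image scaled to [0, 1]."""
          pores = gray < threshold      # pores assumed darker than grains
          return pores.sum() / gray.size

      # e.g. pore_fraction(img, 0.3) might return 0.021, i.e. 2.1 % porosity,
      # on a hypothetical image; the values here are illustrative only.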

  13. Analysis Relationship Among Descriptor, References and Citation to Construct the Inherent Structure of Document Collection

    International Nuclear Information System (INIS)

    Hasibuan, Zainal A.; Mustangimah

    2001-01-01

    There are many characteristics that can be used to identify a document, covering characteristics of the document itself, of cited documents, and of citing documents. This research explored the inherent structure of a document collection as one of the main components of an information retrieval system. The characteristics examined are: descriptors, references (cited documents), and citations (citing documents). Three independent variables were studied: co-descriptor, bibliographic coupling, and co-citation. A test collection was constructed by searching on a single descriptor, "information retrieval", in the CD-ROM version of the Education Resource Information Clearinghouse (ERIC), covering the period 1981 through 1985. Descriptors were extracted from ERIC; cited and citing documents associated with the test collection were derived from the Social Sciences Citation Index (SSCI), covering the period 1981 through 1990. Three hypotheses were tested in this study: (1) the higher the frequency of co-descriptors between documents, the higher the frequencies of their bibliographic coupling and co-citation; (2) the higher the frequency of bibliographic coupling between documents, the higher the frequencies of their co-citation and co-descriptors; and (3) the higher the frequency of co-citation between documents, the higher the frequencies of their co-descriptors and bibliographic coupling. The results showed that all three hypotheses are supported statistically and that there is a significant linear relationship among the observed variables. This means that there is a significant relationship among descriptors, references, and citations, so that they can be used to construct the inherent structure of a document collection in order to improve information retrieval system performance.
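
    The three variables studied have compact matrix formulations. Given a binary citation matrix A, where A[i, j] = 1 if document i cites document j (the 4-document example below is a toy, not the ERIC/SSCI data), bibliographic coupling counts shared references and co-citation counts shared citing documents; co-descriptor frequencies follow the same pattern from a document-descriptor matrix:

      import numpy as np

      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])

      coupling = A @ A.T     # [i, k]: number of references shared by docs i and k
      cocitation = A.T @ A   # [j, l]: number of documents citing both j and l

      print(coupling[0, 1])    # 1: docs 0 and 1 share one reference (doc 2)
      print(cocitation[2, 3])  # 1: docs 2 and 3 are co-cited once (by doc 1)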

  14. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    A new efficient compression scheme for bilevel images containing locally straight edges is presented. This paper is especially focused on lossless (intra) coding of binary shapes for image and video objects, but other images with similar characteristics, such as line drawings, layers of digital maps, or segmentation maps, are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used ...

  15. Analysis of renal nuclear medicine images

    International Nuclear Information System (INIS)

    Jose, R.M.J.

    2000-01-01

    Nuclear medicine imaging of the renal system involves producing time-sequential images showing the distribution of a radiopharmaceutical in the renal system. Producing numerical and graphical data from nuclear medicine studies requires defining regions of interest (ROIs) around various organs within the field of view, such as the left kidney, right kidney and bladder. Automating this process has several advantages: a saving of a clinician's time; enhanced objectivity and reproducibility. This thesis describes the design, implementation and assessment of an automatic ROI generation system. The performance of the system described in this work is assessed by comparing the results to those obtained using manual techniques. Since nuclear medicine images are inherently noisy, the sequence of images is reconstructed using the first few components of a principal components analysis in order to reduce the noise in the images. An image of the summed reconstructed sequence is then formed. This summed image is segmented by using an edge co-occurrence matrix as a feature space for simultaneously classifying regions and locating boundaries. Two methods for assigning the regions of a segmented image to organ class labels are assessed. The first method is based on using Dempster-Shafer theory to combine uncertain evidence from several sources into a single evidence; the second method makes use of a neural network classifier. The use of each technique in classifying the regions of a segmented image are assessed in separate experiments using 40 real patient-studies. A comparative assessment of the two techniques shows that the neural network produces more accurate region labels for the kidneys. The optimum neural system is determined experimentally. Results indicate that combining temporal and spatial information with a priori clinical knowledge produces reasonable ROIs. Consistency in the neural network assignment of regions is enhanced by taking account of the contextual
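
    The PCA noise reduction described above (reconstructing the image sequence from its first few principal components) can be sketched as follows; the array layout and the component count are illustrative assumptions:

      import numpy as np

      def pca_denoise_sequence(frames, n_components=3):
          """Reconstruct a (T, H, W) image sequence from its leading components."""
          t, h, w = frames.shape
          x = frames.reshape(t, h * w).astype(float)
          mean = x.mean(axis=0)
          u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
          # Keep only the leading components; the remainder is mostly noise.
          x_hat = u[:, :n_components] @ np.diag(s[:n_components]) \
                  @ vt[:n_components] + mean
          return x_hat.reshape(t, h, w)

    The summed image used for segmentation is then simply the reconstructed sequence summed over time, e.g. pca_denoise_sequence(frames).sum(axis=0).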

  16. Image analysis for ophthalmological diagnosis: image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  17. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. The effort to do this is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  18. An image analyzer system for the analysis of nuclear traces

    International Nuclear Information System (INIS)

    Cuapio O, A.

    1990-10-01

    Within the project on nuclear traces and their application techniques, methods are being developed for the detection of nuclear reactions of low cross section (not detectable by conventional methods), for the study of accidental and personal neutron dosemeters, and for other purposes. All these studies are based on the fact that charged particles leave latent traces in dielectrics which, if etched with appropriate chemical solutions, are revealed until they become visible under the optical microscope. From the analysis of the different trace shapes, it is possible to obtain information on the characteristic parameters of the incident particles (charge, mass and energy). From the trace density it is possible to obtain information on the flux of the incident radiation and consequently on the received dose. To carry out this analysis, different systems have been designed and coupled, which has allowed the solution of the diverse problems outlined. Nevertheless, it has been found that to make this activity more versatile it is necessary to have an image analyzer system that allows us to digitize, process and display the images more rapidly. The present document presents the proposal to acquire the necessary components for assembling an Image Analyzer System in support of the mentioned project. (Author)

  19. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time-consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time wasted in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter...
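
    The graph-theory metrics mentioned (node degree, clustering and so on) derive directly from a connectivity matrix. A minimal Python sketch, independent of the MATLAB toolbox described in the record:

      import numpy as np

      def degree_and_clustering(adj):
          """Degree and clustering coefficient of a binary undirected graph."""
          a = (adj > 0).astype(float)
          np.fill_diagonal(a, 0)
          deg = a.sum(axis=1)
          # Triangles through each node: diagonal of A^3, halved.
          triangles = np.diagonal(a @ a @ a) / 2.0
          possible = deg * (deg - 1) / 2.0
          with np.errstate(divide="ignore", invalid="ignore"):
              clustering = np.where(possible > 0, triangles / possible, 0.0)
          return deg, clustering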

  20. Teaching Integrity in Empirical Research: A Protocol for Documenting Data Management and Analysis

    Science.gov (United States)

    Ball, Richard; Medeiros, Norm

    2012-01-01

    This article describes a protocol the authors developed for teaching undergraduates to document their statistical analyses for empirical research projects so that their results are completely reproducible and verifiable. The protocol is guided by the principle that the documentation prepared to accompany an empirical research project should be…

  1. Language Learning in the Public Eye: An Analysis of Newspapers and Official Documents in England

    Science.gov (United States)

    Graham, Suzanne; Santos, Denise

    2015-01-01

    This article considers the issue of low levels of motivation for foreign language learning in England by exploring how language learning is conceptualised by different key voices in that country through the examination of written data: policy documents and reports on the UK's language needs, curriculum documents and press articles. The extent to…

  2. Single particle raster image analysis of diffusion.

    Science.gov (United States)

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimating the image correlation function, we introduce the method SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimate the diffusion coefficient for each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates. It also directly gives standard error estimates. The method should be possible to extend to the study of heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying the SPRIA method is that the particle concentration is low enough that we can identify the individual particles. We also describe a bootstrap method for estimating the standard error of standard RICS. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
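
    The raster-scan-aware likelihood of SPRIA is not reproduced in the abstract; as a hedged sketch, for a freely diffusing particle in two dimensions whose positions are sampled at a fixed interval dt, the maximum likelihood estimate of D reduces to the mean squared step over 4*dt. The i.i.d.-step assumption and the function names below are ours:

      import numpy as np

      def diffusion_mle_2d(track, dt):
          """ML estimate of D for one 2-D Brownian track of shape (n, 2).

          For i.i.d. Gaussian steps, D_hat = <dx^2 + dy^2> / (4 * dt).
          """
          steps = np.diff(track, axis=0)
          return np.mean(np.sum(steps ** 2, axis=1)) / (4.0 * dt)

      def diffusion_estimate(tracks, dt):
          """Average the per-particle estimates; also return a standard error."""
          d = np.array([diffusion_mle_2d(tr, dt) for tr in tracks])
          return d.mean(), d.std(ddof=1) / np.sqrt(d.size)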

  3. DOE Integrated Safeguards and Security (DISS) historical document archival and retrieval analysis, requirements and recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Guyer, H.B.; McChesney, C.A.

    1994-10-07

    The overall primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve and manage documents currently contained in the personnel security folders in DOE Operations Offices' vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.

  4. Supporting documents for LLL area 27 (410 area) safety analysis reports, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N. [comp.]

    1977-02-01

    The following appendices are common to the LLL Safety Analysis Reports Nevada Test Site and are included here as supporting documents to those reports: Environmental Monitoring Report for the Nevada Test Site and Other Test Areas Used for Underground Nuclear Detonations, U. S. Environmental Protection Agency, Las Vegas, Rept. EMSL-LV-539-4 (1976); Selected Census Information Around the Nevada Test Site, U. S. Environmental Protection Agency, Las Vegas, Rept. NERC-LV-539-8 (1973); W. J. Hannon and H. L. McKague, An Examination of the Geology and Seismology Associated with Area 410 at the Nevada Test Site, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-51830 (1975); K. R. Peterson, Diffusion Climatology for Hypothetical Accidents in Area 410 of the Nevada Test Site, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-52074 (1976); J. R. McDonald, J. E. Minor, and K. C. Mehta, Development of a Design Basis Tornado and Structural Design Criteria for the Nevada Test Site, Nevada, Lawrence Livermore Laboratory, Livermore, Rept. UCRL-13668 (1975); A. E. Stevenson, Impact Tests of Wind-Borne Wooden Missiles, Sandia Laboratories, Tonopah, Rept. SAND 76-0407 (1976); and Hydrology of the 410 Area (Area 27) at the Nevada Test Site.

  5. Documental analysis of Brazilian academic production about evolution teaching (1990-2010: characterization and proposals

    Directory of Open Access Journals (Sweden)

    Caio Samuel Franciscati da Silva

    2013-08-01

    The quantitative and qualitative growth of research in science teaching imposes the need for periodic mapping of the scientific production on the subject, with a view to identifying its characteristics and tendencies. In this context, "state of the art" studies, given their survey character, constitute a mode of inquiry that allows us to outline historical scenes for a given area (or subarea) of knowledge. In this light, this work aims to outline the panorama of Brazilian academic production, represented by dissertations and theses on evolution teaching, between 1990 and 2010. The documents subject to analysis were retrieved from three online databases, and their selection was based on the reading of titles, abstracts and keywords with a view to identifying the dissertations and theses that truly approached evolution teaching. The results show a predominance of dissertations relative to the number of theses, and a concentration of the academic production on evolution teaching in the Brazilian south-east region, especially in São Paulo state. Among the research trends, we verified the prevalence of investigations related to the previous conceptions of students and teachers (at all teaching levels) and to teacher training.

  6. LEARNING STYLES BASED ADAPTIVE INTELLIGENT TUTORING SYSTEMS: DOCUMENT ANALYSIS OF ARTICLES PUBLISHED BETWEEN 2001 AND 2016

    Directory of Open Access Journals (Sweden)

    Amit Kumar

    2017-12-01

    Implementing instructional interventions to accommodate learner differences has received considerable attention. Among these individual difference variables, the empirical evidence regarding the pedagogical benefit of learning styles has been questioned, yet research on the issue continues. Recent developments in web-based implementations have led researchers to re-examine learning styles in adaptive tutoring systems. Adaptivity in intelligent tutoring systems is strongly influenced by the learning style of a learner. This study involved extensive document analysis of adaptive tutoring systems based on learning styles. Seventy-eight studies in the literature from 2001 to 2016 were collected and classified under selected parameters such as main focus, purpose, research type, methods, types and levels of participants, field/area of application, learner modelling, data-gathering tools used and research findings. The current studies reveal that the majority of the studies defined a framework or architecture of an adaptive intelligent tutoring system (AITS), while others focused on the impact of AITS on learner satisfaction and academic outcomes. Current trends, gaps in the literature and implications were discussed.

  7. A study on family-school cooperation based on an analysis of school documentation

    Directory of Open Access Journals (Sweden)

    Polovina Nada

    2007-01-01

    Family-school cooperation is a very complex process that can be studied at different levels in a number of different ways. This study covered only some aspects of cooperation between parents and teachers, based on the school documentation of a Belgrade elementary school. The study covered analyses of 60 attendance registers pertaining to 60 classes with 1289 students from Grade 1 through Grade 8 during one academic year. The units of analysis included parents' attendance at PTA meetings and individual meetings between parents and teachers. In addition to the frequency of parents' visits to school, the relationship between such registered visits and overall academic performance, grades in conduct, and excused and unexcused absence from classes was also considered. The research findings indicated interference between developmental factors (attitude changes in the parent-child relationship during growing-up) and parents' informal "theory of critical grades", i.e. transitional processes in schooling. The findings confirmed that parents' individual visits to school were mainly meant to offer an excuse for the student's absence from school, while attendance at PTA meetings was linked to poor grades in conduct and missed classes (both excused and unexcused). The findings also showed that parents pursued visiting strategies that were pragmatic, less time-consuming and less emotionally draining. The closing part refers to discussions on the practical use of the study and possible further research.

  8. Analysis of Informed Consent Document Utilization in a Minimal-Risk Genetic Study

    Science.gov (United States)

    Desch, Karl; Li, Jun; Kim, Scott; Laventhal, Naomi; Metzger, Kristen; Siemieniak, David; Ginsburg, David

    2012-01-01

    Background The signed informed consent document certifies that the process of informed consent has taken place and provides research participants with comprehensive information about their role in the study. Despite efforts to optimize the informed consent document, only limited data are available about the actual use of consent documents by participants in biomedical research. Objective To examine the use of online consent documents in a minimal-risk genetic study. Design Prospective sibling cohort enrolled as part of a genetic study of hematologic and common human traits. Setting University of Michigan Campus, Ann Arbor, Michigan. Participants Volunteer sample of healthy persons with 1 or more eligible siblings aged 14 to 35 years. Enrollment was through targeted e-mail to student lists. A total of 1209 persons completed the study. Measurements Time taken by participants to review a 2833-word online consent document before indicating consent and identification of a masked hyperlink near the end of the document. Results The minimum predicted reading time was 566 seconds. The median time to consent was 53 seconds. A total of 23% of participants consented within 10 seconds, and 93% of participants consented in less than the minimum predicted reading time. A total of 2.5% of participants identified the masked hyperlink. Limitation The online consent process was not observed directly by study investigators, and some participants may have viewed the consent document more than once. Conclusion Few research participants thoroughly read the consent document before agreeing to participate in this genetic study. These data suggest that current informed consent documents, particularly for low-risk studies, may no longer serve the intended purpose of protecting human participants, and the role of these documents should be reassessed. Primary Funding Source National Institutes of Health. PMID:21893624

  9. Analysis of informed consent document utilization in a minimal-risk genetic study.

    Science.gov (United States)

    Desch, Karl; Li, Jun; Kim, Scott; Laventhal, Naomi; Metzger, Kristen; Siemieniak, David; Ginsburg, David

    2011-09-06

    The signed informed consent document certifies that the process of informed consent has taken place and provides research participants with comprehensive information about their role in the study. Despite efforts to optimize the informed consent document, only limited data are available about the actual use of consent documents by participants in biomedical research. To examine the use of online consent documents in a minimal-risk genetic study. Prospective sibling cohort enrolled as part of a genetic study of hematologic and common human traits. University of Michigan Campus, Ann Arbor, Michigan. Volunteer sample of healthy persons with 1 or more eligible siblings aged 14 to 35 years. Enrollment was through targeted e-mail to student lists. A total of 1209 persons completed the study. Time taken by participants to review a 2833-word online consent document before indicating consent and identification of a masked hyperlink near the end of the document. The minimum predicted reading time was 566 seconds. The median time to consent was 53 seconds. A total of 23% of participants consented within 10 seconds, and 93% of participants consented in less than the minimum predicted reading time. A total of 2.5% of participants identified the masked hyperlink. The online consent process was not observed directly by study investigators, and some participants may have viewed the consent document more than once. Few research participants thoroughly read the consent document before agreeing to participate in this genetic study. These data suggest that current informed consent documents, particularly for low-risk studies, may no longer serve the intended purpose of protecting human participants, and the role of these documents should be reassessed. National Institutes of Health.

  10. Pain related inflammation analysis using infrared images

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, an inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms were pre-processed, and areas of interest were extracted for further processing. The investigation of the spread of inflammation was performed along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain diseases; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion reflects that, in most of the cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
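
    A minimal illustration of the K-means step named in objective ii); the tiny one-dimensional implementation and the three-cluster choice below are assumptions for the sketch, not the authors' pipeline:

      import numpy as np

      def kmeans_1d(values, k=3, iters=50):
          """Cluster pixel temperatures into k groups (toy K-means)."""
          centers = np.quantile(values, np.linspace(0.1, 0.9, k))
          for _ in range(iters):
              labels = np.argmin(np.abs(values[:, None] - centers[None, :]),
                                 axis=1)
              centers = np.array([values[labels == j].mean()
                                  if np.any(labels == j) else centers[j]
                                  for j in range(k)])
          return labels

      # temps: 2-D array of skin temperatures from a knee thermogram.
      # labels = kmeans_1d(temps.ravel()).reshape(temps.shape)
      # The hottest cluster then approximates the inflamed region.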

  11. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    Energy Technology Data Exchange (ETDEWEB)

    1988-12-15

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  12. Semiautomatic digital imaging system for cytogenetic analysis

    International Nuclear Information System (INIS)

    Chaubey, R.C.; Chauhan, P.C.; Bannur, S.V.; Kulgod, S.V.; Chadda, V.K.; Nigam, R.K.

    1999-08-01

    The paper describes a digital image processing system, developed indigenously at BARC, for size measurement of microscopic biological objects such as cells, nuclei and micronuclei in mouse bone marrow; cytochalasin-B blocked human lymphocytes in vitro; and numerical counting and karyotyping of metaphase chromosomes of human lymphocytes. Errors in karyotyping of chromosomes by the imaging system may creep in owing to the lack of a well-defined centromere position or extensive bending of chromosomes, both of which may result from poor-quality preparations. Good metaphase preparations are therefore mandatory for precise and accurate analysis by the system. Additional new morphological parameters for each chromosome have to be incorporated to improve the accuracy of karyotyping. Although the experienced cytogeneticist is the final judge, the system assists him/her in carrying out the analysis much faster than manual scoring. Further experimental studies are in progress to validate the different software packages developed for the various cytogenetic applications. (author)

  13. Image Processing and Analysis in Geotechnical Investigation

    Czech Academy of Sciences Publication Activity Database

    Ščučka, Jiří; Martinec, Petr; Šňupárek, Richard; Veselý, V.

    2006-01-01

    Roč. 21, 3-4 (2006), s. 1-6 ISSN 0886-7798. [AITES-ITA 2006 World Tunnel Congres and ITA General Assembly /32./. Seoul, 22.04.2006-27.04.2006] Institutional research plan: CEZ:AV0Z30860518 Keywords : underground working face * digital photography * image analysis Subject RIV: DB - Geology ; Mineralogy Impact factor: 0.278, year: 2006

  14. Promoting Entrepreneurship in Higher Education: Analysis of European Union Documents and Lithuanian Case Study

    Directory of Open Access Journals (Sweden)

    Viktorija Stokaitė

    2013-01-01

    The Chairman of the European Commission, J.M. Barroso, names the creation of an innovative, stable and integrated economy as the main “Europe 2020” strategic target for the coming ten years. Promoting communication and synergy between higher education and business is a priority for the EU and its member states, so that employment, productivity and social cohesion can continue to grow. Although entrepreneurship as a phenomenon has been analysed from many perspectives, research on entrepreneurship and its stimulation in higher education (through collaboration between higher education and business) is not yet deep enough. EU investment in youth is planned to grow considerably in 2014-2020 compared with other main parts of the budget. The analysis of European Union documents and a Lithuanian case study was chosen deliberately, given the added value that entrepreneurship creates for European development. Self-realization, creativity, initiative, motivation, risk-taking, planning and reaching personal goals are the main components of entrepreneurship. Developing these skills in higher education is becoming very important because “the advantage of competitiveness is determined by a country's social education, therefore the effective use of human resources is the most important element in seeking stable economic and social well-being.” A review of EU and Lithuanian national documents and of the scientific literature on entrepreneurship in higher education identifies the current position of entrepreneurship in education. On the basis of this analysis, the level at which entrepreneurship is valued was determined, and its promotion in the EU and Lithuania was critically evaluated. In October 2011 the committee of the EU created a new work

  15. [Computerized image analysis applied to urology research].

    Science.gov (United States)

    Urrutia Avisrror, M

    1994-05-01

    Diagnosis with the aid of imaging techniques in urology has developed dramatically over the last few years as a result of state-of-the-art technology that has added digital angiography to the latest generation of ultrasound apparatus. Computerized axial tomography and nuclear magnetic resonance offer diagnostic possibilities that only a decade ago were not available for routine use. Each of these examination procedures has its own limits of sensitivity and specificity, which vary as a function of the pathoanatomical characteristics of the condition to be explored, although none yet reaches absolute values. With ultrasound, CAT and NMR, identification of the various diseases relies on the analysis of densities, with a significant degree of subjectivity from the examiner entering the diagnostic judgement. The logical evolution of these techniques is to eliminate this subjective component and translate the features which characterize each disease into quantifiable parameters, a challenge made feasible by computerized analysis. Thanks to technological advances in the field of microcomputers and the decreased cost of the equipment, it is currently possible for any clinical investigator with average resources to use the most sophisticated image analysis techniques for post-processing of the images obtained, opening up in practical investigation a pathway that just a few years ago was exclusive to certain organizations due to the high cost involved.

  16. Novel block segmentation and processing for Chinese-English document

    Science.gov (United States)

    Chien, Bing-Shan; Jeng, Bor-Shenn; Sun, San-Wei; Chang, Gan-How; Shyu, Keh-Hwa; Shih, Chun-Hsi

    1991-11-01

    The block segmentation and block classification of digitized printed documents into regions of text, graphics, tables, and images are very important in automatic document analysis and understanding. Conventionally, the constrained run-length algorithm (CRLA) has been proposed to segment digitized documents; however, it is space-consuming and time-consuming. The CRLA method must define certain constraint parameters, so it cannot proceed automatically, and its performance may degrade significantly with improper parameters. This paper proposes an efficient and effective method for document analysis, the sequence connected segmentation and mapping matrix cell algorithm (SCSMMC). This method can analyze both simple and complex documents automatically, and it does not require any constraint parameters to be defined. The method needs only a single reading of the document image, and segmentation, classification, labeling, and character segmentation proceed at the same time. The proposed document analysis method may also be combined with an optical character recognizer to form an adaptive document understanding system.
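
    For contrast with SCSMMC, here is a hypothetical minimal sketch (not from the paper) of the classical run-length smearing used by CRLA: background runs shorter than a threshold C are filled with ink, merging characters into solid blocks. The threshold C is exactly the kind of constraint parameter whose hand-tuning the paper criticizes.

    ```python
    # Hedged sketch of horizontal constrained run-length smearing (CRLA/RLSA).
    import numpy as np

    def rlsa_horizontal(binary: np.ndarray, c: int) -> np.ndarray:
        """binary: 2-D array, 1 = ink, 0 = background; c = run threshold."""
        out = binary.copy()
        for row in out:                       # each row is a view into `out`
            run_start = None
            for j, v in enumerate(row):
                if v == 0 and run_start is None:
                    run_start = j             # a background run begins
                elif v == 1 and run_start is not None:
                    if j - run_start <= c:    # short background run: fill it
                        row[run_start:j] = 1
                    run_start = None
        return out

    page = np.random.randint(0, 2, size=(64, 64))   # stand-in document image
    blocks = rlsa_horizontal(page, c=5)
    ```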

  17. Three Years of Unmediated Document Delivery: An Analysis and Consideration of Collection Development Priorities.

    Science.gov (United States)

    Chan, Emily K; Mune, Christina; Wang, YiPing; Kendall, Susan L

    2016-01-01

    Like most academic libraries, San José State University Library is struggling to meet users' rising expectations for immediate information within the financial confines of a flat budget. To address acquisition of nonsubscribed article content, particularly outside of business hours, San José State University Library implemented Copyright Clearance Center's Get It Now, a document delivery service. Three academic years of analyzed data, which involves more than 10,000 requests, and the subsequent collection development actions taken by the library will be discussed. The value and challenges of patron-driven, unmediated document delivery services in conjunction with traditional document delivery services will be considered.

  18. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In this focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  19. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Many electron microscopes, although they produce digital images, are not equipped with a supporting unit to process and analyse image data quantitatively. Generally the analysis of an image has to be made visually and measurements are carried out manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for periodic texture and structure analysis through the application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and analysis of microscopic images.
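
    The Fourier step described above admits a compact illustration. This is a hedged sketch, not the paper's code: the 2-D power spectrum of a micrograph exposes periodic structure, and the position of its dominant peak gives the spatial frequency and orientation. The random input array is a stand-in for a real micrograph.

    ```python
    # Hedged sketch: orientation of periodic texture via the 2-D power spectrum.
    import numpy as np

    micrograph = np.random.rand(256, 256)       # stand-in for a microscope image

    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(micrograph))) ** 2
    spectrum[128, 128] = 0                      # suppress the DC component at center

    # Dominant spatial frequency and its orientation relative to the image axes.
    peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    dy, dx = peak[0] - 128, peak[1] - 128
    orientation = np.degrees(np.arctan2(dy, dx))
    print(f"dominant periodic direction: {orientation:.1f} degrees")
    ```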

  20. Seismic analysis of the Nuclear Fuel Service Reprocessing Plant at West Valley, New York: documentation

    International Nuclear Information System (INIS)

    Murray, R.C.; Nelson, T.A.; Davito, A.M.

    1977-01-01

    This material was generated as part of a seismic case review of the NFS Reprocessing Plant. This study is documented in UCRL-52266. The material is divided into two parts: mathematical model information, and ultimate load calculations and comparisons

  1. What makes papers visible on social media? An analysis of various document characteristics

    OpenAIRE

    Zahedi, Zohreh; Costas, Rodrigo; Larivière, Vincent; Haustein, Stefanie

    2017-01-01

    In this study we have investigated the relationship between different document characteristics and the number of Mendeley readership counts, tweets, Facebook posts, mentions in blogs and mainstream media for 1.3 million papers published in journals covered by the Web of Science (WoS). It aims to demonstrate how factors affecting various social media-based indicators differ from those influencing citations, and which document types are more popular across different platforms. Our results highlight the heterogeneous nature of altmetrics, which encompasses different types of uses and user groups engaging with research on social media.

  2. Corporate ethical codes as strategic documents: An analysis of success and failure

    OpenAIRE

    Stevens, Betsy

    2009-01-01

    Ethical codes state the major philosophical principles and values in organizations and function as policy documents which define the responsibilities of organizations to stakeholders. They spell out the conduct expected of employees and articulate the acceptable ethical parameters of behavior in the organization. Most large US and multinational firms today have a code. If utilized effectively and embraced, codes can be key strategic documents in organizations for moderating employee behavior.

  3. Analysis of Pregerminated Barley Using Hyperspectral Image Analysis

    DEFF Research Database (Denmark)

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger

    2011-01-01

    This study applies a near-infrared hyperspectral imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause for a single kernel’s lack of germination and is unable to identify dormancy, kernel damage etc. The analysis...

  4. Documented Safety Analysis Addendum for the Neutron Radiography Reactor Facility Core Conversion

    Energy Technology Data Exchange (ETDEWEB)

    Boyd D. Christensen

    2009-05-01

    The Neutron Radiography Reactor Facility (NRAD) is a Training, Research, Isotope Production, General Atomics (TRIGA) reactor which was installed in the Idaho National Laboratory (INL) Hot Fuels Examination Facility (HFEF) at the Materials and Fuels Complex (MFC) in the mid-1970s. The facility provides researchers the capability to examine both irradiated and non-irradiated materials in support of reactor fuel and components programs through non-destructive neutron radiography examination. The facility has been used in the past as one facet of a suite of reactor fuels and component examination facilities available to researchers at the INL and throughout the DOE complex. The facility has also served various commercial research activities in addition to the DOE research and development support. The reactor was initially constructed using Fuel Lifetime Improvement Program (FLIP)-type highly enriched uranium (HEU) fuel obtained from the dismantled Puerto Rico Nuclear Center (PRNC) reactor. In accordance with international non-proliferation agreements, the NRAD core will be converted to a low enriched uranium (LEU) fuel and will continue to utilize the PRNC control rods, control rod drives, startup source, and instrument console as were previously used with the HEU core. The existing NRAD Safety Analysis Report (SAR) was created and maintained in the preferred format of the day, combining sections of both DOE-STD-3009 and Nuclear Regulatory Commission Regulatory Guide 1.70. An addendum was developed to cover refueling and reactor operation with the LEU core. This addendum follows the existing SAR format, combining required formats from both the DOE and the NRC. This paper discusses the project to successfully write a compliant and approved addendum to the existing safety basis documents.

  5. The Intersectoral Collaboration Document for Cancer Risk Factors Reduction: Method and Stakeholder Analysis

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2016-03-01

    Background and Objective: Cancers are one of the most important public health issues and the third leading cause of mortality in Iran, after cardiovascular diseases and injuries. The most common cancers reported in recent years have been skin, stomach, breast, colon, bladder, leukemia, and esophagus, respectively. Control of cancer, as one of the three main health system priorities of Iran, needs a specific roadmap and a clear definition of tasks for the organizations involved. This study provides a stakeholder analysis, determining the roles of the Ministry of Health and Medical Education as the custodian of national health and the duties of other beneficiary organizations in reducing cancer risk, with a scientific approach and a systematic methodology. Materials and Methods: This health system research project was performed with the participation of the Social Determinants of Health Research Center of Shahid Beheshti University of Medical Sciences, the Office of Non-Communicable Diseases of the Ministry of Health and Medical Education, and other stakeholders in 2013. First, a strategic committee was established and the stakeholders were identified and analyzed. Then quantitative data were collected by searching national databases for the incidence, prevalence, and burden of all types of cancers. Finally, with a qualitative approach, a systematic review of studies, documents and reports was conducted, along with examination of the national strategic plans of Iran and other countries and experts' views on management of cancer risk factors. In practice, the roles and responsibilities of each stakeholder were analyzed. The risk factors were then identified and effective evidence-based interventions were determined for each cancer; finally, the role of the Ministry of Health was set as either responsible or co-worker, and the roles of the other organizations were separately clarified in each

  6. Too much information? A document analysis of sport safety resources from key organisations

    Science.gov (United States)

    Finch, Caroline F

    2016-01-01

    Objectives: The field of sport injury prevention has seen a marked increase in published research in recent years, with a concomitant proliferation of lay sport safety resources, such as policies, fact sheets and posters. The aim of this study was to catalogue and categorise the number, type and topic focus of sport safety resources from a representative set of key organisations. Design: Cataloguing and qualitative document analysis of resources available from the websites of six stakeholder organisations in Australia. Setting: This study was part of a larger investigation, the National Guidance for Australian Football Partnerships and Safety (NoGAPS) project. Participants: The NoGAPS study provided the context for a purposive sampling of six organisations involved in the promotion of safety in Australian football. These partners are recognised as being highly representative of organisations at national and state level with similar goals around sport safety promotion in Australia. Results: The catalogue comprised 284 resources. More of the practical and less prescriptive types of resources, such as fact sheets, than formal policies were found. Resources for the prevention of physical injuries were the predominant sport safety issue addressed, with risk management, environmental issues and social behaviours comprising the other categories. Duplication of resources for specific safety issues, within and across organisations, was found. Conclusions: People working within sport settings have access to a proliferation of resources, which creates a potential rivalry for the sourcing of injury prevention information. Important issues that are likely to influence the uptake of safety advice by the general sporting public include the sheer number of resources available, and the overlap and duplication of resources addressing the same issues. The existence of a large number of resources from reputable organisations does not mean that they are necessarily evidence based.

  7. Clinical decision support improves quality of telephone triage documentation - an analysis of triage documentation before and after computerized clinical decision support

    Science.gov (United States)

    2014-01-01

    Background: Clinical decision support (CDS) has been shown to be effective in improving medical safety and quality, but there is little information on how telephone triage benefits from CDS. The aim of our study was to compare triage documentation quality associated with the use of a clinical decision support tool, ExpertRN©. Methods: We examined 50 triage documents before and after a CDS tool was used in nursing triage. To control for the effects of CDS training we had an additional control group of triage documents created by nurses who were trained in the CDS tool, but who did not use it in selected notes. The CDS intervention cohort of triage notes was compared to both the pre-CDS notes and the CDS-trained (but not using CDS) cohort. Cohorts were compared using the documentation standards of the American Academy of Ambulatory Care Nursing (AAACN). We also compared triage note content (documentation of associated positive and negative features relating to the symptoms, self-care instructions, and warning signs to watch for), and documentation defects pertinent to triage safety. Results: Three of five AAACN documentation standards were significantly improved with CDS. There was a mean of 36.7 symptom features documented in triage notes for the CDS group but only 10.7 symptom features in the pre-CDS cohort, a statistically significant difference. CDS improved triage note documentation quality: CDS-aided triage notes had significantly more information about symptoms, warning signs and self-care. The changes in triage documentation appeared to be the result of the CDS alone and not due to any CDS training that came with the CDS intervention. Although this study shows that CDS can improve documentation, further study is needed to determine if it results in improved care. PMID:24645674

  8. Nursing image: an evolutionary concept analysis.

    Science.gov (United States)

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  9. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same. This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing. The presentation level is for the mathematical non-specialist. Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a level commensurate with their background.

  10. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
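
    A minimal sketch of the proposed preprocessing chain, assuming OpenCV (the abstract does not name a library, and the parameter values here are illustrative, not the paper's): non-local means denoising followed by contrast-limited adaptive histogram equalization (CLAHE), producing the input for the texture features.

    ```python
    # Hedged sketch: NLM denoising + CLAHE before texture analysis.
    import cv2

    frame = cv2.imread("endoscopy_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

    # Non-local means removes noise while preserving mucosal pattern edges.
    denoised = cv2.fastNlMeansDenoising(frame, None, h=10,
                                        templateWindowSize=7, searchWindowSize=21)

    # CLAHE boosts local contrast without amplifying halation globally.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)            # input for the texture features

    cv2.imwrite("preprocessed.png", enhanced)
    ```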

  11. Machine Learning Interface for Medical Image Analysis.

    Science.gov (United States)

    Zhang, Yi C; Kagen, Alexander C

    2017-10-01

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95 % CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95 % CI 0.947-1.00) and the mean specificity was 0.822 ± 0.207 (95 % CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
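
    The pipeline lends itself to a compact illustration. The following hedged sketch (not the authors' code) assumes pydicom for reading pixel data and TensorFlow 2.x Keras for a toy two-class model; file names, image size, labels and hyperparameters are placeholders.

    ```python
    # Hedged sketch: DICOM pixel data shaped into tensors for a small classifier.
    import numpy as np
    import pydicom
    import tensorflow as tf

    def dicom_to_array(path: str) -> np.ndarray:
        ds = pydicom.dcmread(path)
        img = ds.pixel_array.astype("float32")
        return img / img.max()                  # crude intensity normalization

    # Hypothetical dataset: one 2-D slice per subject, label 0 = normal, 1 = PD.
    x = np.stack([dicom_to_array(p) for p in ["scan1.dcm", "scan2.dcm"]])
    y = np.array([0, 1])

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=x.shape[1:]),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adagrad",          # one of the optimizers the study names
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5)
    ```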

  12. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    Science.gov (United States)

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative MI research platform for the rapid

  13. Analysis of physiotherapy documentation of patients' records and discharge plans in a tertiary hospital

    Directory of Open Access Journals (Sweden)

    Olajide A Olawale

    2015-01-01

    Background and Objective: Accurate documentation promotes continuity of care and facilitates the dissemination of information concerning the patient to all members of the health care team. This study was designed to analyze the pattern of physiotherapy documentation of patients' records and discharge plans in a tertiary hospital in Lagos, Nigeria. Materials and Methods: A total of 503 case files from the four units of the Physiotherapy Department of the hospital were examined for accuracy of records. The D-Catch instrument was used to quantify the accuracy of record structure, admission data, physiotherapy examination, physiotherapy diagnosis, patients' prognoses based on the plan of care, physiotherapy intervention, progress and outcome evaluation, legibility, and discharge/discontinuation plan. Results: The “accuracy of legibility” domain had the highest accuracy score: 401 (79.72%) case files had an accuracy score of 4. The “accuracy of the discharge/discontinuation summary” domain had the lowest accuracy score: 502 (99.80%) case files had an accuracy score of 1. Conclusion: Documentation of the plan of care made in the hospital during the period of this study did not fully conform to the guidelines of the World Confederation for Physical Therapy (WCPT). The accuracy of physiotherapy documentation needs to be improved in order to promote optimal continuity of care, improve efficiency and quality of care, and recognize patients' needs. Implementation and use of electronically produced documentation might help physiotherapists to organize their notes more accurately.

  14. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis in the Centre. In image processing, one resorts either to altering grey-level values so as to enhance features in the image, or to transform-domain operations for restoration or filtering. Typical transform-domain operations such as the Karhunen-Loeve transform are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, such as area, perimeter, projections etc. In short, in image processing both the input and the output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
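
    The distinction drawn above (image in, numbers out) is easy to demonstrate. A hedged sketch assuming scikit-image, which is not mentioned in the report: a grey-level image is thresholded and segmented, and each object's area and perimeter come out as plain numbers.

    ```python
    # Hedged sketch: image analysis producing numbers (area, perimeter) from an image.
    import numpy as np
    from skimage import measure

    image = np.zeros((100, 100))
    image[20:50, 20:60] = 1                     # a synthetic bright object

    labels = measure.label(image > 0.5)         # segment thresholded objects
    for region in measure.regionprops(labels):
        print(f"area={region.area}, perimeter={region.perimeter:.1f}")
    ```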

  15. Attenuated total reflectance Fourier transform infrared spectroscopy analysis of red seal inks on questioned document.

    Science.gov (United States)

    Nam, Yun Sik; Park, Jin Sook; Kim, Nak-Kyoon; Lee, Yeonhee; Lee, Kang-Bong

    2014-07-01

    Seals are traditionally used in Far East Asia to stamp an impression on a document in place of a signature. In this study, an accuser claimed that a personal contract regarding mining development rights acquired by a defendant had devolved to the accuser, because the defendant stamped the devolvement contract in the presence of the accuser and a witness. The accuser further stated that the seal ink stamped on the devolvement contract was the same as that stamped on the development rights application document. To verify this, the seals used in the two documents were analyzed using micro-attenuated total reflectance Fourier transform infrared spectroscopy, and their infrared spectra were compared. The findings revealed that the seal inks originated from different manufacturers. Thus, the accuser's claim on the existence of a devolvement contract was proved to be false. © 2014 American Academy of Forensic Sciences.

  16. Market Analysis and Consumer Impacts Source Document. Part I. The Motor Vehicle Market in the Late 1970's

    Science.gov (United States)

    1980-12-01

    The source document on motor vehicle market analysis and consumer impact consists of three parts. Part I is an integrated overview of the motor vehicle market in the late 1970's, with sections on the structure of the market, motor vehicle trends, con...

  17. "I Like to Plan Events": A Document Analysis of Essays Written by Applicants to a Public Relations Program

    Science.gov (United States)

    Taylor, Ronald E.

    2016-01-01

    A document analysis of 249 essays written during a 5-year period by applicants to a public relations program at a major state university in the southeast suggests that there are enduring reasons why students choose to major in public relations. Public relations is described as a major that allows for and encourages creative expression and that…

  18. Ethics, Power, Internationalisation and the Postcolonial: A Foucauldian Discourse Analysis of Policy Documents in Two Scottish Universities

    Science.gov (United States)

    Guion Akdag, Emma; Swanson, Dalene M.

    2018-01-01

    This paper provides a critical discussion of internationalisation in Higher Education (HE), and exemplifies a process of uncovering the investments in power and ideology through the partial analysis of four strategic internationalisation documents at two Scottish Higher Education institutions, as part of an ongoing international study into the…

  19. Fernando Pessoa and Aleister Crowley: new discoveries and a new analysis of the documents in the Gerald Yorke Collection

    NARCIS (Netherlands)

    Pasi, M.; Ferrari, P.

    2012-01-01

    The documents concerning the relationship between Fernando Pessoa and Aleister Crowley preserved in the Yorke Collection at the Warburg Institute (London) have been known for some time. However, recent new findings have prompted a new analysis of the dossier. The purpose of this article is to have a

  20. What makes papers visible on social media? An analysis of various document characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Zahedi, Z.; Costas, R.; Lariviere, V.; Haustein, S.

    2016-07-01

    In this study we have investigated the relationship between different document characteristics and the number of Mendeley readership counts, tweets, Facebook posts, mentions in blogs and mainstream media for 1.3 million papers published in journals covered by the Web of Science (WoS). It aims to demonstrate how factors affecting various social media-based indicators differ from those influencing citations, and which document types are more popular across different platforms. Our results highlight the heterogeneous nature of altmetrics, which encompasses different types of uses and user groups engaging with research on social media. (Author)

  1. Neutron structure analysis using neutron imaging plate

    International Nuclear Information System (INIS)

    Karasawa, Yuko; Minezaki, Yoshiaki; Niimura, Nobuo

    1997-01-01

    Neutrons are complementary to X-rays and indispensable for structure analysis. However, because of limited neutron intensities, neutron analysis has not been as common as X-ray analysis. In order to overcome the intensity problem, a neutron imaging plate (NIP) has been successfully developed. The NIP has opened the door to neutron structure biology, in which all the hydrogen atoms and bound water molecules of a protein can be determined, and has contributed to the development of other fields such as neutron powder diffraction and neutron radiography. (author)

  2. Integrated system for automated financial document processing

    Science.gov (United States)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
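
    The blackboard architecture named above can be sketched schematically. This is a hypothetical toy, not the authors' system: independent recognition engines post candidate readings of a field, with confidences, to a shared blackboard, and the best-supported hypothesis wins. Engine names and confidence values are made up.

    ```python
    # Hedged sketch of the blackboard pattern for combining recognition engines.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        source: str          # which knowledge source posted this reading
        value: str           # the candidate field value (e.g. courtesy amount)
        confidence: float

    class Blackboard:
        def __init__(self) -> None:
            self.hypotheses: list[Hypothesis] = []

        def post(self, h: Hypothesis) -> None:
            # Knowledge sources contribute incrementally and opportunistically.
            self.hypotheses.append(h)

        def best(self) -> Hypothesis:
            return max(self.hypotheses, key=lambda h: h.confidence)

    board = Blackboard()
    board.post(Hypothesis("courtesy_amount_engine_A", "152.40", 0.91))
    board.post(Hypothesis("courtesy_amount_engine_B", "182.40", 0.63))
    print(board.best().value)                   # -> "152.40"
    ```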

  3. Image reconstruction from Pulsed Fast Neutron Analysis

    International Nuclear Information System (INIS)

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-01-01

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map or image of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition system of coordinates to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.
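
    One of the correction steps mentioned above can be shown in miniature. A hedged illustration in which the coefficient and distance are invented, not taken from the paper: observed counts from a voxel are scaled up for gamma-ray attenuation along the path to the detector.

    ```python
    # Hedged sketch: exponential attenuation correction of voxel gamma counts,
    # counts_true ≈ counts_obs * exp(mu * path_length).
    import numpy as np

    counts_obs = 480.0      # net counts in the element's characteristic peak
    mu = 0.006              # effective attenuation coefficient, 1/mm (assumed)
    path_length = 350.0     # cargo traversed between voxel and detector, mm (assumed)

    counts_corrected = counts_obs * np.exp(mu * path_length)
    print(f"attenuation-corrected counts: {counts_corrected:.0f}")
    ```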

  4. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    Science.gov (United States)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session. Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or by statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers the advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
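
    The unmixing step can be illustrated with a toy example. This sketch uses nonnegative least squares (SciPy's nnls) as a simpler stand-in for the Bayesian Positive Source Separation named above; the spectral library and the superpixel spectrum are synthetic.

    ```python
    # Hedged sketch: a superpixel's mean spectrum decomposed as a nonnegative
    # combination of library endmember spectra.
    import numpy as np
    from scipy.optimize import nnls

    n_bands, n_minerals = 50, 4
    library = np.random.rand(n_bands, n_minerals)    # synthetic endmember spectra

    true_abundances = np.array([0.6, 0.0, 0.3, 0.1])
    superpixel_spectrum = library @ true_abundances  # synthetic observation

    abundances, residual = nnls(library, superpixel_spectrum)
    print("estimated abundances:", np.round(abundances, 3))
    ```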

  5. Imaging radionuclide analysis apparatus and method

    International Nuclear Information System (INIS)

    Fleming, R.H.

    1993-01-01

    Imaging neutron activation analysis apparatus is described comprising: a vacuum chamber, means for positioning a sample in said vacuum chamber, means for irradiating the sample with neutrons, means for detecting the time when and the energy of gamma rays emitted from the sample and for establishing from the detected gamma ray energies the presence of certain elements in the sample, means for detecting when delayed beta-electrons are emitted from the sample and for imaging the location on the sample from which such delayed beta-electrons are emitted, means for determining time coincidence between detection of gamma rays by said gamma ray detecting means and detection of electrons by said delayed beta-electron detecting means and means for establishing the location of certain elements on the sample from determined coincidence of detected gamma rays and detected delayed beta-electrons and the established gamma ray energies and the image of the location on the sample from which such delayed beta-electrons are emitted

  6. Analysis of image plane's Illumination in Image-forming System

    International Nuclear Information System (INIS)

    Duan Lihua; Zeng Yan'an; Zhang Nanyangsheng; Wang Zhiguo; Yin Shiliang

    2011-01-01

    In the detection of optical radiation, the detection accuracy is affected to a large extent by the distribution of optical power across the detector's surface. In addition, in an image-forming system, the quality of the image is largely determined by the uniformity of the image's illumination distribution. However, in practical optical systems, affected by factors such as field of view, stray light and off-axis effects, the distribution of the image's illumination tends to be non-uniform, so it is necessary to discuss the image plane's illumination in image-forming systems. In order to analyze the characteristics of the image-forming system over the full range, formulas to calculate the illumination of the image plane have been derived on the basis of photometry. Moreover, the relationship between the horizontal offset of the light source and the illumination of the image has been discussed in detail. After that, the influence of key factors such as aperture angle, off-axis distance and horizontal offset on the illumination of the image has been examined. Through numerical simulation, theoretical curves for those key factors have been given. The results of the numerical simulation show that enlarging the diameter of the exit pupil is recommended to increase the illumination of the image. The angle of view plays a negative role in the illumination distribution of the image, that is, the uniformity of the illumination distribution can be enhanced by compressing the angle of view. Lastly, it is shown that a telecentric optical design is an effective way to improve the uniformity of the illumination distribution.
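
    The field-angle dependence described in the final sentences above is classically captured by the cosine-fourth law; stating it explicitly is an editorial addition consistent with, but not quoted from, the abstract. For an unvignetted system with on-axis image-plane illuminance \(E_0\), the illuminance at field angle \(\theta\) falls off as

    \[
      E(\theta) = E_0 \cos^4 \theta ,
    \]

    which is why compressing the angle of view (keeping \(\theta\) small) flattens the illumination distribution across the image plane.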

  7. Documenting the biodiversity of the Madrean Archipelago: An analysis of a virtual flora and fauna

    Science.gov (United States)

    Nicholas S. Deyo; Thomas R. Van Devender; Alex Smith; Edward. Gilbert

    2013-01-01

    The Madrean Archipelago Biodiversity Assessment (MABA) of Sky Island Alliance is an ambitious project to document the distributions of all species of animals and plants in the Madrean Archipelago, focusing particularly on northeastern Sonora and northwestern Chihuahua, Mexico. The information is made available through MABA’s online database (madrean.org). The sources...

  8. Internationalization Impact on PhD Training Policy in Russia: Insights from the Comparative Document Analysis

    Science.gov (United States)

    Chigisheva, Oksana; Soltovets, Elena; Bondarenko, Anna

    2017-01-01

    The relevance of the study is due to the need for an objective picture of the Russian third level tertiary education transformation driven by internationalization issues and global trends in education. The article provides an analytical comparative review of the official documents related to the main phases of education reform in Russia and…

  9. ANALYSIS OF TERRESTRIAL LASER SCANNING AND PHOTOGRAMMETRY DATA FOR DOCUMENTATION OF HISTORICAL ARTIFACTS

    Directory of Open Access Journals (Sweden)

    R. A. Kuçak

    2016-10-01

    Historical artifacts surviving from the past have been exposed to many destructive forces, both natural and man-made. For this reason, studies on the protection and documentation of cultural heritage, so that it can be passed on to the next generations, are accelerating day by day throughout the world. The preservation of historical artifacts using advanced 3D measurement technologies is becoming an efficient mapping solution. There are many methods for the documentation and restoration of historic structures. In addition to traditional methods such as simple hand measurement and tacheometry, terrestrial laser scanning is rapidly becoming one of the most commonly used techniques owing to its completeness, accuracy and speed. This study evaluates terrestrial laser scanning (TLS) technology and photogrammetry for documenting the facades of historical artifacts in a 3D environment. PhotoModeler software, developed by Eos Systems, was preferred for the photogrammetric method. A Leica HDS 6000 laser scanner, developed by Leica Geosystems, and Cyclone, the company's laser-data evaluation software, were preferred for the terrestrial laser scanning method. The results obtained with these software products are intended to contribute to studies on the documentation of cultural heritage.

  10. Identifying Areas of Potential Wetland Hydrology in Irrigated Croplands Using Aerial Image Interpretation and Analysis of Rainfall Normality

    Science.gov (United States)

    2016-06-01

    ... signatures that can be detected on aerial images, including crop stress, areas of a cropped field that appear to be drowned out, areas not ... with the majority of images collected during the dry season when cloudy days are infrequent and crops have been planted. As ... Developing composite images can further assist with documentation. Analysis of rainfall normality can further refine the process, providing context

  11. [A novel image processing and analysis system for medical images based on IDL language].

    Science.gov (United States)

    Tang, Min

    2009-08-01

    Medical image processing and analysis systems, which are of great value in medical research and clinical diagnosis, have been a focal field in recent years. Interactive Data Language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines, and has therefore become an ideal tool for the interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. A methodology is proposed for designing a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and that it has the advantages of wide applicability, friendly interaction, convenient extension and favorable portability.

  12. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework for an expert image analysis system is proposed on the basis of this study. In this scheme, chromosome classification can be carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system which uses conventional pattern recognition techniques. Results from the existing system can be used to bring in hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected.

  13. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  14. Searching for text documents

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Blanken, Henk; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    Many documents contain, besides text, also images, tables, and so on. This chapter concentrates on the text part only. Traditionally, systems handling text documents are called information storage and retrieval systems. Before the World-Wide Web emerged, such systems were almost exclusively used by

  15. Guidance Document - Provision of Outage Reserve Capacity for Molybdenum-99 Irradiation Services: Methodology and Economic Analysis

    International Nuclear Information System (INIS)

    Peykov, Pavel; Cameron, Ron; Westmacott, Chad

    2013-01-01

    In June 2011, the OECD Nuclear Energy Agency's (NEA) High-level Group on the Security of Supply of Medical Radioisotopes (HLG-MR) released its policy approach for ensuring a long-term secure supply of molybdenum-99 (99Mo) and its decay product technetium-99m (99mTc). This policy approach was developed after two years of extensive examination and analysis of the challenges facing the supply chain, and the provision of a reliable, secure supply of these important medical isotopes. The full policy approach can be found in the OECD/NEA report, The Supply of Medical Radioisotopes: The Path to Reliability (NEA, 2011). One of the key principles in the policy approach relates to the provision of outage reserve capacity (ORC) in the 99Mo/99mTc supply chain, as defined on page 7: 'Principle 2: Reserve capacity should be sourced and paid for by the supply chain. A common approach should be used to determine the amount of reserve capacity required'. This principle follows the findings of the OECD/NEA report, The Supply of Medical Radioisotopes: An Economic Study of the Molybdenum-99 Supply Chain (NEA, 2010), which clearly demonstrated the need for excess 99Mo production capacity, relative to demand, as some reactors may have to be shut down unexpectedly or for extended periods. The study also demonstrated that the pricing structure from reactors for 99Mo irradiation services prior to the 2009-10 supply shortage was not economically sustainable, including the pricing of ORC, with the cost being subsidised by host nations. These nations have indicated a move away from subsidising production, which often benefits foreign nations or foreign companies, and therefore pricing for irradiation services must recover the full cost of production to ensure economic sustainability and a long-term secure supply. Appropriate pricing would also encourage more efficient use of the product; reducing inefficient use of 99Mo/99mTc would reduce excess production and the associated

  16. Status of High Flux Isotope Reactor (HFIR) post-restart safety analysis and documentation upgrades

    International Nuclear Information System (INIS)

    Cook, D.H.; Radcliff, T.D.; Rothrock, R.B.; Schreiber, R.E.

    1990-01-01

    The High Flux Isotope Reactor (HFIR), an experimental reactor located at the Oak Ridge National Laboratory (ORNL) and operated for the US Department of Energy by Martin Marietta Energy Systems, was shut down in November, 1986 after the discovery of unexpected neutron embrittlement of the reactor vessel. The reactor was restarted in April, 1989, following an extensive review by DOE and ORNL of the HFIR design, safety, operation, maintenance and management, and the implementation of several upgrades to HFIR safety-related hardware, analyses, documents and procedures. This included establishing new operating conditions to provide added margin against pressure vessel failure, as well as the addition, or upgrading, of specific safety-related hardware. This paper summarizes the status of some of the follow-on (post-restart) activities which are currently in progress, and which will result in a comprehensive set of safety analyses and documentation for the HFIR, comparable with current practice in commercial nuclear power plants. 8 refs

  17. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  18. Electronic document management system analysis report and system plan for the Environmental Restoration Program

    International Nuclear Information System (INIS)

    Frappaolo, C.

    1995-09-01

    Lockheed Martin Energy Systems, Inc. (LMES) has established and maintains Document Management Centers (DMCs) to support Environmental Restoration Program (ER) activities undertaken at three Oak Ridge facilities: Oak Ridge National Laboratory, Oak Ridge K-25 Site, Oak Ridge Y-12 Plant; and two sister sites: Portsmouth Gaseous Diffusion Plant in Portsmouth, Ohio, and Paducah Gaseous Diffusion Plant in Paducah, Kentucky. The role of the DMCs is to receive, store, retrieve, and properly dispose of records. In an effort to make the DMCs run more efficiently and to more proactively manage the records' life cycles from cradle to grave, ER has decided to investigate ways in which Electronic Document Management System (EDMS) technologies can be used to redefine the DMCs and their related processes. Specific goals of this study are tightening control over the ER documents, establishing and enforcing record creation and retention procedures, speeding up access to information, and increasing the accessibility of information. A working pilot of the solution is desired within the next six months. Based on a series of interviews conducted with personnel from each of the DMCs, key management, and individuals representing related projects, it is recommended that ER utilize document management, full-text retrieval, and workflow technologies to improve and automate records management for the ER program. A phased approach to solution implementation is suggested starting with the deployment of an automated storage and retrieval system at Portsmouth. This should be followed with a roll out of the system to the other DMCs, the deployment of a workflow-enabled authoring system at Portsmouth, and a subsequent roll out of this authoring system to the other sites

  19. Clearing of psoriasis documented by laser Doppler perfusion imaging contrasts remaining elevation of dermal expression levels of CD31

    NARCIS (Netherlands)

    Hendriks, A.G.M.; Kerkhof, P.C.M. van de; Jonge, C.S. de; Lucas, M.; Steenbergen, W.; Seyger, M.M.B.

    2015-01-01

    BACKGROUND: Vascular modifications represent a key feature in psoriatic plaques. Previous research with Laser Doppler Perfusion Imaging (LDPI) revealed a remarkable heterogeneity in the cutaneous perfusion within homogeneous-appearing psoriatic lesions. Insights in the relation between perfusion

  20. Documents preparation and review

    International Nuclear Information System (INIS)

    1999-01-01

    The Ignalina Safety Analysis Group takes an active role in assisting the regulatory body VATESI in preparing various regulatory documents and in reviewing safety reports and other documentation submitted by the Ignalina NPP in the process of licensing Unit 1. The list of the main documents prepared and reviewed is presented.

  1. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

    This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projections of the images on the scanner table, the trajectories in real space are reconstructed.

  2. Direct identification of fungi using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    1999-01-01

    Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because of the subjectivity in the visual evaluation and quantification (if any) of such characters and the apparently large variability of the features. We present an image analysis approach for objective identification and classification of fungi. The approach is exemplified by several isolates of nine different species of the genus Penicillium, known to be very difficult to identify correctly. The fungi were incubated on YES and CYA for one week at 25 C (3-point inoculation) in 9 cm Petri dishes. The cultures are placed under a camera where a digital image of the front of the colonies is acquired under optimal illumination...

  3. MORPHOLOGICAL GRANULOMETRIC ANALYSIS OF SEDIMENT IMAGES

    Directory of Open Access Journals (Sweden)

    Yoganand Balagurunathan

    2011-05-01

    Sediments are routinely analyzed in terms of the sizing characteristics of the grains of which they are composed. Via sieving methods, the grains are separated and a weight-based size distribution constructed. Various moment parameters are computed from the size distribution and these serve as sediment characteristics. This paper examines the feasibility of a fully electronic granularity analysis using digital image processing. The study uses a random model of three-dimensional grains in conjunction with the morphological method of granulometric size distributions. The random model is constructed to simulate sand, silt, and clay particle distributions. Owing to the impossibility of perfectly sifting small grains so that they do not touch, the model is used in both disjoint and non-disjoint modes, and watershed segmentation is applied in the non-disjoint model. The image-based granulometric size distributions are transformed so that they take into account the necessity to view sediment fractions at different magnifications and in different frames. Gray-scale granulometric moments are then computed using both ordinary and reconstructive granulometries. The resulting moments are then compared to moments found from real grains in seven different sediments using standard weight-based size distributions.
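
    The core computation here, a morphological granulometry, is easy to prototype. The sketch below is a minimal Python illustration, not the paper's 3D random grain model or its reconstructive variant: it opens a grayscale image with disks of increasing radius and takes the normalized mass removed at each step as the size distribution (pattern spectrum). The synthetic two-population test image and all parameters are assumptions made for the demo.

```python
# Minimal opening-based granulometry; assumes bright grains on a dark field.
import numpy as np
from scipy import ndimage

def granulometry(image, max_radius=15):
    """Normalized pattern spectrum of `image` from openings with growing disks."""
    masses = []
    for r in range(max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disk = (x * x + y * y) <= r * r          # disk structuring element
        masses.append(ndimage.grey_opening(image, footprint=disk).sum())
    spectrum = -np.diff(np.array(masses, dtype=float))  # mass removed per step
    return spectrum / spectrum.sum()

# Synthetic demo: two grain populations (radii ~3 and ~8 pixels).
rng = np.random.default_rng(0)
img = np.zeros((256, 256))
yy, xx = np.ogrid[:256, :256]
for radius in (3, 8):
    for _ in range(20):
        cy, cx = rng.integers(20, 236, size=2)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1.0

density = granulometry(img)
print("size classes with most removed mass:", np.argsort(density)[-2:] + 1)
```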

  4. Vector processing enhancements for real-time image analysis

    International Nuclear Information System (INIS)

    Shoaf, S.

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
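
    The AltiVec routines from the paper are not public; as a stand-in, the sketch below makes the general point with a hypothetical beam-diagnostics step (background subtraction plus thresholding), comparing an explicit per-pixel loop against the same arithmetic expressed as whole-array operations.

```python
# Scalar loop vs. vectorized whole-array processing of one video frame.
import time
import numpy as np

rng = np.random.default_rng(1)
frame = rng.integers(0, 4096, (480, 640)).astype(np.int32)
background = np.full_like(frame, 100)
THRESH = 500

def scalar_clean(frame, background):
    out = np.zeros_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            v = frame[i, j] - background[i, j]
            out[i, j] = v if v > THRESH else 0
    return out

def vector_clean(frame, background):
    v = frame - background                # one array operation
    return np.where(v > THRESH, v, 0)     # branch-free selection

t0 = time.perf_counter(); a = scalar_clean(frame, background)
t1 = time.perf_counter(); b = vector_clean(frame, background)
t2 = time.perf_counter()
assert np.array_equal(a, b)
print(f"scalar: {t1 - t0:.3f} s   vectorized: {t2 - t1:.4f} s")
```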

  5. FOOD IMAGE ANALYSIS: SEGMENTATION, IDENTIFICATION AND WEIGHT ESTIMATION

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images taken at a meal. The food images are then analyzed to extract the nutrient content in the food. In this paper, we describe the image analysis tools to determine the regions where a particular food is located (image segmentation), identify the food type (feature classification) and estimate the weight of the food item (weight estimation). An image segmentation and classification system i...

  6. DOCUMENTING A COMPLEX MODERN HERITAGE BUILDING USING MULTI IMAGE CLOSE RANGE PHOTOGRAMMETRY AND 3D LASER SCANNED POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    M. L. Vianna Baptista

    2013-07-01

    Integrating different technologies and expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce fieldwork time, the two-person field survey team had to juggle, during three days, the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned with high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit for the subsequent restoration project.

  7. Quantitative MR Image Analysis for Brain Tumor.

    Science.gov (United States)

    Shboul, Zeina A; Reza, Sayed M S; Iftekharuddin, Khan M

    2018-01-01

    This paper presents an integrated quantitative MR image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection and multimodality abnormal brain tissue segmentation. We first derive a mathematical algorithm to compute a novel generalized multifractional Brownian motion (GmBm) texture feature. We then demonstrate the efficacy of multiple multiresolution texture features, including regular fractal dimension (FD) texture and stochastic textures such as multifractional Brownian motion (mBm) and GmBm, for robust tumor and other abnormal tissue segmentation in brain MRI. We evaluate these texture and associated intensity features to effectively delineate multiple abnormal tissues within and around the tumor core, as well as stroke lesions, using large-scale public and private datasets.

  8. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    Science.gov (United States)

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution-fitting procedure and compare several statistical tests, outlining their potential advantages and disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporate the underlying distribution, the sample size and the number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization.
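
    The authors' exact distribution-fitting and power procedures are not reproduced here; the sketch below is only a simulation-based illustration of the closing point, namely that power depends jointly on the number of samples and the number of images captured per sample. The effect size, variance components and two-sample t-test are illustrative assumptions, and image-level replicates are averaged within samples to avoid pseudo-replication.

```python
# Simulated power for hierarchical data: images nested within samples.
import numpy as np
from scipy import stats

def simulated_power(n_samples, n_images, effect=0.5,
                    sd_sample=1.0, sd_image=0.8, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # Sample-level random effects for control and treated groups.
        ctrl = rng.normal(0.0, sd_sample, n_samples)
        trt = rng.normal(effect, sd_sample, n_samples)
        # Average image-level replicates within each sample first.
        ctrl += rng.normal(0, sd_image, (n_samples, n_images)).mean(axis=1)
        trt += rng.normal(0, sd_image, (n_samples, n_images)).mean(axis=1)
        if stats.ttest_ind(ctrl, trt).pvalue < 0.05:
            hits += 1
    return hits / n_sim

for n_img in (1, 4, 16):
    print(f"{n_img:2d} images/sample -> power {simulated_power(8, n_img):.2f}")
```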

  9. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

    Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when adding biological substances, such as plant extracts ... on up to eight parameters indicated strong relationships, with R² up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images, and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  10. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year that visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  11. Forensic image analysis - CCTV distortion and artefacts.

    Science.gov (United States)

    Seckiner, Dilan; Mallett, Xanthé; Roux, Claude; Meuwly, Didier; Maynard, Philip

    2018-04-01

    As a result of the worldwide deployment of surveillance cameras, authorities have gained a powerful tool that captures footage of the activities of people in public areas. Surveillance cameras allow continuous monitoring of the area and allow footage to be obtained for later use if a criminal or other act of interest occurs. Following this, a forensic practitioner or expert witness can be required to analyse the footage of the Person of Interest. The examination ultimately aims at evaluating the strength of evidence at the source and activity levels. In this paper, both source and activity levels are inferred from the trace, obtained in the form of CCTV footage. The source level alludes to features observed within the anatomy and gait of an individual, whilst the activity level relates to the activity undertaken by the individual within the footage. The strength of evidence depends on the value of the information recorded, where the activity level is robust, yet the source level requires further development. It is therefore suggested that the camera and the associated distortions should be assessed first and foremost and, where possible, quantified, to determine the level of each type of distortion present within the footage. A review of 'forensic image analysis' is presented here. It outlines the image distortion types and details the limitations of differing surveillance camera systems. The aim is to highlight the various types of distortion present, particularly in surveillance footage, as well as to address gaps in the current literature in relation to the assessment of CCTV distortions in tandem with gait analysis. Future work will consider anatomical assessment from surveillance footage.

  12. The architectural plan and the moving image. Audio-visual media for the plan: to document, to present, to diffuse.

    Directory of Open Access Journals (Sweden)

    Maria Letizia Gagliardi

    2009-06-01

    Architecture needs to communicate and to be communicated; that is why, in every period, architects have used innovative tools to "make public" their work. From the conventions of architectural drawing to perspective, photography, cinema, computing and television, the communication of architecture has found in the dynamic image the right tool for representing the relation between space, time and the human being, a relation that implies contrasts and framings, that is, a succession of images. In this article we identify three different ways of telling architecture through moving images, three narrations that correspond to three different techniques of planning, shooting and post-production: the documentary, the simulation and the television program.

  13. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  14. Meaningful participation for children in the Dutch child protection system: A critical analysis of relevant provisions in policy documents.

    Science.gov (United States)

    Bouma, Helen; López López, Mónica; Knorth, Erik J; Grietens, Hans

    2018-05-01

    Policymakers are increasingly focusing on the participation of children in the child protection system (CPS). However, research shows that actual practice still needs to be improved. Embedding children's participation in legislation and policy documents is one important prerequisite for achieving meaningful participation in child protection practice. In this study, the participation of children in the Dutch CPS under the new Youth Act 2015 is critically analyzed. National legislation and policy documents were studied using a model of "meaningful participation" based on article 12 of the UNCRC. Results show that the idea of children's participation is deeply embedded in the current Dutch CPS. However, Dutch policy documents do not fully cover the three dimensions of what is considered to be meaningful participation for children: informing, hearing, and involving. Furthermore, children's participation differs among the organizations included in the child protection chain. A clear overall policy concerning the participation of children in the Dutch CPS is lacking. The conclusions of this critical analysis of policy documents and the framework of meaningful participation presented may provide a basis for the embedding of meaningful participation for children in child protection systems of other countries.

  15. PHOTOGRAMMETRIC ANALYSIS OF HISTORICAL IMAGE REPOSITORIES FOR VIRTUAL RECONSTRUCTION IN THE FIELD OF DIGITAL HUMANITIES

    Directory of Open Access Journals (Sweden)

    F. Maiwald

    2017-02-01

    Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here investigates (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for use in the humanities, urban research and the historical sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images, with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution, have to be considered. In addition, these photographs were not created specifically for documentation purposes, so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of the available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion (SfM) evaluation. Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.

  16. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted as 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  17. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD) are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine

  18. Analysis of compatibility of current Czech initial documentation in the area of technical assurance of nuclear safety with the requirements of the EUR document

    International Nuclear Information System (INIS)

    Zdebor, J.; Zdebor, R.; Kratochvil, L.

    2001-11-01

    The publication is structured as follows: Description of existing documentation. General requirements, goals, principles and design principles: Documents being compared; Method of comparison; Results and partial evaluation of comparison of requirements between EUR and Czech regulations (basic goals and safety philosophy; quantitative safety objectives; basic design requirements; extended design requirements; external and internal threats; technical requirements; site conditions); Summary of the comparison of safety requirements. Comparison of requirements for the systems: Requirements for the nuclear reactor unit systems; Barrier systems (fuel system; reactor cooling system; containment system); Remaining systems (control systems; protection systems; coolant makeup and purification system; residual heat removal system; emergency cooling system; power systems); Common technical requirements for systems (technical requirements for systems; internal and external events). (P.A.)

  19. Image quality analysis of digital mammographic equipments

    International Nuclear Information System (INIS)

    Mayo, P.; Pascual, A.; Verdu, G.; Rodenas, F.; Campayo, J.M.; Villaescusa, J.I.

    2006-01-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters that guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is the C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indexes for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the analysed image. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It is useful for comparing the performance of different radiographic image systems on phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different C.D.M.A.M. 3.4 contrast-detail phantom images, carrying out automatic detection of the contrast-detail combinations, and to establish a parameter that characterizes mammographic image quality in an objective way. This is useful for comparing images obtained on different digital mammographic units in order to study the performance of the equipment. (authors)

  20. Image quality analysis of digital mammographic equipments

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, P.; Pascual, A.; Verdu, G. [Valencia Univ. Politecnica, Chemical and Nuclear Engineering Dept. (Spain); Rodenas, F. [Valencia Univ. Politecnica, Applied Mathematical Dept. (Spain); Campayo, J.M. [Valencia Univ. Hospital Clinico, Servicio de Radiofisica y Proteccion Radiologica (Spain); Villaescusa, J.I. [Hospital Clinico La Fe, Servicio de Proteccion Radiologica, Valencia (Spain)

    2006-07-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters that guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is the C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indexes for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the analysed image. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It is useful for comparing the performance of different radiographic image systems on phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different C.D.M.A.M. 3.4 contrast-detail phantom images, carrying out automatic detection of the contrast-detail combinations, and to establish a parameter that characterizes mammographic image quality in an objective way. This is useful for comparing images obtained on different digital mammographic units in order to study the performance of the equipment. (authors)
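
    For readers unfamiliar with the I.Q.F., one common convention, and it is only one of several in use, so treat the exact formula as an assumption rather than these authors' definition, scores a contrast-detail reading as the sum over phantom columns of contrast (gold thickness) times the smallest detected diameter, sometimes reported in inverse form so that higher values mean better quality:

```python
# Hedged sketch of an Image Quality Figure from contrast-detail readings.
def iqf(readings):
    """readings: (gold_thickness_um, smallest_detected_diameter_mm) pairs."""
    return sum(c * d for c, d in readings)   # lower = better under this convention

def iqf_inv(readings, scale=100.0):
    return scale / iqf(readings)             # higher = better

# Illustrative (made-up) readings for four phantom columns.
readings = [(0.05, 2.00), (0.08, 1.25), (0.13, 0.80), (0.20, 0.50)]
print(f"IQF = {iqf(readings):.3f}, inverse IQF = {iqf_inv(readings):.1f}")
```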

  1. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.

  2. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C

    2007-01-01

    ADJUSTING PIXEL VALUES: Optimizing Contrast; Color Correction; Correcting Nonuniform Illumination; Geometric Transformations; Image Arithmetic. NEIGHBORHOOD OPERATIONS: Convolution; Other Neighborhood Operations; Statistical Operations. IMAGE PROCESSING IN THE FOURIER DOMAIN: The Fourier Transform; Removing Periodic Noise; Convolution and Correlation; Deconvolution; Other Transform Domains; Compression. BINARY IMAGES: Thresholding; Morphological Processing; Other Morphological Operations; Boolean Operations. MEASUREMENTS: Global Measurements; Feature Measurements; Classification. APPENDIX: Software References and Literature. INDEX.

  3. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...
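
    As a reference for the kind of computation involved, here is a minimal PCA-by-SVD sketch over a stack of RGB images; it assumes the images are already cropped to a common size and substitutes random arrays for the actual lesion data.

```python
# PCA of an image stack via SVD; eigenimages are the principal components.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((12, 64, 64, 3))         # stand-in for 12 RGB images

X = images.reshape(len(images), -1)          # one row per image
X -= X.mean(axis=0)                          # center

U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)

scores = X @ Vt[:2].T                        # per-image coordinates on PC1/PC2
eigenimages = Vt[:2].reshape(2, 64, 64, 3)   # components back in image shape
print("variance explained by PC1, PC2:", np.round(explained[:2], 3))
```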

  4. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    Owing to the number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...
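
    The original macro was written in ImageJ's macro language and is not reproduced; the Python sketch below only mimics the geometry of the procedure, rotating the image in fixed steps while measuring a bright band along an unchanged vertical line. The synthetic annulus stands in for the subcutaneous adipose tissue.

```python
# Rotate-and-measure along a fixed vertical line, as in the macro's design.
import numpy as np
from scipy import ndimage

SIZE = 201
yy, xx = np.ogrid[:SIZE, :SIZE]
r = np.hypot(yy - SIZE // 2, xx - SIZE // 2)
image = ((r > 60) & (r < 80)).astype(float)   # bright ring = "adipose" band

def band_thickness(img):
    """Pixels between outer and inner band boundaries on the upper midline."""
    profile = img[: SIZE // 2, SIZE // 2] > 0.5
    idx = np.flatnonzero(profile)
    return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)

thicknesses = [band_thickness(ndimage.rotate(image, a, reshape=False, order=1))
               for a in range(0, 360, 15)]
print("mean band thickness (px):", np.mean(thicknesses))   # ~19 for this ring
```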

  5. Development of knowledge models by linguistic analysis of lexical relationships in technical documents

    International Nuclear Information System (INIS)

    Seguela, Patrick

    2001-01-01

    This research thesis addresses the problem of knowledge acquisition and structuring from technical texts, and the use of this knowledge in the development of models. The author presents the Cameleon method, which aims at extracting binary lexical relationships from technical texts by identifying linguistic markers. The relevance of this method is assessed in the case of four different corpora: a written technical corpus, an oral technical corpus, a corpus of texts of instructions, and a corpus of academic texts. The author reports the development of a model of representation of knowledge of a specific field by using lexical relationships. The method is then applied to develop a model used in document search within a knowledge management system [fr]

  6. Why continuity of care needs computing--results of a quantitative document analysis.

    Science.gov (United States)

    Hübner, Ursula; Giehoff, Carsten

    2002-01-01

    With the increasing practice of early discharge from hospital, continuity of care is attracting new attention. Although communication gaps, including incomplete and delayed information, are well known, the vast majority of discharge summaries are still paper-based. The question of this study was whether the paper-based discharge form can be structured and complete enough to overcome the communication deficits. All 32 different types of nursing discharge forms analysed suffered from a compromise between structure, content and space. None of the forms allowed for a complete documentation of the nursing process. Only a minority of forms offered the complete set of activities of daily living to be checked. Most of the forms were designed by only one sector, without any provisions for the different terminologies and needs across the sectors. It is concluded that these shortcomings can only be overcome by a dynamic and flexible instrument, ideally by an electronic discharge summary.

  7. Image pattern recognition supporting interactive analysis and graphical visualization

    Science.gov (United States)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  8. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    Science.gov (United States)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

    The Singosari temple is one of the cultural heritage buildings in East Java, Indonesia, built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m and RMSEz = 0.049 m, which represent a 0.071 m displacement in 3D space. In addition, the mean difference of length measurements of the object is 0.057 m. With this accuracy, the method can be used to map the site at up to 1:237 scale. Although the accuracy level is still in centimeters, combined aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
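
    As a quick check of the reported figures, the 0.071 m displacement is just the root-sum-square of the three per-axis RMSEs:

```python
import math

rmse_x, rmse_y, rmse_z = 0.041, 0.031, 0.049       # metres, from the GCP check
print(f"3D RMSE = {math.sqrt(rmse_x**2 + rmse_y**2 + rmse_z**2):.3f} m")  # 0.071
```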

  9. Studies on computer analysis for radioisotope images

    International Nuclear Information System (INIS)

    Takizawa, Masaomi

    1977-01-01

    A hybrid-type image filing and processing system with analog display was devised by the author for filing and processing radioisotope images. The system has the following functions: ten thousand images can be stored on a 60-foot video tape recorder (VTR) tape; the maximum access time for an image on the VTR tape is within 15 sec; image display is enabled by the analog memory, which provides more than 15 gray levels. By using the analog memories, effective image processing can be done by a small computer. Many signal sources can be input into the hybrid system. This system can be applied in many fields, for both routine work and multi-purpose radioisotope image processing. (auth.)

  10. High Resolution Proteomic Analysis of the Cervical Cancer Cell Lines Secretome Documents Deregulation of Multiple Proteases.

    Science.gov (United States)

    Pappa, Kalliopi I; Kontostathi, Georgia; Makridakis, Manousos; Lygirou, Vasiliki; Zoidakis, Jerome; Daskalakis, George; Anagnou, Nicholas P

    2017-01-01

    Oncogenic infection by HPV eventually leads to cervical carcinogenesis, accompanied by deregulation of specific pathways and protein expression at the intracellular and secretome levels. Thus, secretome analysis can elucidate the biological mechanisms contributing to cervical cancer. In the present study we systematically analyzed its constitution in four cervical cell lines, employing a highly sensitive proteomic technology coupled with bioinformatics analysis. LC-MS/MS proteomics and bioinformatics analysis were performed on the secretome of four informative cervical cell lines: SiHa (HPV16+), HeLa (HPV18+), C33A (HPV-) and HCK1T (normal). The proteomic pattern of each cancer cell line compared to HCK1T was identified, and a detailed bioinformatics analysis disclosed inhibition of matrix metalloproteases in the cancer cell lines. This prediction was further confirmed via zymography for MMP-2 and MMP-9, western blot analysis for ADAM10 and MRM for TIMP1. The differential expression of important secreted proteins such as CATD, FUCA1 and SOD2 was also confirmed by western blot analysis. MRM-targeted proteomics analysis confirmed the differential expression of CATD, CATB, SOD2, QPCT and NEU1. High-resolution proteomic analysis of the cervical cancer secretome revealed significantly deregulated biological processes and proteins implicated in cervical carcinogenesis.

  11. Automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, Marcel; Spreeuwers, Lieuwe Jan; Quist, Marcel

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the

  12. Study and analysis of wavelet based image compression techniques ...

    African Journals Online (AJOL)

    This paper presents a comprehensive study, with performance analysis, of very recent wavelet-transform-based image compression techniques. Image compression is one of the necessities for such communication. The goals of image compression are to minimize the storage requirement and the communication bandwidth.

  13. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging,...

  14. Image quality preferences among radiographers and radiologists. A conjoint analysis

    International Nuclear Information System (INIS)

    Ween, Borgny; Kristoffersen, Doris Tove; Hamilton, Glenys A.; Olsen, Dag Rune

    2005-01-01

    Purpose: The aim of this study was to investigate the image quality preferences among radiographers and radiologists. The radiographers' preferences are mainly related to technical parameters, whereas radiologists assess image quality based on diagnostic value. Methods: A conjoint analysis was undertaken to survey image quality preferences; the study included 37 respondents: 19 radiographers and 18 radiologists. Digital urograms were post-processed into 8 images with different properties of image quality for 3 different patients. The respondents were asked to rank the images according to their personally perceived subjective image quality. Results: Nearly half of the radiographers and radiologists were consistent in their ranking of the image characterised as 'very best image quality'. The analysis showed, moreover, that chosen filtration level and image intensity were responsible for 72% and 28% of the preferences, respectively. The corresponding figures for each of the two professions were 76% and 24% for the radiographers, and 68% and 32% for the radiologists. In addition, there were larger variations in image preferences among the radiologists, as compared to the radiographers. Conclusions: Radiographers revealed a more consistent preference than the radiologists with respect to image quality. There is a potential for image quality improvement by developing sets of image property criteria

  15. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    Science.gov (United States)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
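
    As a toy illustration of the "globally best merges first" idea, on a 1D signal rather than an image and without the MPP parallelism, the sketch below merges adjacent regions in order of increasing mean difference until no pair is similar enough; the signal and threshold are made up.

```python
# Best-merge-first region growing on a 1D signal.
import numpy as np

signal = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9, 9.0, 9.1, 1.05])
regions = [[i] for i in range(len(signal))]      # start: one region per sample

def region_mean(r):
    return signal[np.asarray(r)].mean()

THRESHOLD = 1.0
while len(regions) > 1:
    # Globally most similar pair of adjacent regions.
    diffs = [abs(region_mean(a) - region_mean(b))
             for a, b in zip(regions, regions[1:])]
    k = int(np.argmin(diffs))
    if diffs[k] > THRESHOLD:                     # nothing similar enough remains
        break
    regions[k:k + 2] = [regions[k] + regions[k + 1]]   # merge the best pair first

print([(r[0], r[-1], round(float(region_mean(r)), 2)) for r in regions])
```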

  16. Diffraction efficiency and noise analysis of hidden image holograms

    DEFF Research Database (Denmark)

    Tamulevičius, Sigitas; Andrulevičius, Mindaugas; Puodžiukynas, Linas

    2017-01-01

    A simplified approach for the analysis of hidden image holograms is discussed in this paper. The diffraction efficiency and signal-to-noise ratio of reconstructed images were investigated using a direct measurement technique and digitized image analysis employing the "ImageJ" software. All holograms were ... energy densities demonstrated improved diffraction efficiency and reduced signal-to-noise ratio of the reconstructed image. The best diffraction efficiency at a sufficient signal-to-noise ratio was obtained using an exposure energy density in the range from 150 to 200 J/m² during the hologram writing process.
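
    The digitized-image route to these quantities reduces to region statistics. The sketch below estimates the SNR of a reconstructed image from chosen signal and background regions of a synthetic frame (the study used "ImageJ" for the same purpose); the region coordinates and the incident-power calibration are assumptions.

```python
# SNR and diffraction-efficiency estimates from image region statistics.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(10, 2, (200, 200))        # background: stray light + noise
frame[80:120, 80:120] += 60                  # reconstructed-image patch

signal = frame[80:120, 80:120]
background = frame[0:40, 0:40]
snr = (signal.mean() - background.mean()) / background.std()
print(f"SNR estimate: {snr:.1f}")

# Diffraction efficiency is diffracted over incident power; with a
# calibrated detector it reduces to a ratio of mean levels (value assumed).
p_incident = 1000.0
print(f"diffraction efficiency: {signal.mean() / p_incident:.1%}")
```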

  17. Comparative study of plans for integrated residue management of construction: an analysis documental

    Directory of Open Access Journals (Sweden)

    Jorge Henrique e Silva Júnior

    2014-02-01

    Objective: this paper makes a comparative study of the integrated plans of four cities, highlighting the points that comply with CONAMA Resolution 307/2002. Method: this is a bibliographic and documentary study whose sources were scientific articles and the Integrated Construction Waste Management Plans of five Brazilian cities: Curitiba, Cuiabá, Florianópolis, Rio de Janeiro and São Paulo. Results: the resolution provides for the Integrated Construction Waste Management Plan, to be drawn up by the municipalities, as the instrument for implementing the management of construction waste. Many capitals have not yet drawn up their Integrated Construction Waste Management Plans. Conclusion: the Integrated Construction Waste Management Plan is of great importance, since this waste causes numerous environmental and health problems.

  18. The right to the city and International Urban Agendas: a document analysis.

    Science.gov (United States)

    Andrade, Elisabete Agrela de; Franceschini, Maria Cristina Trousdell

    2017-12-01

    Considering social, economic and demographic issues, living in the city implies inadequate living conditions, social exclusion, inequities and other problems for the population. At the same time, the city is a setting of cultural, social and affective production. As a result, there is a need to reflect on the right to the city and its relationship with promoting the health of its inhabitants. To that effect, urban agendas have been developed to address the city's ambiguity. This paper aims to analyze four of these agendas through the lens of Health Promotion. A qualitative document review approach was conducted on urban agendas proposed by international organizations and applied to the Brazilian context: Healthy Cities, Sustainable Cities, Smart Cities and Educating Cities. Results indicate some level of effort by the analyzed agendas to assume social participation, intersectoriality and the territory as central to addressing exclusion and inequities. However, more in-depth discussions are required on each of these concepts. We conclude that urban agendas can contribute greatly toward consolidating the right to the city, provided that their underpinning concepts are critically comprehended.

  19. Slavery Service Accounting Practices in Brazil: A Bibliographic and Document Analysis

    Directory of Open Access Journals (Sweden)

    Adriana Rodrigues Silva

    2014-12-01

    This study focuses on the social and economic aspects and institutional relationships that determined a unique pattern of inequality. We aim to examine the particular role of accounting as a practice used to dehumanize an entire class of people. The primary purpose of this study is not to examine slavery's profitability but rather to identify how accounting practices served slavery. A qualitative research method is applied in this study. Regarding technical procedures, this study makes use of bibliographic and documentary sources. For the purpose of this investigation, and in accordance with bibliographic and documentary research methods, we analyze scientific articles, books and documents from the Brazilian National Archive, the Brazilian Historic and Geographic Institute and the Brazilian National Library Foundation. In light of what was discovered through the study's development, we can consider accounting as a tool that is more active than passive and, therefore, a tool that was used to support the slave regime. In essence, accounting was used to convert a human's qualitative attributes into a limited number of categories (age, gender, race), through which slaves were differentiated and monetized to facilitate commercial trafficking. We believe that accounting practices facilitated slave trading, conversion and exploitation, procedures that completely ignored the qualitative and human dimensions of slavery. Opportunities for future studies on accounting in the slave period, as in the case of other oppressive regimes, are infinite, especially in the case of Brazil.

  20. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

    This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples, with many biological and pharmaceutical applications, e.g., drug development. The technique results in an image, where the proteins appear as dark spots on a bright background. However, the analysis of these images is very time consuming and requires a large amount of manual work, so there is a great need for fast, objective, and robust methods based on image analysis techniques in order to significantly accelerate this key technology. The methods described and developed fall into three categories: image segmentation, point pattern matching, and a unified approach simultaneously segmenting the image and matching the spots. The main...

  1. Space station system analysis study. Part 3: Documentation. Volume 2: Technical report. [structural design and construction

    Science.gov (United States)

    1977-01-01

    An analysis of construction operations is presented, as well as power system sizing requirements. Mission hardware requirements are reviewed in detail. The space construction base and design configurations are also examined.

  2. Microsoft Academic Automatic Document Searches: Accuracy for Journal Articles and Suitability for Citation Analysis

    OpenAIRE

    Thelwall, Mike

    2017-01-01

    Microsoft Academic is a free academic search engine and citation index that is similar to Google Scholar but can be automatically queried. Its data is potentially useful for bibliometric analysis if it is possible to search effectively for individual journal articles. This article compares different methods to find journal articles in its index by searching for a combination of title, authors, publication year and journal name and uses the results for the widest published correlation analysis...

  3. IMAGE ANALYSIS FOR MODELLING SHEAR BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Philippe Lopez

    2011-05-01

    Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been established for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments was performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry and its mechanical behaviour under shear stress and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role of the geometric characteristics of the fracture surfaces (surface roughness; the size, shape, locations and orientations of the asperities to be damaged) in shear behaviour. A notable improvement in the understanding of shear is that shear behaviour is controlled by the apparent dip in the shear direction of the elementary facets forming the fracture.

  4. Convergence analysis in near-field imaging

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun

    2014-01-01

    This paper is devoted to the mathematical analysis of the direct and inverse modeling of the diffraction by a perfectly conducting grating surface in the near-field regime. It is motivated by our effort to analyze recent significant numerical results, in order to solve a class of inverse rough surface scattering problems in near-field imaging. In a model problem, the diffractive grating surface is assumed to be a small and smooth deformation of a plane surface. On the basis of the variational method, the direct problem is shown to have a unique weak solution. An analytical solution is introduced as a convergent power series in the deformation parameter by using the transformed field and Fourier series expansions. A local uniqueness result is proved for the inverse problem where only a single incident field is needed. On the basis of the analytic solution of the direct problem, an explicit reconstruction formula is presented for recovering the grating surface function with resolution beyond the Rayleigh criterion. Error estimates for the reconstructed grating surface are established with fully revealed dependence on such quantities as the surface deformation parameter, measurement distance, noise level of the scattering data, and regularity of the exact grating surface function. (paper)

  5. VLSI Architectures For Syntactic Image Analysis

    Science.gov (United States)

    Chiang, Y. P.; Fu, K. S.

    1984-01-01

    Earley's algorithm has been commonly used for the parsing of general context-free languages and for error-correcting parsing in syntactic pattern recognition. The time complexity for parsing is O(n³). In this paper we present a parallel Earley's recognition algorithm in terms of the "x*" operation. By restricting the input context-free grammar to be λ-free, we are able to implement this parallel algorithm on a triangular-shape VLSI array. This system has an efficient way of moving data to the right place at the right time. Simulation results show that this system can recognize a string of length n in 2n+1 system time. We also present an error-correcting recognition algorithm. The parallel error-correcting recognition algorithm has also been implemented on a triangular VLSI array. This array recognizes an erroneous string of length n in time 2n+1 and gives the correct error count. Applications of the proposed VLSI architectures to image analysis are illustrated by examples.
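
    For reference, a compact serial Earley recognizer is sketched below; the paper's contribution, mapping this O(n³) computation onto a triangular VLSI array with the "x*" operation and error correction, is not attempted here. The balanced-parentheses grammar is an illustrative λ-free example.

```python
# Serial Earley recognizer; grammar maps nonterminals to tuples of symbols.
def earley_recognize(grammar, start, tokens):
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]        # items: (lhs, rhs, dot, origin)
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(n + 1):
        changed = True
        while changed:                           # predictor/completer to fixpoint
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:           # predict
                    for prod in grammar[rhs[dot]]:
                        if (rhs[dot], prod, 0, i) not in chart[i]:
                            chart[i].add((rhs[dot], prod, 0, i)); changed = True
                elif dot == len(rhs):                                # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs \
                                and (l2, r2, d2 + 1, o2) not in chart[i]:
                            chart[i].add((l2, r2, d2 + 1, o2)); changed = True
        if i < n:                                                    # scan
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])

g = {"S": [("(", ")"), ("(", "S", ")"), ("S", "S")]}   # lambda-free grammar
print(earley_recognize(g, "S", list("(()())")))        # True
print(earley_recognize(g, "S", list("(()")))           # False
```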

  6. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang

    2010-01-01

    Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve the overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing unit and image-compression processing unit are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, in which image compression is performed during the image-capture phase prior to storage, referred to as compressive acquisition. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using different design paradigm are presented at the end.

  7. MORPHOLOGY BY IMAGE ANALYSIS K. Belaroui and M. N Pons ...

    African Journals Online (AJOL)

    31 Dec. 2012 ... Keywords: characterization; particle size; morphology; image analysis; porous media. 1. INTRODUCTION. The power of image analysis as ... into a digital image by means of an analog-to-digital (A/D) converter. The points of the image are arranged on a grid in a square array, ...

  8. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  9. Measure by image analysis of industrial radiographs

    International Nuclear Information System (INIS)

    Brillault, B.

    1988-01-01

    A digital radiographic picture processing system for non-destructive testing is intended to provide the expert with a computer tool to precisely quantify radiographic images. The author describes the main problems, from image formation to image characterization. She also insists on the necessity of defining a precise process in order to automate the system. Some examples illustrate the efficiency of digital processing of radiographic images [fr]

  10. Quantitative image measurements for outcome analysis of lung nodule treatment

    Science.gov (United States)

    Zhu, Xiaoming; Lee, Ki-Nam; Wong, Stephen T. C.; Huang, H. K.

    1996-04-01

    In this study, we designed and implemented a temporal image database for outcome analysis of lung nodules based on spiral CT images. The software package is composed of three parts. They are, respectively, a database management system which stores patient image data and nodule information; a user-friendly graphical user interface which allows a user to interface with the image database; and image processing tools that are designed to segment out lung nodules in the CT image with a simple mouse click anywhere inside a nodule. The image database uses the relational Sybase database system. Patient images and nodule information are stored in separate tables. Software interface has been designed to allow a user to retrieve any patient study from the picture archiving and communication system into the image database.
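
    A minimal relational sketch of the two-table layout described above, using SQLite in place of the paper's Sybase; every table and column name here is an illustrative assumption, not the package's actual schema.

```python
# Two-table temporal image database sketch: images and nodule findings.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient_image (
    image_id   INTEGER PRIMARY KEY,
    patient_id TEXT NOT NULL,
    study_date TEXT NOT NULL,        -- ISO date of the spiral CT study
    slice_no   INTEGER NOT NULL,
    pixel_data BLOB
);
CREATE TABLE nodule (
    nodule_id   INTEGER PRIMARY KEY,
    image_id    INTEGER REFERENCES patient_image(image_id),
    centroid_x  REAL, centroid_y REAL,
    diameter_mm REAL                 -- from the click-to-segment tool
);
""")

# Outcome-analysis query shape: nodule size across successive studies.
rows = con.execute("""
    SELECT p.patient_id, p.study_date, n.diameter_mm
    FROM nodule AS n JOIN patient_image AS p USING (image_id)
    ORDER BY p.patient_id, p.study_date
""").fetchall()
print(rows)   # empty here; shown only for the query shape
```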

  11. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters devoted to various new approaches to intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others present the practical aspects and the...

  12. Analysis of 30 Spinal Angiograms Falsely Reported as Normal in 18 Patients with Subsequently Documented Spinal Vascular Malformations.

    Science.gov (United States)

    Barreras, P; Heck, D; Greenberg, B; Wolinsky, J-P; Pardo, C A; Gailloud, P

    2017-09-01

    The early diagnosis of spinal vascular malformations suffers from the nonspecificity of their clinical and radiologic presentations. Spinal angiography requires a methodical approach to offer a high diagnostic yield. The prospect of false-negative studies is particularly distressing when addressing conditions with a narrow therapeutic window. The purpose of this study was to identify factors leading to missed findings or inadequate studies in patients with spinal vascular malformations. The clinical records, laboratory findings, and imaging features of 18 patients with spinal arteriovenous fistulas and at least 1 prior angiogram read as normal were reviewed. The clinical status was evaluated before and after treatment by using the Aminoff-Logue Disability Scale. Eighteen patients with 19 lesions underwent a total of 30 negative spinal angiograms. The lesions included 9 epidural arteriovenous fistulas, 8 dural arteriovenous fistulas, and 2 perimedullary arteriovenous fistulas. Seventeen patients underwent endovascular (11) or surgical (6) treatment, with a delay ranging between 1 week and 32 months; the Aminoff-Logue score improved in 13 (76.5%). The following factors were identified as the causes of the inadequate results: 1) lesion angiographically documented but not identified (55.6%); 2) region of interest not documented (29.6%); or 3) level investigated but injection technically inadequate (14.8%). All the angiograms falsely reported as normal were caused by correctible, operator-dependent factors. The nonrecognition of documented lesions was the most common cause of error. The potential for false-negative studies should be reduced by the adoption of rigorous technical and training standards and by second opinion reviews. © 2017 by American Journal of Neuroradiology.

  13. Documentation of Hanford Site independent review of the Hanford Waste Vitrification Plant Preliminary Safety Analysis Report

    International Nuclear Information System (INIS)

    Herborn, D.I.

    1993-11-01

    Westinghouse Hanford Company (WHC) is the Integrating Contractor for the Hanford Waste Vitrification Plant (HWVP) Project, and as such is responsible for preparation of the HWVP Preliminary Safety Analysis Report (PSAR). The HWVP PSAR was prepared pursuant to the requirements for safety analyses contained in US Department of Energy (DOE) Orders 4700.1, Project Management System (DOE 1987); 5480.5, Safety of Nuclear Facilities (DOE 1986a); 5481.1B, Safety Analysis and Review System (DOE 1986b), which was superseded by DOE Order 5480.23, Nuclear Safety Analysis Reports, for nuclear facilities effective April 30, 1992 (DOE 1992); and 6430.1A, General Design Criteria (DOE 1989). The WHC procedures that, in large part, implement these DOE requirements are contained in WHC-CM-4-46, Nonreactor Facility Safety Analysis Manual. This manual describes the overall WHC safety analysis process in terms of requirements for safety analyses, responsibilities of the various contributing organizations, and required reviews and approvals.

  14. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps. Discusses automatic engineering drawing and map analysis techniques. Covers detailed accounts of the use of unsupervised segmentation algorithms on map images.

  15. Transnational tobacco company interests in smokeless tobacco in Europe: analysis of internal industry documents and contemporary industry materials.

    Directory of Open Access Journals (Sweden)

    Silvy Peeters

    Full Text Available European Union (EU) legislation bans the sale of snus, a smokeless tobacco (SLT) which is considerably less harmful than smoking, in all EU countries other than Sweden. To inform the current review of this legislation, this paper aims to explore transnational tobacco company (TTC) interests in SLT and pure nicotine in Europe from the 1970s to the present, comparing them with TTCs' public claims of support for harm reduction. Internal tobacco industry documents (in total 416 documents dating from 1971 to 2009), obtained via searching the online Legacy Tobacco Documents Library, were analysed using a hermeneutic approach. This library comprises documents obtained via litigation in the US and does not include documents from Imperial Tobacco, Japan Tobacco International, or Swedish Match. To help overcome this limitation and provide more recent data, we triangulated our documentary findings with contemporary documentation including TTC investor presentations. The analysis demonstrates that British American Tobacco explored SLT opportunities in Europe from 1971, driven by regulatory threats and health concerns, both likely to impact cigarette sales negatively, and by the potential to create a new form of tobacco use among those no longer interested in taking up smoking. Young people were a key target. TTCs did not, however, make SLT investments until 2002, a time when EU cigarette volumes started declining, smoke-free legislation was being introduced, and public health became interested in harm reduction. All TTCs have now invested in snus (and recently in pure nicotine), yet both early and recent snus test markets appear to have failed, and little evidence was found in TTCs' corporate materials that snus is central to their business strategy. There is clear evidence that BAT's early interest in introducing SLT in Europe was based on the potential for creating an alternative form of tobacco use in light of declining cigarette sales and social restrictions on smoking.

  18. Draft Regulatory Analysis. Technical support document No. 1: energy efficiency standards for consumer products

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-06-01

    A Draft Regulatory Analysis is presented that describes the analyses performed by DOE to arrive at proposed energy efficiency standards for refrigerators and refrigerator-freezers, freezers, clothes dryers, water heaters, room air conditioners, kitchen ranges and ovens, central air conditioners (cooling only), and furnaces. Standards for dishwashers, television sets, clothes washers, and humidifiers and dehumidifiers are required to be published in the Federal Register no later than December 1981. Standards for central air conditioners (heat pumps) and home heating equipment are to be published in the Federal Register no later than January 1982. Accordingly, these products are not discussed in this Draft Regulatory Analysis.

  19. Development of a traceability analysis method based on case grammar for NPP requirement documents written in Korean language

    International Nuclear Information System (INIS)

    Yoo, Yeong Jae; Seong, Poong Hyun; Kim, Man Cheol

    2004-01-01

    Software inspection is widely believed to be an effective method for software verification and validation (V and V). However, software inspection is labor-intensive and, since it uses little technology, it is viewed as unsuitable for a more technology-oriented development environment. Nevertheless, software inspection is gaining in popularity. The KAIST Nuclear I and C and Information Engineering Laboratory (NICIEL) has developed software management and inspection support tools, collectively named 'SIS-RT'. SIS-RT is designed to partially automate the software inspection process. SIS-RT supports the analysis of traceability between a given set of specification documents. To make SIS-RT compatible with documents written in Korean, certain techniques in natural language processing have been studied. Among the techniques considered, case grammar is the most suitable for analysis of the Korean language. In this paper, we propose a methodology that uses a case grammar approach to analyze the traceability between documents written in Korean; a discussion of some examples of such an analysis follows.
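    SIS-RT itself is not reproduced here, so the following fragment is only a toy illustration of the case-grammar idea: each requirement sentence is reduced to a frame of case roles (agent, action, object, and so on), and traceability candidates between two documents are scored by role-wise overlap. The frames and the similarity rule are invented for illustration.

```python
def frame_similarity(f1, f2):
    """Fraction of case roles on which two case frames agree."""
    roles = set(f1) | set(f2)
    agree = sum(r in f1 and r in f2 and f1[r] == f2[r] for r in roles)
    return agree / len(roles)

# Hypothetical frames extracted from a requirement and a design statement
req = {"agent": "protection system", "action": "trip", "object": "reactor"}
des = {"agent": "protection system", "action": "trip",
       "object": "reactor", "condition": "high pressure"}
print(frame_similarity(req, des))  # 0.75 -> plausible trace link
```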

  20. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
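    The table-driven batch idea generalizes beyond ImageJ. The sketch below is a language-agnostic Python rendering of the concept, not the plugin's actual Java API: a "slide set" table associates each image with a region of interest, one analysis command is repeated over every row, and the command name and parameters are saved alongside the results for reproducibility.

```python
import json
import numpy as np

rng = np.random.default_rng(1)

# A toy "slide set": each row pairs an image with a region of interest
table = [{"name": f"img{i}", "image": rng.random((64, 64)),
          "roi": (slice(8, 40), slice(8, 40))} for i in range(3)]

def mean_roi_intensity(image, roi):
    return float(image[roi].mean())

params = {"command": "mean_roi_intensity"}     # stored for reproducibility
results = [{"name": row["name"],
            "value": mean_roi_intensity(row["image"], row["roi"])}
           for row in table]
print(json.dumps({"params": params, "results": results}, indent=2))
```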

  1. Medical infrared imaging and orthostatic analysis to determine ...

    African Journals Online (AJOL)

    Analysis was performed in a static position. The asymmetry index for each stance variable and the optimal cutoff point for the peak vertical force and thermal image temperatures were calculated. Image pattern analysis revealed 88% success in differentiating the lame group, and 100% in identifying the same thermal pattern in ...

  2. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Wüller, Dietmar

    2016-01-01

    There are a variety of image quality analysis tools available for the archiving world, which are based on different test charts and analysis algorithms. ISO formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing image quality for archiving...

  3. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer interaction.
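    Since the survey centers on local binary patterns, a compact numpy sketch of the basic 8-neighbour LBP operator may help fix ideas; the sampling order and the random test image are arbitrary choices, and production code would normally use an optimized library routine instead.

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale array."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # each interior pixel's centre value
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit  # one bit per neighbour test
    return code

face = np.random.default_rng(0).integers(0, 256, (32, 32))
hist = np.bincount(lbp_8neighbors(face).ravel(), minlength=256)  # LBP histogram
```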

  4. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction.

  5. Evaluating fracture healing using digital x-ray image analysis

    African Journals Online (AJOL)

    2011-03-02

    Mar 2, 2011 ... of Edinburgh, developing techniques for assessing fracture healing using digital X-ray image analysis. She currently works in ... Digital X-ray combined with image analysis could provide a simple and cost-effective solution to this problem ... output in which post-processing can be controlled. If digital X-ray ...

  6. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis of astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study, multi-spectral image analysis of pellets was performed using LDA, QDA, SNV, and PCA at the pixel level and on the mean value of pixels...
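    A hedged sketch of this kind of pipeline, using scikit-learn on stand-in data (the spectra, band count, and class separation below are fabricated for illustration; the study's actual preprocessing chain may differ): SNV-normalize each pixel spectrum, reduce with PCA, then classify with LDA.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-in data: per-pixel spectra (n_pixels x n_bands), two coating classes
X = np.vstack([rng.normal(0.0, 1.0, (200, 20)),
               rng.normal(0.5, 1.0, (200, 20))])
y = np.repeat([0, 1], 200)

# SNV (standard normal variate): zero mean, unit variance per spectrum
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    LinearDiscriminantAnalysis())
print("training accuracy:", clf.fit(X, y).score(X, y))
```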

  7. Coupling the image analysis and the artificial neural networks to ...

    African Journals Online (AJOL)

    ... from a non-destructive method (image analysis), which was used to characterize the homogeneity of powder mixtures in a V-blender as well as a cubic blender, the types most used in the pharmaceutical industry. Keywords: ANN; image analysis; homogeneity; back-propagation algorithm; multi-layer perceptron ...
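    As a rough sketch of the coupling the title describes, the snippet below trains a multi-layer perceptron (by back-propagation, as the keywords indicate) to map a few image-analysis descriptors to a homogeneity index. The inputs, the descriptor choice, and the target are fabricated stand-ins, since the abstract does not specify them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in inputs: simple image descriptors of blender snapshots
X = rng.random((200, 3))                          # e.g. mean, std, contrast
y = 1.0 - X[:, 1] + 0.05 * rng.normal(size=200)   # fake homogeneity index

# Multi-layer perceptron trained by back-propagation
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:150], y[:150])
print("R^2 on held-out images:", round(mlp.score(X[150:], y[150:]), 2))
```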

  8. Analysis of licensed South African diagnostic imaging equipment ...

    African Journals Online (AJOL)

    Analysis of licensed South African diagnostic imaging equipment. ... Pan African Medical Journal ... Introduction: Objective: To conduct an analysis of all registered South African (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published ...

  9. Documentation Service

    International Nuclear Information System (INIS)

    Charnay, J.; Chosson, L.; Croize, M.; Ducloux, A.; Flores, S.; Jarroux, D.; Melka, J.; Morgue, D.; Mottin, C.

    1998-01-01

    This service handles the treatment and diffusion of scientific information and the management of the scientific production of the institute, as well as secretariat operations for the groups and services of the institute. The report on the documentation-library section mentions: the management of the documentation holdings; searches in international databases (INIS, Current Contents, INSPEC); and the Prêt-Inter service, which gives access to documents through the DEMOCRITE network of IN2P3. Among the achievements also mentioned are: the setup of a video and photo database, the Web home page of the institute's library, follow-up of the digitization of the document holdings by integrating CD-ROMs and diskettes, electronic archiving of the scientific production, etc.

  10. Maury Documentation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Supporting documentation for the Maury Collection of marine observations. Includes explanations from Maury himself, as well as guides and descriptions by the U.S....

  11. On Texture and Geometry in Image Analysis

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John

    2009-01-01

    The layout of the captured scene will also change. At large viewing distances, the sky occupies a large region in the image, and buildings, trees and lawns appear as uniformly colored regions. The following questions are addressed: How much of the visual appearance, in terms of geometry and texture, of an image can...

  12. Low-cost image analysis system

    Energy Technology Data Exchange (ETDEWEB)

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. The system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  13. Experiences and Outcomes of Preschool Physical Education: An Analysis of Developmental Discourses in Scottish Curricular Documentation

    Science.gov (United States)

    McEvilly, Nollaig

    2014-01-01

    This article provides an analysis of developmental discourses underpinning preschool physical education in Scotland's Curriculum for Excellence. Implementing a post-structural perspective, the article examines the preschool experiences and outcomes related to physical education as presented in the Curriculum for Excellence "health and…

  14. Using the Front Page of "The Wall Street Journal" to Teach Document Design and Audience Analysis.

    Science.gov (United States)

    Moore, Patrick

    1989-01-01

    Explains an assignment for the audience analysis segment of a business writing course which compares the front page design of "The Wall Street Journal" with that of a local daily newspaper in order to emphasize the use of design devices in effectively writing to busy people. (SR)

  15. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters like total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). For the measurement, tests are conducted in the laboratory, which take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures like z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
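    The floc-measurement step can be sketched with scikit-image on a synthetic frame; the threshold choice (Otsu) and the fabricated blob image below are illustrative stand-ins, not the chapter's exact segmentation chain.

```python
import numpy as np
from skimage import draw, filters, measure

# Synthetic stand-in for a microscopy frame with a few bright "flocs"
img = np.zeros((128, 128))
for r, c, rad in [(32, 32, 10), (80, 90, 14), (100, 30, 6)]:
    rr, cc = draw.disk((r, c), rad)
    img[rr, cc] = 1.0
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

# Threshold -> label connected flocs -> morphological descriptors, which
# would then be correlated against TSSol, SVI, and COD over time
mask = img > filters.threshold_otsu(img)
for region in measure.regionprops(measure.label(mask)):
    print(region.label, region.area, region.perimeter, region.eccentricity)
```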

  16. Detection of manipulations on printed images to address crime scene analysis: A case study.

    Science.gov (United States)

    Amerini, Irene; Caldelli, Roberto; Del Bimbo, Alberto; Di Fuccia, Andrea; Rizzo, Anna Paola; Saravo, Luigi

    2015-06-01

    Photographic documents, both in digital and in printed format, play a fundamental role in crime scene analysis. Photos are crucial for reconstructing what happened and for freezing the scene with all the objects and evidence present. Consequently, it is easy to see the paramount importance of assessing the authenticity of such images, to avoid a possible malicious counterfeit leading to a wrong evaluation of the circumstances. In this paper, a case study is presented in which some printed photos, brought as documentary evidence in a family murder case, had been fraudulently modified to bias the final judgement. In particular, the use of the CADET image forensic tool to verify the integrity of printed photos is introduced and discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Computerising documentation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    The nuclear power generation industry is faced with public concern and government pressures over safety, efficiency and risk. Operators throughout the industry are addressing these issues with the aid of a new technology - technical document management systems (TDMS). Used for strategic and tactical advantage, the systems enable users to scan, archive, retrieve, store, edit, distribute worldwide and manage the huge volume of documentation (paper drawings, CAD data and film-based information) generated in building, maintaining and ensuring safety in the UK's power plants. The power generation industry has recognized that the management and modification of operation-critical information is vital to the safety and efficiency of its power plants. Regulatory pressure from the Nuclear Installations Inspectorate (NII) to operate within strict safety margins or lose Site Licences has prompted the need for accurate, up-to-date documentation. A document capture, management and retrieval system provides a powerful, cost-effective solution, giving rapid access to documentation in a tightly controlled environment. The computerisation of documents and plans is discussed in this article. (Author)

  18. MELDOQ - astrophysical image and pattern analysis in medicine: early recognition of malignant melanomas of the skin by digital image analysis. Final report

    International Nuclear Information System (INIS)

    Bunk, W.; Pompl, R.; Morfill, G.; Stolz, W.; Abmayr, W.

    1999-01-01

    Dermatoscopy is at present the most powerful clinical method for early detection of malignant melanomas. However, its application requires a lot of expertise and experience. Therefore, a quantitative image analysis system has been developed in order to assist dermatologists in on-site diagnosis and to improve the detection efficiency. Based on a very extensive dataset of dermatoscopic images, recorded in a standardized manner, a number of features for the quantitative characterization of complex patterns in melanocytic skin lesions have been developed. The derived classifier improved the detection rate of malignant and benign melanocytic lesions to over 90% (sensitivity = 91.5% and specificity = 93.4% in the test set), using only six measures. A distinguishing feature of the system is the visualization of the quantified characteristics, which are based on the dermatoscopic ABCD rule. The developed prototype of a dermatoscopic workplace consists of defined procedures for standardized image acquisition and documentation, components for the necessary data pre-processing (e.g. shading and colour correction, removal of artefacts), quantification algorithms (evaluating asymmetry properties, border characteristics, and the content of colours and structural components) and classification routines. In 2000 an industrial partner will begin marketing the digital imaging system, including the specialized software for the early detection of skin cancer, which is suitable for clinicians and practitioners. The nonlinear analysis techniques primarily used (e.g. the scaling index method and others) can identify and characterize complex patterns in images and have diagnostic potential in many other applications. (orig.) [de]

  19. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique is based on the GLCM (Grey Level Co-occurrence Matrix) and is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks, whose size is chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of the segmentation is obtained by grouping connected pixels. Two performance measurements are reported for the graphics and text zones: we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
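    A minimal sketch of the block/GLCM/k-means chain, assuming scikit-image and scikit-learn are acceptable stand-ins for the authors' implementation: the block size, grey-level quantization, and the third feature below are illustrative, and only two of the five reported textural parameters are computed.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def block_features(block):
    """Energy and entropy of one block's grey-level co-occurrence matrix."""
    glcm = graycomatrix(block, distances=[1], angles=[0],
                        levels=16, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    energy = float(graycoprops(glcm, "energy")[0, 0])
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return [energy, entropy, float(block.std())]

rng = np.random.default_rng(0)
page = rng.integers(0, 16, (256, 256), dtype=np.uint8)  # stand-in scan, 16 levels
B = 32
feats = [block_features(page[i:i + B, j:j + B])
         for i in range(0, 256, B) for j in range(0, 256, B)]

# Three clusters for 'graphics', 'background' and 'text' blocks
labels = KMeans(n_clusters=3, n_init=10).fit_predict(np.array(feats))
```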

  20. A novel image processing procedure for thermographic image analysis.

    Science.gov (United States)

    Matteoli, Sara; Coppini, Davide; Corvi, Andrea

    2018-03-14

    The imaging procedure shown in this paper has been developed for processing thermographic images, measuring the ocular surface temperature (OST), and visualizing ocular thermal maps in a fast, reliable, and reproducible way. The strength of this new method is that the measured OSTs do not depend on ocular geometry; hence, it is possible to compare the ocular profiles belonging to the same subject (right and left eye) as well as to different populations. In this paper, the developed procedure is applied to two subjects' eyes: a healthy case and one affected by an ocular malignant lesion. However, the method has already been tested on a larger group of subjects for clinical purposes. To demonstrate the potential of this method, both intra- and inter-examiner repeatability were investigated in terms of coefficients of repeatability (COR). All OST indices showed repeatability with small intra-examiner (%COR 0.06-0.80) and inter-examiner variability (%COR 0.03-0.94). The measured OSTs and thermal maps clearly showed the clinical condition of the eyes investigated. The subject with no ocular pathology had no significant difference (P value = 0.25) between the OSTs of the right and left eye. On the contrary, the eye affected by a malignant lesion was significantly warmer (P value < 0.0001) than the contralateral, where the lesion was located. This new procedure demonstrated its reliability; it features simplicity, immediacy, modularity, and genericity. The latter point is extremely valuable, as thermography has been used in different clinical applications over the last decades. Graphical abstract: Ocular thermography and normalization process.

  1. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    Science.gov (United States)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine within a sequential linear programming procedure.
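    Sequential linear programming can be illustrated with a toy two-variable problem; the sketch below (using scipy rather than MINOS, and with a made-up objective, constraint, and move limits) repeatedly linearizes a nonlinear constraint about the current design and solves the resulting LP for a bounded step.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize "weight" f(x) = x1 + x2 subject to a nonlinear
# "stress" constraint g(x) = 1/x1 + 1/x2 - 1 <= 0, with x in [0.1, 10].
f_grad = np.array([1.0, 1.0])
g = lambda x: 1.0 / x[0] + 1.0 / x[1] - 1.0
g_grad = lambda x: np.array([-1.0 / x[0] ** 2, -1.0 / x[1] ** 2])

x = np.array([3.0, 3.0])                      # feasible starting design
for _ in range(20):
    move = 0.2 * np.abs(x) + 1e-3             # move limits bound each LP step
    # Linearized constraint in the step d: g(x) + g'(x) . d <= 0
    res = linprog(f_grad, A_ub=[g_grad(x)], b_ub=[-g(x)],
                  bounds=list(zip(-move, move)))
    if not res.success:
        break
    x = np.clip(x + res.x, 0.1, 10.0)
print("design:", x, "objective:", f_grad @ x)   # approaches x = (2, 2)
```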

  2. Cost-Effectiveness Analysis of the 2009 and 2012 IECC Residential Provisions – Technical Support Document

    Energy Technology Data Exchange (ETDEWEB)

    Mendon, Vrushali V.; Lucas, Robert G.; Goel, Supriya

    2012-12-04

    This analysis was conducted by Pacific Northwest National Laboratory (PNNL) in support of the U.S. Department of Energy's (DOE) Building Energy Codes Program (BECP). DOE supports the development and adoption of efficient residential and commercial building energy codes. These codes set the minimum requirements for energy-efficient building design and construction and ensure energy savings at a national level. This analysis focuses on one- and two-family dwellings, townhomes, and low-rise multifamily residential buildings. For these buildings, the basis of the energy codes is the International Energy Conservation Code (IECC). This report does not address commercial and high-rise residential buildings, which reference ANSI/ASHRAE/IES Standard 90.1.

  3. Technical Requirements Analysis and Control Systems (TRACS) Initial Operating Capability (IOC) documentation

    Science.gov (United States)

    Hammond, Dana P.

    1991-01-01

    The Technical Requirements Analysis and Control Systems (TRACS) software package is described. TRACS offers supplemental tools for the analysis, control, and interchange of project requirements. This package provides the fundamental capability to analyze and control requirements, serves as a focal point for project requirements, and integrates a system that supports efficient and consistent operations. TRACS uses relational database technology (ORACLE), in a stand-alone or in a distributed environment, to coordinate the activities required to support a project through its entire life cycle. TRACS uses a set of keyword- and mouse-driven screens (HyperCard) that imposes adherence through a controlled user interface. The user interface provides an interactive capability to interrogate the database and to display or print project requirement information. TRACS has a limited report capability, but can be extended with PostScript conventions.

  4. Probabilistic risk assessment course documentation. Volume 5. System reliability and analysis techniques Session D - quantification

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the probabilistic quantification of accident sequences and the link between accident sequences and consequences. Other sessions in this series focus on the quantification of system reliability and the development of event trees and fault trees. This course takes the viewpoint that event tree sequences, or combinations of system failures and successes, are available and that Boolean equations for system fault trees have been developed and are available. 93 figs., 11 tabs.
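    The quantification step the session describes usually reduces a sequence's Boolean equation to minimal cut sets and sums their probabilities. A small sketch, with invented basic events, probabilities, and cut sets purely for illustration:

```python
from itertools import combinations

# Hypothetical minimal cut sets for one accident sequence
p = {"PUMP_A": 1e-3, "PUMP_B": 1e-3, "VALVE_C": 5e-4, "DG_D": 2e-2}
cut_sets = [{"PUMP_A", "PUMP_B"}, {"VALVE_C"}, {"PUMP_A", "DG_D"}]

def prob(events):
    """Probability that every (independent) basic event in the set fails."""
    out = 1.0
    for e in events:
        out *= p[e]
    return out

# Rare-event approximation: sum of minimal-cut-set probabilities
rare = sum(prob(cs) for cs in cut_sets)
# Second-order inclusion-exclusion correction over pairs of cut sets
pairs = sum(prob(a | b) for a, b in combinations(cut_sets, 2))
print(f"top event ~ {rare:.3e} (rare-event), {rare - pairs:.3e} (2nd order)")
```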

  5. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    Science.gov (United States)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and of their possible fusion. With the aim of defining the potentials and problems deriving from the integration or fusion of metric data acquired with different survey techniques, the selected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two acquisition-sensor locations, terrestrial and aerial. In particular, the work consists in evaluating the possibilities deriving from a simple union, or from a fusion, of different 3D point-cloud models of the abbey achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (remotely piloted aircraft system) flight, while the terrestrial acquisition was fulfilled by a laser scanning survey. Both techniques allowed different point clouds to be extracted and processed, and consequent continuous 3D models to be generated, which are characterized by different scales, that is, different resolutions and diverse levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of point clouds from different sensors. The descriptive potential and the metric and thematic gains achievable with the final fused model clearly exceed those offered by the two separate models.
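    One very simple cloud-level fusion heuristic can be sketched with scipy; this is an illustrative gap-filling rule, not the authors' workflow: keep the full terrestrial (TLS) cloud and admit photogrammetric points only where no TLS point lies within a tolerance radius, so the aerial data covers parts the scanner could not see. The coordinates and radius below are fabricated.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
tls = rng.random((5000, 3))                  # terrestrial laser scanning cloud
uav = rng.random((3000, 3)) + [0, 0, 0.02]   # photogrammetric (RPAS) cloud

# Distance from every aerial point to its nearest TLS point
dist, _ = cKDTree(tls).query(uav)
fused = np.vstack([tls, uav[dist > 0.05]])   # aerial points fill TLS gaps only
print(fused.shape)
```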

  6. The advanced scenario analysis for performance assessment of geological disposal. Pt. 3. Main document

    International Nuclear Information System (INIS)

    Ohkubo, Hiroo

    2004-02-01

    In the 'H12 Project to Establish Technical Basis for HLW Disposal in Japan', an approach based on an international consensus was adopted to develop the scenarios to be considered in performance assessment. The adequacy of the approach was, in general terms, appreciated through peer review. However, it was also suggested that there are issues related to improving the transparency and traceability of the procedure. Therefore, in the current financial year, a scenario development methodology was first constructed taking into account the requirements identified last year. Furthermore, a practical work-frame was developed to support the activities related to scenario development. This work-frame was applied to an example scenario to check its applicability and to identify issues for further research. Secondly, a scenario analysis method for perturbation scenarios was studied. First, a survey of the perturbation scenarios discussed in different countries was carried out and their assessment was examined. In Japan in particular, technical information was classified in order to assess three scenarios: seismic activity, faulting, and igneous activity. Then, on the basis of the assumed occurrence pattern and influence pattern for each perturbation scenario, the variant types to be considered in this analysis were identified, and the concept of treatment, the modelling data, and the requirements were clarified. As a result of this research, a future direction for advanced scenario analysis in performance assessment is indicated, and the associated issues to be discussed are clarified. (author)

  7. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Image analysis of vocal fold histology

    Science.gov (United States)

    Reinisch, Lou; Garrett, C. Gaelyn

    2001-05-01

    To visualize the concentration gradients of collagen, elastin and ground substance in histologic sections of vocal folds, an image enhancement scheme was devised. Slides stained with Movat's solution were viewed on a light microscope and the image was digitally photographed. Using commercially available software, all pixels within a color range were selected from the mucosa presented in the image. With the Movat's pentachrome stain, yellow to yellow-brown pixels represented mature collagen, blue to blue-green pixels represented young collagen (collagen that is not fully cross-linked), and black to dark violet pixels represented elastin. From each of the color-range selections, a black-and-white image was created: pixels outside the color range were black, and the selected pixels within the color range were white. The image was averaged and smoothed to produce 256 levels of gray with less spatial resolution. This new grey-scale image showed the concentration gradient. These images were further enhanced with contour lines surrounding equivalent levels of gray. This technique is helpful for comparing the micro-anatomy of the vocal folds. For instance, we find large concentrations of collagen deep in the mucosa and adjacent to the vocalis muscle.
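    The select-then-smooth step translates directly into a few lines of numpy/scipy; the RGB bounds and window size below are invented placeholders, since the actual color ranges for each stain component were chosen interactively in commercial software.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (200, 200, 3)).astype(float)  # stand-in slide image

def concentration_map(rgb, lo, hi, window=15):
    """Binary mask of pixels inside an RGB range, smoothed into a
    0-255 'concentration' image with reduced spatial resolution."""
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=2).astype(float)
    return (255 * uniform_filter(mask, size=window)).astype(np.uint8)

# Yellow-ish range standing in for mature collagen (illustrative bounds)
mature_collagen = concentration_map(rgb, lo=(150, 120, 0), hi=(255, 220, 100))
```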

  9. Transfer representation learning for medical image analysis.

    Science.gov (United States)

    Chuen-Kai Shie; Chung-Hisang Chuang; Chun-Nan Chou; Meng-Hsi Wu; Chang, Edward Y

    2015-08-01

    There are two major challenges to overcome when developing a classifier to perform automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, so a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in the data for detecting a target disease, and most computer scientists and statisticians do not have such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use otitis media (OM) to conduct our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features based on a dataset entirely unrelated to OM. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. The codebook gives us what the encoders consider to be the fundamental elements of those 15 million images. We then encode OM images using the codebook and obtain a weighting vector for each OM image. Using the resulting weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% sensitivity and 86.9% specificity), markedly higher than all previous attempts, which relied on domain experts to help extract features.
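    A toy rendering of that codebook-transfer pipeline, with random arrays standing in for both the 15 million unrelated images and the OM set, and k-means standing in for whatever unsupervised encoder the authors actually used:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# 1) Learn a codebook from patches of images unrelated to the disease
unrelated = rng.random((5000, 64))             # flattened 8x8 patches
codebook = KMeans(n_clusters=32, n_init=4).fit(unrelated)

def encode(patches):
    """Histogram of codeword assignments = transferred feature vector."""
    return np.bincount(codebook.predict(patches), minlength=32) / len(patches)

# 2) Encode the small labeled target set, then train a plain classifier
labels = np.array([0, 1] * 20)
X = np.array([encode(rng.random((100, 64)) + 0.05 * k) for k in labels])
clf = LinearSVC().fit(X, labels)
```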

  10. A linear mixture analysis-based compression for hyperspectral image analysis

    Energy Technology Data Exchange (ETDEWEB)

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares (FCLS) linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that deal directly with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
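    For one pixel, fully constrained unmixing solves a least-squares problem under non-negativity and sum-to-one constraints on the abundances. A small scipy sketch with fabricated endmember signatures (real FCLS implementations use a dedicated algorithm rather than a generic solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.random((30, 4))                     # endmember signatures (bands x targets)
a_true = np.array([0.5, 0.3, 0.2, 0.0])
y = M @ a_true + rng.normal(0, 0.01, 30)    # one pixel's observed spectrum

# Abundances must be non-negative and sum to one (fully constrained)
res = minimize(lambda a: np.sum((M @ a - y) ** 2), x0=np.full(4, 0.25),
               bounds=[(0, 1)] * 4, method="SLSQP",
               constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
abundances = res.x        # per-pixel values forming the fractional images
print(np.round(abundances, 2))
```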

  11. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating the color channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD; generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks, due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  12. Analysis of geodetic and legal documentation in the process of expropriation for roads. Krakow case study

    Science.gov (United States)

    Trembecka, Anna

    2013-06-01

    The amendment to the Act on special rules for the preparation and implementation of investment in public roads resulted in an accelerated mode of acquiring land intended for road development. The decision authorizing the execution of a road investment issued on its basis has several effects: it determines the location of a road, approves the surveying division, approves the construction design, and also results in the acquisition of the real property by operation of law by the State Treasury or a local government unit, among others. The study revealed that over 3 years the city of Krakow acquired, in this mode, over 31 hectares of land intended for the implementation of road investments. Compensation is determined in separate proceedings based on an appraisal study estimating property value, often long after the owner has lost the land. Reasons for the lengthy compensation proceedings include challenges to the proposed amount of compensation, the unregulated legal status of the property, and imprecise legislation. It is important to properly develop the geodetic and legal documentation which accompanies the application for issuance of the decision and is also used in compensation proceedings.

  13. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved, without significant loss of the diagnostic importance of several image properties. Possibilities for the automatic interpretation of more complex images are investigated in the following chapters. The central topic is the segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas; relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs.
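    The parallel, Markov-random-field flavor of this approach can be caricatured in a few lines of numpy. The sketch below runs a simplified synchronous variant of iterated conditional modes on a fabricated two-class image: each pixel takes the label minimizing a Gaussian data term plus a Potts smoothness term; the class means, noise level, and smoothness weight are all invented, and the thesis' external-field prior is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "organ" image: two regions with different means plus noise
img = (np.where(np.arange(64)[:, None] < 32, 0.3, 0.7)
       + rng.normal(0, 0.15, (64, 64)))

means, sigma, beta = (0.3, 0.7), 0.15, 1.5
labels = (img > 0.5).astype(int)       # initial maximum-likelihood labeling

for _ in range(10):
    padded = np.pad(labels, 1, mode="edge")
    energy = []
    for k, mu in enumerate(means):
        data = (img - mu) ** 2 / (2 * sigma ** 2)        # Gaussian data term
        disagree = sum((padded[1 + dy:65 + dy, 1 + dx:65 + dx] != k)
                       for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)))
        energy.append(data + beta * disagree)            # Potts prior
    labels = np.argmin(energy, axis=0)                   # relabel all pixels
```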

  14. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram of natural color images. The observed behaviors are confronted with those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organizations stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful for various tasks of digital image processing, for instance modeling, classification, and indexing.
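    The box-counting side of such an analysis is easy to sketch: bin the RGB values at several box sizes, form the partition function Z(q, s) = Σ pᵢ^q over occupied boxes, and read the mass exponents τ(q) from log-log slopes, with generalized dimensions D_q = τ(q)/(q−1). The sketch below uses uniform random pixels (for which D_q ≈ 3), not the natural-image statistics the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, (100_000, 3))  # stand-in for an image's RGB values

def tau(q, sizes=(64, 32, 16, 8)):
    """Mass exponent tau(q) of the 3-D color histogram (box counting)."""
    logZ, logs = [], []
    for s in sizes:
        h, _ = np.histogramdd(pixels, bins=(256 // s,) * 3,
                              range=[(0, 256)] * 3)
        p = h[h > 0] / h.sum()                # occupied-box probabilities
        logZ.append(np.log(np.sum(p ** q)))   # partition function at scale s
        logs.append(np.log(s / 256))
    return np.polyfit(logs, logZ, 1)[0]       # slope of log Z vs log scale

print([round(tau(q) / (q - 1), 2) for q in (0, 2, 3)])  # D_0, D_2, D_3
```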

  15. The BIM representation for documentation and historical-critical analysis of the Modernist heritage

    Directory of Open Access Journals (Sweden)

    Marcello Balzani

    2016-06-01

    Full Text Available BIM - Building Information Modeling applied to the historical-critical analysis and representation of the architectural heritage is becoming more and more an essential means for the understanding of design processes and changes over time, as well as a basic instrument for knowledge and an aid for the design of conservation, maintenance, restoration and enhancement projects. The paper analyses BIM applications to the Indian and Brazilian modernist heritage, through the works of the greatest architects of the period, investigated through a detailed and representative methodology, which opens the way to further research and reinterpretation.

  16. Analysis of Factors Affecting Positron Emission Mammography (PEM) Image Formation

    International Nuclear Information System (INIS)

    Smith, Mark F.; Majewski, Stan; Weisenberger, Andrew G.; Kieper, Douglas A.; Raylman, Raymond R.; Turkington, Timothy G.

    2001-01-01

    Image reconstruction for positron emission mammography (PEM) with the breast positioned between two parallel, planar detectors is usually performed by backprojection to image planes. Three important factors affecting PEM image reconstruction by backprojection are investigated: (1) image uniformity (flood) corrections, (2) image sampling (pixel size) and (3) count allocation methods. An analytic expression for uniformity correction is developed that incorporates factors for spatial-dependent detector sensitivity and geometric effects from acceptance angle limits on coincidence events. There is good agreement between experimental floods from a PEM system with a pixellated detector and numerical simulations. The analytic uniformity corrections are successfully applied to image reconstruction of compressed breast phantoms and reduce the necessity for flood scans at different image planes. Experimental and simulated compressed breast phantom studies show that lesion contrast is improved when the image pixel size is half of, rather than equal to, the detector pixel size, though this occurs at the expense of some additional image noise. In PEM reconstruction counts usually are allocated to the pixel in the image plane intersected by the line of response (LOR) between the centers of the detection pixels. An alternate count allocation method is investigated that distributes counts to image pixels in proportion to the area of the tube of response (TOR) connecting the detection pixels that they overlay in the image plane. This TOR method eliminates some image artifacts that occur with the LOR method and increases tumor signal-to-noise ratios at the expense of a slight decrease in tumor contrast. Analysis of image uniformity, image sampling and count allocation methods in PEM image reconstruction points to ways of improving image formation. Further work is required to optimize image reconstruction parameters for particular detection or quantitation tasks
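    The plane-by-plane backprojection the abstract describes can be mocked up quickly in numpy. In this toy LOR version, each coincidence event increments the image pixel where the straight line between the two detection points crosses the chosen image plane; the geometry, event list, and grid size are all fabricated, and the TOR (area-weighted) variant would instead spread each count over the pixels its tube of response overlaps.

```python
import numpy as np

rng = np.random.default_rng(0)
gap, z_plane, n = 60.0, 30.0, 64   # detector separation, image plane depth, grid

# Fake coincidence list: (x, y) detection-pixel centres on the two detectors
p1 = rng.uniform(0, n, (20000, 2))
p2 = p1 + rng.normal(0, 2.0, (20000, 2))     # near-perpendicular LORs

# Each event votes for the pixel where its line of response crosses the plane
xy = p1 + (p2 - p1) * (z_plane / gap)
idx = np.clip(np.rint(xy).astype(int), 0, n - 1)
image = np.zeros((n, n))
np.add.at(image, (idx[:, 0], idx[:, 1]), 1.0)
```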

  18. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has advanced considerably in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We also give an in-depth analysis of image processing, its major issues, and the algorithms that are being used, or are emerging as useful, for extracting data from images automatically. © The Author 2017. Published by Oxford University Press.

  19. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Czech Academy of Sciences Publication Activity Database

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Vol. 32, No. 6 (2008), pp. 513-520 ISSN 0895-6111 R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * Gaussian mixture model * 3D image analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.192, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf
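
    For readers unfamiliar with the underlying features, a minimal 2D sketch of how a grey-level co-occurrence matrix (GLCM) and two classic Haralick features are computed; the paper's contribution is a 3D extension of such features, and the function names below are hypothetical.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for displacement (dx, dy)."""
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int)  # quantise
    src = q[: q.shape[0] - dy, : q.shape[1] - dx]
    dst = q[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1)  # co-occurrence counts
    return m / m.sum()

def haralick_contrast_energy(p):
    """Two classic Haralick texture features from a normalised GLCM p."""
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p), np.sum(p ** 2)  # contrast, energy
```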

  20. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    The research was performed to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors, and its proximity to anatomic structures, by comparing panoramic images with CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens in 120 patients). The number, shape, orientation and positional relationship of the mesiodens with the maxillary central incisors were investigated in the panoramic images. The proximity to anatomical structures and complications were investigated in the CT images as well. The sex ratio (M:F) was 2.28:1 and the mean number of mesiodens per patient was 1.28. Conical shape was found in 84.4% and inverted orientation in 51.9%. There were more cases of encroachment on anatomical structures, especially the nasal floor and nasopalatine duct, when the mesiodens was not superimposed on the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed on the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed on the crown of the central incisor. It is possible to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors from panoramic images, but details should be confirmed with CT images when necessary.

  1. Efficiency analysis of color image filtering

    Science.gov (United States)

    Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.

    2011-12-01

    This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, and the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
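
    The filtering-efficiency question above reduces to comparing the mean square error (MSE) before and after denoising. A hedged sketch follows, with a plain Gaussian filter standing in for the state-of-the-art filters evaluated in the article; filtering_gain_db and its parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filtering_gain_db(clean, noise_var, sigma=1.0, seed=0):
    """MSE improvement (dB) from denoising i.i.d. additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = clean + rng.normal(0.0, np.sqrt(noise_var), clean.shape)
    denoised = gaussian_filter(noisy, sigma=sigma)
    mse_in = np.mean((noisy - clean) ** 2)      # ~ noise_var on average
    mse_out = np.mean((denoised - clean) ** 2)  # residual noise + distortion
    return 10.0 * np.log10(mse_in / mse_out)    # > 0 dB means filtering helped
```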

  2. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    Science.gov (United States)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016; the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu carries five integrated instruments from national and international partners, including the Touch-And-Go Camera System (TAGCAMS), a three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB, and the in-flight performance of the TAGCAMS was assessed using flight imagery. One specific area of investigation was bad pixel mapping. A recent phase of the mission, the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. Said analyses will provide a broader understanding
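
    One straightforward way to flag possibly under-responsive pixels from smooth-scene flight frames is to compare each pixel's mean response with a local median estimate. The sketch below is an assumption-laden illustration, not the mission's actual MATLAB analysis; flag_questionable_pixels and its parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_questionable_pixels(frames, k=5.0):
    """Flag pixels whose mean response falls well below the local trend.

    frames: (n, h, w) stack of co-registered frames of a smooth scene.
    Returns a boolean map; True marks possibly under-responsive pixels.
    """
    mean_img = frames.mean(axis=0)
    local = median_filter(mean_img, size=5)   # expected local response
    resid = mean_img - local
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad                      # robust noise scale
    return resid < -k * sigma
```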

  3. Three-dimensional temporal reconstruction and analysis of plume images

    Science.gov (United States)

    Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III

    1992-01-01

    An experiment with two subsonic jets in a cross-flow was conducted as part of a study of the structural features revealed by temporal reconstruction of plume images. The flow field structure was made visible using a direct injection flow visualization technique. It is shown that image analysis and temporal three-dimensional visualization can provide new information on the vortical structural dynamics of multiple jets in a cross-flow. It is expected that future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images may provide a tool for correlating engine diagnostic features by interpreting the evolution of the structures in the plume.

  4. Nursing care systems and complex thought in nursing education: document analysis

    Directory of Open Access Journals (Sweden)

    Josilaine Porfírio da Silva

    Full Text Available The aim of this study was to analyse the inclusion of the subject Nursing Care Systems (NCS) in nursing education. This qualitative desk research was conducted in a nursing programme in southern Brazil that offers an integrated curriculum with NCS as a cross-cutting theme. Data were collected from September to December 2012 by examining 15 planning and development workbooks on the cross-disciplinary modules of the programme. Analysis was divided into four stages: exploratory, selective, analytic and interpretive reading. The adopted theoretical framework was Edgar Morin's Complex Thought, following the principles of pertinent knowledge. Results were arranged into two categories: NCS as a cross-cutting theme in nursing education: the context, the global and the multidimensional; and strategies for teaching, learning and assessment of NCS: the complex. The study contributes to the debate on the importance of teaching NCS as a cross-cutting theme in nursing education.

  5. Tracking and Analysis Framework (TAF) model documentation and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Bloyd, C.; Camp, J.; Conzelmann, G. [and others]

    1996-12-01

    With passage of the 1990 Clean Air Act Amendments, the United States embarked on a policy for controlling acid deposition that has been estimated to cost at least $2 billion. Title IV of the Act created a major innovation in environmental regulation by introducing market-based incentives - specifically, by allowing electric utility companies to trade allowances to emit sulfur dioxide (SO₂). The National Acid Precipitation Assessment Program (NAPAP) has been tasked by Congress to assess what Senator Moynihan has termed this "grand experiment." Such a comprehensive assessment of the economic and environmental effects of this legislation has been a major challenge. To help NAPAP face this challenge, the U.S. Department of Energy (DOE) has sponsored development of an integrated assessment model, known as the Tracking and Analysis Framework (TAF). This section summarizes TAF's objectives and its overall design.

  6. Manned space flight nuclear system safety. Volume 3: Reactor system preliminary nuclear safety analysis. Part 1: Reference Design Document (RDD)

    Science.gov (United States)

    1972-01-01

    The Reference Design Document of the Preliminary Safety Analysis Report (PSAR) - Reactor System provides the basic design and operations data used in the nuclear safety analysis of the Reactor Power Module as applied to a Space Base program. A description of the power module systems, facilities, launch vehicle and mission operations, as defined in NASA Phase A Space Base studies, is included. Each of two Zirconium Hydride Reactor Brayton power modules provides 50 kWe for the nominal 50-man Space Base. The INT-21 is the prime launch vehicle. Resupply to the 500 km orbit over the ten-year mission is provided by the Space Shuttle. At the end of the power module lifetime (nominally five years), a reactor disposal system is deployed for boost into a 990 km high-altitude (long decay time) earth orbit.

  7. Development of a systematic computer vision-based method to analyse and compare images of false identity documents for forensic intelligence purposes-Part I: Acquisition, calibration and validation issues.

    Science.gov (United States)

    Auberson, Marie; Baechler, Simon; Zasso, Michaël; Genessay, Thibault; Patiny, Luc; Esseiva, Pierre

    2016-03-01

    Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, some of which were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be
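
    The profile-and-compare idea described above can be sketched in a few lines: build a hue histogram of a region of interest as a document "profile" and compare profiles with the Canberra distance. This is an illustration of the metric under assumed conventions, not the authors' prototype; hue_profile and canberra are hypothetical names.

```python
import numpy as np

def hue_channel(rgb):
    """Hue in [0, 1) from an (h, w, 3) RGB array with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    d = np.where(mx == mn, 1.0, mx - mn)          # avoid division by zero
    h = np.where(mx == r, ((g - b) / d) % 6,
        np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4))
    return np.where(mx == mn, 0.0, h / 6.0)

def hue_profile(rgb, bins=64):
    """Normalised hue histogram used as a comparable document profile."""
    hist, _ = np.histogram(hue_channel(rgb), bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def canberra(p, q, eps=1e-12):
    """Canberra distance between two profiles (smaller = more similar)."""
    return float(np.sum(np.abs(p - q) / (np.abs(p) + np.abs(q) + eps)))
```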

  8. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals, intrinsic performance of the methods, image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, digitized atlas); methodology of cerebral image superposition (normalization, retiming); image networks. (In French)

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  10. Histology image analysis for carcinoma detection and grading.

    Science.gov (United States)

    He, Lei; Long, L Rodney; Antani, Sameer; Thoma, George R

    2012-09-01

    This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. Published by Elsevier Ireland Ltd.

  11. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    2010-11-01

    Full Text Available The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  12. Identifying radiotherapy target volumes in brain cancer by image analysis.

    Science.gov (United States)

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto the computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will only increase as more complex image sequences are used. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied to the MR images of five patients with grade II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients; however, more testing and validation on a much larger patient cohort are required.
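
    The Dice similarity coefficient used above to score agreement between automatic and clinical contours is simple to compute; a minimal sketch (dice is a hypothetical name):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```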

  13. [Media and drugs: a documentary analysis of the Brazilian print media between 1999 and 2003].

    Science.gov (United States)

    Ronzani, Telmo Mota; Fernandes, Ameli Gabriele Batista; Gebara, Carla Ferreira de Paula; Oliveira, Samia Abreu; Scoralick, Natália Nunes; Lourenço, Lélio Moura

    2009-01-01

    This paper analyzes the kind of information about drugs published in the Brazilian print media. Articles about drugs published in a nationally circulated magazine between 1999 and 2003 were examined through content analysis. A total of 481 articles were found, and 'consumption' was the most frequent topic. The most cited drugs were cocaine (21%), marijuana (19%), alcoholic beverages (12%) and cigarettes (12%). The research also showed that 57% of the articles on cigarettes addressed their harmful effects, whereas alcohol was portrayed as beneficial and as harmful in equal numbers of articles, while being considered the most addictive drug (23%); cocaine, in turn, was related to drug dealing (30%). In general, cocaine and marijuana were the focus of media attention while alcohol and solvents had less prominence, considering the epidemiological data on use. There is thus a mismatch between the media focus and the profile of drug consumption in Brazil, which could influence people's beliefs about certain substances and public policies on drugs in Brazil.

  14. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing opportunities to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market that requires uncooled LWIR imaging sensors with sensitivity as close to that of cooled ones as possible, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of this kind of commodity product is changing existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies dealing in components and materials, such as lens and getter materials, to enter the consumer market.

  15. QA for test and analysis, documentation, recovery of data after test

    International Nuclear Information System (INIS)

    Zola, Maurizio

    2001-01-01

    Quality assurance for test and analysis implies the following qualification: the generation and maintenance of evidence to ensure that the equipment will operate on demand to meet the system performance requirements (IEC 780-1984). This is presented through standards and guidelines. The purpose of ISO 9000:2000, Quality management systems - Fundamentals and vocabulary, is to establish a starting point for understanding the standards; it defines the fundamental terms and definitions used in the ISO 9000 family. ISO 9001:2000, Quality management systems - Requirements, is the requirements standard used to assess an organization's ability to meet customer and applicable regulatory requirements and thereby address customer satisfaction. It is now the only standard in the ISO 9000 family against which third-party certification can be carried out. ISO 9004:2000, Quality management systems - Guidelines for performance improvements, provides guidance for continual improvement of a quality management system to benefit all parties through sustained customer satisfaction. ISO 9001:2000 specifies requirements for a quality management system for any organization that needs to demonstrate its ability to consistently provide product that meets customer and applicable regulatory requirements, and aims to enhance customer satisfaction. ISO 9001:2000 is used when seeking to establish a management system that provides confidence in the conformance of a product to established or specified requirements. It is suggested that, beginning with ISO 9000:2000, an organization adopt ISO 9001:2000 to achieve a first level of performance. The practices described in ISO 9004:2000 may then be implemented to make the quality management system increasingly effective in achieving the organization's own business goals. ISO 9004:2000 is used to extend the benefits obtained from ISO 9001:2000 to all interested parties. Using standards in this way will enable relation to other management systems

  16. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis (Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading); Computer Imaging Systems (Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading). Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis (Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read

  17. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  18. Advanced spectral imaging for noninvasive microanalysis of cultural heritage materials: review of application to documents in the U.S. Library of Congress.

    Science.gov (United States)

    France, Fenella G

    2011-06-01

    Hyperspectral imaging was originally developed for remote sensing and astronomical applications, but adaptations of this technology have been of great benefit to the preservation of cultural heritage. Developments in noninvasive analytical techniques have advanced the preservation of cultural heritage materials by enabling the identification and analysis of a range of materials, utilizing their unique spectral response to nondestructively determine chemical composition, and determining states of deterioration and change due to environmental conditions. When used as a tool for noninvasive characterization of cultural heritage, these spectral imaging systems allow the collection of chemical identification information about materials without sampling, which is a critical factor for cultural heritage materials. The United States Library of Congress has been developing the application of hyperspectral imaging to the preservation and analysis of cultural heritage materials as a powerful noncontact technique. It allows noninvasive characterization of materials, by identifying and characterizing colorants, inks, and substrates with narrow-band illumination to protect the object while also monitoring deterioration or changes due to exhibit and other environmental conditions. Contiguous illumination from the ultraviolet, visible, and infrared spectral regions allows the capture of lost, obscured, and deteriorated information. The resulting image cube allows greater capabilities for mapping and coordinating a range of complementary chemical and spectral analyses. The capabilities of this technique are illustrated by a review of results from analysis of the Waldseemüller World Map, the L'Enfant plan for Washington, D.C., and the first draft of the U.S. Declaration of Independence.
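
    The image cube described above is a stack of co-registered narrow-band images, so each pixel carries a spectrum. One common way to map materials in such a cube is the spectral angle between each pixel spectrum and a reference spectrum; the sketch below illustrates that general technique, which is not necessarily the Library of Congress workflow (spectral_angle_map is a hypothetical name).

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Spectral angle (radians) between each pixel spectrum and a reference.

    cube: (h, w, n_bands) image cube; reference: (n_bands,) spectrum.
    Smaller angles mean spectra that are more similar in shape.
    """
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    num = flat @ np.asarray(reference, float)
    denom = np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    return angles.reshape(cube.shape[:2])
```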

  19. Applications of Digital Image Analysis in Experimental Mechanics

    DEFF Research Database (Denmark)

    Lyngbye, J. : Ph.D.

    The present thesis, "Application of Digital Image Analysis in Experimental Mechanics", has been prepared as part of Janus Lyngbye's Ph.D. study during the period December 1988 to June 1992 at the Department of Building Technology and Structural Engineering, University of Aalborg, Denmark. In this thesis attention will be focused on optimal use and analysis of the information in digital images. This is realized through the investigation and application of parametric methods in digital image analysis. The parametric methods will be implemented in applications representative for the area of experimental

  20. Analysis of PETT images in psychiatric disorders

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM-III diagnoses of schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region-of-interest approach. 15 references, 4 figures, 3 tables.
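
    The feature pipeline described above, complex Fourier coefficients followed by principal-components analysis, can be sketched as follows. The coefficient selection is deliberately simplistic and the names fourier_features and pca_scores are hypothetical; this is an illustration of the idea, not the authors' procedure.

```python
import numpy as np

def fourier_features(image, n_coeffs=32):
    """A band of low-frequency complex Fourier coefficients around DC,
    taken from the central row of the shifted 2D spectrum and split into
    real and imaginary parts to form a feature vector."""
    spec = np.fft.fftshift(np.fft.fft2(image)).ravel()
    c = spec.size // 2                       # index of the DC coefficient
    band = spec[c - n_coeffs // 2 : c + n_coeffs // 2]
    return np.concatenate([band.real, band.imag])

def pca_scores(features, n_components=2):
    """Project rows of a feature matrix onto leading principal components."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T
```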