WorldWideScience

Sample records for validates image processing

  1. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)
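
    The dose-mapping acceptance logic described above can be sketched in a few lines. The sketch below is a minimal illustration, not a normative procedure: the dose readings and the 25/40 kGy limits are hypothetical placeholders for the values a real validation would take from the standards and the product's material qualification.

    ```python
    # Hypothetical dose-mapping acceptance check; dose values and the
    # 25/40 kGy limits are illustrative placeholders only.
    def validate_dose_map(doses_kGy, d_min_required=25.0, d_max_allowed=40.0):
        """Return (passes, d_min, d_max) for a mapped set of absorbed doses."""
        d_min, d_max = min(doses_kGy), max(doses_kGy)
        passes = d_min >= d_min_required and d_max <= d_max_allowed
        return passes, d_min, d_max

    # Example: dosimeter readings (kGy) from positions across the product load.
    readings = [26.1, 27.4, 30.2, 33.8, 38.9, 28.5]
    ok, lo, hi = validate_dose_map(readings)
    print(f"min={lo} kGy, max={hi} kGy, acceptable={ok}")
    ```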

  2. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and the personal computer, then covers the classification of personal-computer image processing systems, digital image processing, the development of personal computers and image processing, image processing systems, and basic image processing methods such as color image processing and video processing. It also treats software and interfaces, computer graphics, video images and video processing, and application cases of image processing, such as satellite image processing, high-speed color transformation, and portrait work systems.

  3. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  4. Talking Back to the Media Ideal: The Development and Validation of the Critical Processing of Beauty Images Scale

    Science.gov (United States)

    Engeln-Maddox, Renee; Miller, Steven A.

    2008-01-01

    This article details the development of the Critical Processing of Beauty Images Scale (CPBI) and studies demonstrating the psychometric soundness of this measure. The CPBI measures women's tendency to engage in critical processing of media images featuring idealized female beauty. Three subscales were identified using exploratory factor analysis…

  5. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    International Nuclear Information System (INIS)

    Sanchez, A; Little, K; Chung, J; Lu, ZF; MacMahon, H; Reiser, I

    2016-01-01

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more
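
    A minimal sketch of the CHO referenced above may help make the method concrete. It follows the standard channelized Hotelling formulation with Laguerre-Gauss channels; the synthetic images, signal amplitude, and channel width are assumptions for illustration, not the study's data.

    ```python
    # Minimal CHO sketch with Laguerre-Gauss channels (standard formulation;
    # the images, signal amplitude and channel width are synthetic).
    import numpy as np
    from scipy.special import eval_laguerre

    def lg_channels(n_channels, size, a):
        """Laguerre-Gauss channel templates on a size x size grid, as rows."""
        y, x = np.indices((size, size)) - (size - 1) / 2.0
        g = 2 * np.pi * (x**2 + y**2) / a**2
        chans = [np.sqrt(2) / a * np.exp(-g / 2) * eval_laguerre(j, g)
                 for j in range(n_channels)]
        return np.stack([c.ravel() / np.linalg.norm(c) for c in chans])

    def cho_snr(imgs_present, imgs_absent, channels):
        """Hotelling SNR computed in channel space."""
        vp = imgs_present.reshape(len(imgs_present), -1) @ channels.T
        va = imgs_absent.reshape(len(imgs_absent), -1) @ channels.T
        dv = vp.mean(0) - va.mean(0)
        S = 0.5 * (np.cov(vp.T) + np.cov(va.T))   # mean intra-class covariance
        return np.sqrt(dv @ np.linalg.solve(S, dv))

    rng = np.random.default_rng(0)
    size, n = 64, 200
    signal = np.exp(-((np.indices((size, size)) - size / 2) ** 2).sum(0) / 30.0)
    absent = rng.normal(0.0, 1.0, (n, size, size))
    present = absent + 0.4 * signal
    print("CHO SNR:", cho_snr(present, absent, lg_channels(10, size, a=20.0)))
    ```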

  6. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

    Digital images are used as information carriers in many sciences and technologies, and there is a trend toward increasing the number of bits per image pixel in order to capture more information. In this paper, methods for compression and contour detection based on two-dimensional Markov chains are proposed. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not fall short of well-known analogues in efficiency, yet surpass them in processing speed: the image is separated into binary images that are processed in parallel, so processing time does not grow as the number of bits per pixel increases, as sketched below. A further advantage is low energy consumption, since only logical operations are used, with no arithmetic computations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.
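
    The paper's Markov-chain procedures are not reproduced here, but the underlying bit-plane idea (splitting an n-bit image into binary images that can be processed independently and in parallel) can be sketched briefly; the tiny random image is illustrative only.

    ```python
    # Bit-plane decomposition sketch: an 8-bit image splits into 8 binary
    # planes that can be processed independently (the paper's Markov-chain
    # compression/contour methods are not reproduced here).
    import numpy as np

    def to_bit_planes(img_u8):
        """Split an 8-bit grayscale image into 8 binary planes (LSB first)."""
        return [(img_u8 >> k) & 1 for k in range(8)]

    def from_bit_planes(planes):
        """Reassemble the original image from its binary planes."""
        return sum(p.astype(np.uint8) << k for k, p in enumerate(planes))

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (4, 4), dtype=np.uint8)
    planes = to_bit_planes(img)
    assert np.array_equal(from_bit_planes(planes), img)
    print(len(planes), "planes; round-trip OK")
    ```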

  7. Image perception and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be able, in an automated system, to 'read' X-ray pictures.

  8. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be able, in an automated system, to 'read' X-ray pictures. (orig.) [de

  9. Validation of radiation sterilization process

    International Nuclear Information System (INIS)

    Kaluska, I.

    2007-01-01

    The standards for quality management systems recognize that, for certain processes used in manufacturing, the effectiveness of the process cannot be fully verified by subsequent inspection and testing of the product. Sterilization is an example of such a process. For this reason, sterilization processes are validated for use, the performance of each sterilization process is monitored routinely, and the equipment is maintained according to ISO 13485. Different aspects of this standard are presented.

  10. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  11. A critical evaluation of validity and utility of translational imaging in pain and analgesia: Utilizing functional imaging to enhance the process.

    Science.gov (United States)

    Upadhyay, Jaymin; Geber, Christian; Hargreaves, Richard; Birklein, Frank; Borsook, David

    2018-01-01

    Assessing clinical pain and metrics related to function or quality of life predominantly relies on patient reported subjective measures. These outcome measures are generally not applicable to the preclinical setting where early signs pointing to analgesic value of a therapy are sought, thus introducing difficulties in animal to human translation in pain research. Evaluating brain function in patients and respective animal model(s) has the potential to characterize mechanisms associated with pain or pain-related phenotypes and thereby provide a means of laboratory to clinic translation. This review summarizes the progress made towards understanding of brain function in clinical and preclinical pain states elucidated using an imaging approach as well as the current level of validity of translational pain imaging. We hypothesize that neuroimaging can describe the central representation of pain or pain phenotypes and yields a basis for the development and selection of clinically relevant animal assays. This approach may increase the probability of finding meaningful new analgesics that can help satisfy the significant unmet medical needs of patients.

  12. Validating the TOD method with identification of real targets : Effects of aspect angle, dynamic imaging and signal processing

    NARCIS (Netherlands)

    Beintema, J.A.; Hogervorst, M.A.; Dijk, J.; Bijl, P.

    2008-01-01

    How far the eye reaches via camera is quantifiable and modeled with the TOD (triangle orientation discrimination) methodology (Bijl and Hogervorst, 2008 Perception 37 Supplement, this issue). For validation and extension purposes, we studied identification of real objects in several conditions.

  13. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in medical image processing technology, the development of better and cheaper computers, and advances in medical image communication systems, the acquisition, storage and handling of digital images has acquired great importance in all branches of medicine. This article introduces some fundamental ideas of digital image processing, covering aspects such as representation, storage, enhancement, visualization and understanding.

  14. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained, and heavily illustrated, this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  15. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    ...-year graduate students in almost any technical discipline. The leading textbook in its field for more than twenty years, it continues its cutting-edge focus on contemporary developments in all mainstream areas of image processing-e.g...

  16. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  17. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for future health care. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances made in academia. Color figures are used extensively to illustrate the methods and help the reader understand the complex topics.

  18. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  19. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical image processing and analysis methods have improved significantly during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques, including volume rendering and virtual endoscopy, are increasingly available for evaluating sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantitative analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state of the art in medical image processing methods, practical implications for the radiologist's daily work, and future aspects. (orig.) [de

  20. Satellite imager calibration and validation

    CSIR Research Space (South Africa)

    Vhengani, L

    2010-10-01

    The success or failure of any earth observation mission depends on the quality of its data. To achieve optimum levels of reliability most sensors are calibrated pre-launch. However... techniques specific to South Africa.

  1. Processing Of Binary Images

    Science.gov (United States)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
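
    As a concrete illustration of the adaptive-thresholding step mentioned above, here is a minimal mean-based variant; it is one common formulation, not the specific system surveyed in the paper, and the window size and offset are assumed values.

    ```python
    # Mean-based adaptive thresholding sketch (one common variant; window
    # size and offset are assumed values, not the surveyed system's).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_threshold(gray, window=15, offset=10):
        """Foreground = pixels darker than their local mean minus an offset."""
        local_mean = uniform_filter(gray.astype(float), size=window)
        return (gray < local_mean - offset).astype(np.uint8)

    rng = np.random.default_rng(2)
    page = np.full((64, 64), 200.0) + rng.normal(0, 5, (64, 64))
    page[20:40, 20:40] -= 120          # a dark "ink" block on bright paper
    print("foreground pixels:", int(adaptive_threshold(page).sum()))
    ```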

  2. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.
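
    A minimal sketch of the SVM-based per-pixel classification the book discusses might look as follows; the two-class synthetic "spectra" and all parameters are assumptions for illustration.

    ```python
    # SVM per-pixel classification sketch on synthetic "spectra"; the data,
    # class means and parameters are assumptions for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_pixels, n_bands = 600, 50
    class0 = rng.normal(0.3, 0.05, (n_pixels // 2, n_bands))  # material A
    class1 = rng.normal(0.5, 0.05, (n_pixels // 2, n_bands))  # material B
    X = np.vstack([class0, class1])
    y = np.r_[np.zeros(n_pixels // 2), np.ones(n_pixels // 2)]

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
    ```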

  3. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical background and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  4. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization; Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization; Psychophysical Vision Properties; Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model; Photometry and Colorimetry; Photometry; Color Matching; Colorimetry Concepts; Color Spaces. DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction; Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems; Image Quantization; Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization. DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization; Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  5. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions.

    Science.gov (United States)

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-05-01

    The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to
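
    The study used a DPM implementation in R; as a rough Python analogue, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior can cluster voxel intensities without fixing the number of components in advance. The one-dimensional synthetic "uptake" values below are illustrative, not PET data.

    ```python
    # Rough Python analogue of DPM clustering: scikit-learn's
    # BayesianGaussianMixture with a Dirichlet-process prior on synthetic
    # 1-D "uptake" values (the study itself used an R implementation).
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(4)
    background = rng.normal(1.0, 0.2, 400)    # low synthetic "uptake"
    lesion = rng.normal(6.0, 1.0, 100)        # high synthetic "uptake"
    voxels = np.concatenate([background, lesion]).reshape(-1, 1)

    dpm = BayesianGaussianMixture(
        n_components=10,                      # upper bound; the DP prior prunes
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(voxels)
    print("components actually used:", np.unique(dpm.predict(voxels)).size)
    ```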

  6. A Dirichlet process mixture model for automatic 18F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    Energy Technology Data Exchange (ETDEWEB)

    Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo [Medical Physics Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy); Mazzarotto, Renzo [Radiation Oncology Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy); Ferdeghini, Marco [Nuclear Medicine Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy)

    2016-05-15

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a

  7. A Dirichlet process mixture model for automatic 18F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    International Nuclear Information System (INIS)

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-01-01

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve

  8. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
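
    A minimal usage example of the library's documented API (bundled test image, Gaussian smoothing, Sobel edge filter):

    ```python
    # Minimal scikit-image usage: bundled test image, Gaussian smoothing,
    # Sobel edge detection.
    from skimage import data, filters

    image = data.camera()                   # 8-bit grayscale test image
    smooth = filters.gaussian(image, sigma=2)
    edges = filters.sobel(smooth)
    print(image.shape, float(edges.min()), float(edges.max()))
    ```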

  9. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  10. Developing and validating a psychometric scale for image quality assessment

    International Nuclear Information System (INIS)

    Mraity, H.; England, A.; Hogg, P.

    2014-01-01

    Purpose: Using AP pelvis as a catalyst, this paper explains how a psychometric scale for image quality assessment can be created using Bandura's theory for self-efficacy. Background: Establishing an accurate diagnosis is highly dependent upon the quality of the radiographic image. Image quality, as a construct (i.e. set of attributes that makes up the image quality), continues to play an essential role in the field of diagnostic radiography. The process of assessing image quality can be facilitated by using criteria, such as the European Commission (EC) guidelines for quality criteria as published in 1996. However, with the advent of new technology (Computed Radiography and Digital Radiography), some of the EC criteria may no longer be suitable for assessing the visual quality of a digital radiographic image. Moreover, a lack of validated visual image quality scales in the literature can also lead to significant variations in image quality evaluation. Creating and validating visual image quality scales, using a robust methodology, could reduce variability and improve the validity and reliability of perceptual image quality evaluations

  11. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case. Empirical validation is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides us in detecting the failures of the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be carried out with a DSA (differential sensitivity analysis) or an MCSA (Monte-Carlo sensitivity analysis); a search for the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a test cell of LECE at CIEMAT, Spain. (Author) 17 refs
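
    The Monte-Carlo sensitivity analysis (MCSA) step can be illustrated with a toy model: sample the input parameters, run the model, and rank the inputs by their correlation with the output. The "heat loss" model and parameter ranges below are stand-ins, not the thermal building model validated in the paper.

    ```python
    # Minimal Monte-Carlo sensitivity analysis (MCSA) sketch; the "heat
    # loss" model and parameter ranges are stand-ins for illustration.
    import numpy as np

    def model(u_wall, u_window, infiltration):
        """Stand-in steady-state 'heat loss' model, illustrative only."""
        return 3.0 * u_wall + 1.5 * u_window + 2.0 * infiltration

    rng = np.random.default_rng(5)
    n = 5000
    params = {
        "u_wall": rng.uniform(0.2, 0.6, n),
        "u_window": rng.uniform(1.0, 3.0, n),
        "infiltration": rng.uniform(0.1, 1.0, n),
    }
    output = model(**params)
    for name, samples in params.items():
        r = np.corrcoef(samples, output)[0, 1]
        print(f"{name}: correlation with output = {r:.2f}")
    ```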

  12. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although it does not provide full technical details, it conveys the main tasks and the typical tools for handling them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As its main tasks, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, also a large research area, is the technique of classifying an input image into one of a set of predefined classes. This paper gives an overview of its two main modules: the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide bridging biology and image processing researchers for further collaboration on this difficult target.

  13. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  14. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  15. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  16. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment data sets can be validated against each other. The main conclusions are that image post-processing can improve image quality, even with lossy compression; that image quality at a higher compression ratio improves less than at a lower ratio; and that, with the proposed post-processing method, image quality is better when the camera MTF lies within a small range.

  17. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  18. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care (blurred and in some cases mostly illegible, as in the case presented here), their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  19. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

    The function of medical image processing is analysed, mentioning developments, the physical agents, and the main categories, such as correction of distortion in image formation, increased detectability, parameter quantification, etc. (C.G.C.) [pt

  20. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    Increasing use of computerized methods for diagnostical imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage and documentation and communication as the main points of interest for application of digital image processing. As to the purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation is very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  1. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measuring flow patterns occurring in natural-circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tape. The image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behavior of various eddies, with the low velocities and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, obtained from natural-circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
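
    The velocity-vector step described above can be sketched as nearest-neighbor matching of particle centers between consecutive frames; detection of the centers, camera calibration, and outlier rejection are omitted, and the coordinates and frame rate are assumed.

    ```python
    # Velocity vectors from particle centers in two consecutive frames via
    # nearest-neighbor matching; detection, calibration and outlier
    # rejection are omitted, and coordinates/frame rate are assumed.
    import numpy as np
    from scipy.spatial import cKDTree

    def velocities(centers_t0, centers_t1, dt, max_disp=5.0):
        """Match each particle to its nearest neighbor in the next frame."""
        dist, idx = cKDTree(centers_t1).query(centers_t0,
                                              distance_upper_bound=max_disp)
        ok = np.isfinite(dist)                # unmatched particles get inf
        return (centers_t1[idx[ok]] - centers_t0[ok]) / dt

    t0 = np.array([[10.0, 10.0], [30.0, 12.0], [50.0, 40.0]])
    t1 = t0 + np.array([1.5, 0.5])            # uniform drift between frames
    print(velocities(t0, t1, dt=1 / 30))      # pixels per second
    ```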

  2. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are given; these include automated visual inspection, process control, part identification and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  3. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization has been exclusively on the quality and efficiency of single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams.

  4. Building country image process

    Directory of Open Access Journals (Sweden)

    Zubović Jovan

    2005-01-01

    The same branding principles are used for countries as for products; only the methods differ. Countries compete among themselves in tourism, foreign investment and exports. A country's turnover stands at the level of its reputation. Countries that start out unknown or with a bad image will face limits on their operations or be marginalized, and as a result they will sit at the bottom of the international influence scale. On the other hand, countries with a good image, like Germany (despite two world wars), will have their products covered with a special "aura".

  5. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  6. Image fusion tool: Validation by phantom measurements

    International Nuclear Information System (INIS)

    Zander, A.; Geworski, L.; Richter, M.; Ivancevic, V.; Munz, D.L.; Muehler, M.; Ditt, H.

    2002-01-01

    Aim: Validation of a new image fusion tool with regard to handling, application in a clinical environment and fusion precision under different acquisition and registration settings. Methods: The image fusion tool investigated allows fusion of imaging modalities such as PET, CT, MRI. In order to investigate fusion precision, PET and MRI measurements were performed using a cylinder and a body contour-shaped phantom. The cylinder phantom (diameter and length 20 cm each) contained spheres (10 to 40 mm in diameter) which represented 'cold' or 'hot' lesions in PET measurements. The body contour-shaped phantom was equipped with a heart model containing two 'cold' lesions. Measurements were done with and without four external markers placed on the phantoms. The markers were made of plexiglass (2 cm diameter and 1 cm thickness) and contained a Ga-Ge-68 core for PET and Vitamin E for MRI measurements. Comparison of fusion results with and without markers was done visually and by computer assistance. This algorithm was applied to the different fusion parameters and phantoms. Results: Image fusion of PET and MRI data without external markers yielded a measured error of 0 resulting in a shift at the matrix border of 1.5 mm. Conclusion: The image fusion tool investigated allows a precise fusion of PET and MRI data with a translation error acceptable for clinical use. The error is further minimized by using external markers, especially in the case of missing anatomical orientation. Using PET the registration error depends almost only on the low resolution of the data
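
    The marker-based registration underlying such a fusion tool can be sketched as a rigid (Kabsch/Procrustes) fit to paired fiducial coordinates; the marker positions below are synthetic, and this is a generic formulation rather than the investigated tool's own algorithm.

    ```python
    # Marker-based rigid registration sketch (Kabsch/Procrustes fit of
    # rotation R and translation t to paired fiducials; synthetic data).
    import numpy as np

    def rigid_from_markers(P, Q):
        """Find R, t minimizing ||R @ p_i + t - q_i|| over marker pairs."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, Q.mean(0) - R @ P.mean(0)

    P = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
    theta = np.deg2rad(15)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])
    R, t = rigid_from_markers(P, Q)
    print("max rotation error:", np.abs(R - R_true).max())
    ```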

  7. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users, by investing much less money, can make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics, and consequently 'image processing' has emerged as a separate, independent field. Image processing is used in a number of disciplines. In the medical sciences, it is used to construct pseudo-color images from computer aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colors in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays for imperfections. Photographers use image processing for various enhancements that are difficult to achieve in a conventional darkroom. (author)

  8. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    This article discusses the current capabilities of automated processing of image data, using Agisoft PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or more often on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (capturing large groups of photos) are usually performed in order to eliminate obscured areas and to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used to generate dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, with sequential execution of the processing steps at predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexell camera and for a block of images acquired by a non-metric UAV system.

  9. TECHNOLOGIES OF BRAIN IMAGES PROCESSING

    Directory of Open Access Journals (Sweden)

    O.M. Klyuchko

    2017-12-01

    The purpose of the present research was to analyze modern methods of processing biological images implemented before storage in databases for biotechnological purposes; the databases were then incorporated into web-based digital systems. Examples of such information systems are described for two levels of biological material organization: databases for storing data from histological analysis and from the whole brain. Methods of neuroimage processing for an electronic brain atlas are considered. It is shown that certain pathological features can be revealed by histological image processing; several medical diagnostic techniques (for certain brain pathologies, etc.) as well as a few biotechnological methods are based on such effects. Algorithms of image processing are suggested. The electronic brain atlas is described in detail, conveniently for professionals in different fields, along with approaches to brain atlas elaboration, a "composite" scheme for large deformations, and several methods of mathematical image processing.

  10. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  11. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  12. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)
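
    A toy sketch of the maximum entropy idea: restore a blurred 1-D signal by trading a chi-squared data term against an entropy term while keeping the solution positive. This didactic gradient scheme and all its parameters are assumptions, not a production MEM solver.

    ```python
    # Toy maximum-entropy restoration of a blurred 1-D signal: trade a
    # chi-squared data term against an entropy term, keeping the solution
    # positive (didactic gradient scheme, not a production MEM solver).
    import numpy as np

    def mem_restore(data, A, alpha=0.1, lr=0.05, n_iter=2000):
        f = np.full(A.shape[1], data.mean())     # flat positive start
        for _ in range(n_iter):
            grad_chi2 = A.T @ (A @ f - data)     # gradient of 0.5*||Af-d||^2
            grad_neg_entropy = np.log(f) + 1     # gradient of sum f log f
            f -= lr * (grad_chi2 + alpha * grad_neg_entropy)
            f = np.clip(f, 1e-8, None)           # enforce positivity
        return f

    n = 40
    truth = np.zeros(n)
    truth[[10, 25]] = [1.0, 0.6]                 # two point sources
    kernel = [0.25, 0.5, 0.25]
    A = np.array([[kernel[j - i + 1] if 0 <= j - i + 1 < 3 else 0.0
                   for j in range(n)] for i in range(n)])   # blur matrix
    data = A @ truth + np.random.default_rng(6).normal(0, 0.01, n)
    print("two largest peaks at:",
          sorted(np.argsort(mem_restore(data, A))[-2:]))
    ```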

  13. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly, to use this knowledge to develop image processing … multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using … methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate

  14. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay explains the present state of CT image processing technology, covering recognition, acquisition and visualization for computer-assisted diagnosis (CAD) and surgery (CAS), and offers a view of the future. Medical image processing has a history that runs from the discovery of X-rays and their application to diagnostic radiography, through their combination with the computer for CT and multi-detector row CT, to 3D/4D images for CAD and CAS. CAD is performed based on recognition of the normal anatomical structure of the human body, detection of possible abnormal lesions, and visualization of their numerical figures as images. Actual instances of CAD images are presented here for the chest (lung cancer), the abdomen (colorectal cancer) and a future body atlas (models of organs and diseases for imaging), part of a recent national project: computed anatomy. CAS involves surgical planning technology based on 3D images and navigation of the actual procedure and of endoscopy. As guidance for those beginning in image processing technology, the essay describes the national and international community, such as related academic societies, regularly held congresses, textbooks and workshops, and topics in the field like the computed anatomy of an individual patient for CAD and CAS, data security, and standardization. In the authors' view, future preventive medicine will be based on imaging technology, e.g., ultimately daily-life CAD of individuals to monitor routine physical condition, as exemplified by today's body thermometer and home sphygmomanometer. (T.T.)

  15. Streamlining Compliance Validation Through Automation Processes

    Science.gov (United States)

    2014-03-01

    ...enemy. Of course, a common standard for DoD security personnel to write and share compliance validation content would prevent duplicate work and aid in... process and consume much of the SCAP content available. Finally, it is free and easy to install as part of the Apache/MySQL/PHP (AMP) [37

  16. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective denoising method for reducing speckle, but it also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are therefore named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visually interesting quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
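
    A simplified relative of the first fold (wavelet-domain soft thresholding, global rather than block-adaptive, using PyWavelets) can be sketched as follows; the adaptive NDF block fusion itself is not reproduced, and the test image and threshold are synthetic.

    ```python
    # Global wavelet soft-thresholding denoise with PyWavelets, a simplified
    # relative of the block-thresholding stage above; the adaptive NDF block
    # fusion is not reproduced and the data are synthetic.
    import numpy as np
    import pywt

    def wavelet_soft_denoise(img, wavelet="db4", level=2, thresh=0.1):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        shrunk = [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl)
                  for lvl in coeffs[1:]]
        return pywt.waverec2([coeffs[0]] + shrunk, wavelet)

    rng = np.random.default_rng(7)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 1.0
    noisy = clean * np.exp(rng.normal(0, 0.2, clean.shape))  # speckle-like
    denoised = wavelet_soft_denoise(noisy)[:64, :64]         # crop padding
    print("MSE noisy:   ", float(np.mean((noisy - clean) ** 2)))
    print("MSE denoised:", float(np.mean((denoised - clean) ** 2)))
    ```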

  17. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
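
    The core idea, propagating process-parameter variation through a chain of unit-operation models by Monte Carlo simulation to estimate the probability of an out-of-specification event, can be sketched generically. The transfer functions, parameter distributions, and specification limit below are invented for illustration and are not the IPM from the study.

```python
# Generic Monte Carlo sketch: propagate parameter variation through two
# stacked unit operations and estimate the out-of-specification (OOS) rate.
# The transfer functions, parameter distributions, and spec limit below
# are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sampled process parameters (hypothetical distributions).
ph = rng.normal(7.0, 0.1, n)          # fermentation pH
load = rng.normal(25.0, 2.0, n)       # chromatography column load [g/L]

# Unit operation 1: fermentation -> intermediate purity [%].
purity_1 = 90.0 - 15.0 * (ph - 7.0) ** 2 + rng.normal(0, 0.5, n)

# Unit operation 2: capture chromatography; the output CQA depends on the
# intermediate purity AND its own parameter (interaction across units).
cqa = purity_1 + 0.2 * (25.0 - load) + rng.normal(0, 0.3, n)

lsl = 88.0                            # lower specification limit (assumed)
oos_rate = np.mean(cqa < lsl)
print(f"estimated OOS probability: {oos_rate:.4%}")
```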

  18. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
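
    The min-sum difference equations mentioned here can be made concrete with the classic two-pass chamfer recursion, in which each pixel's distance is the minimum over already-visited neighbors of (neighbor distance + step cost). The sketch below assumes unit (city-block) step costs and is a generic illustration rather than a scheme from the paper.

```python
# Illustrative two-pass min-sum recursion computing a city-block distance
# transform of a binary image (1 = feature, 0 = background). Each pixel's
# value is the min over causal neighbors of (neighbor + 1), run forward then
# backward -- a discrete min-sum difference equation.
import numpy as np

def chamfer_distance(binary):
    h, w = binary.shape
    inf = h + w                          # effective infinity
    d = np.where(binary > 0, 0, inf).astype(int)
    for i in range(h):                   # forward raster pass
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):       # backward raster pass
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```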

  19. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  20. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, and the accompanying software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both frequency domain and spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal to noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.

  1. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine the use of different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transformation, to detect and fill cracks, and their applications in different areas such as nanotechnology and biotechnology. In the third subject, we attempt to produce 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++ programming. The fourth subject covers the use of image warping methods for finding the depth of 2-D images using the affine transformation, bilinear transformation, projective mapping, mosaic warping and similarity transformation. More details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including the methods for creating Bezier curves and surfaces with unknown distribution, using only control points. At the end of the discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid. This depends on finding the Bezier curve and creating a family of curves (a surface), then filling in between to obtain the solid form. Another part of this subject is concerned with building 3-D geometric models from physical objects using image-based techniques; the advantage of image techniques is that they require no expensive equipment. We use NURBS, subdivision surfaces and meshes for finding the depth of any image with one still view or a 2-D image. The quality of filtering depends on the way the data is incorporated into the model. The data should be treated with
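
    As an aside on the first subject, an adaptive Wiener filter is available off the shelf; the fragment below is a generic illustration using scipy.signal.wiener on a synthetic noisy test pattern, not the thesis's own implementation.

```python
# Minimal example of adaptive Wiener filtering on a noisy image, shown as a
# generic illustration of the filter discussed (not the thesis's own code).
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0                 # a simple test pattern
noisy = clean + rng.normal(0, 0.2, clean.shape)

# Local-statistics Wiener filter over a 5x5 neighborhood.
denoised = wiener(noisy, mysize=5)
print("residual noise std before/after:",
      (noisy - clean).std().round(3), (denoised - clean).std().round(3))
```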

  2. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  3. Validation of measured friction by process tests

    DEFF Research Database (Denmark)

    Eriksen, Morten; Henningsen, Poul; Tan, Xincai

    The objective of sub-task 3.3 is to evaluate under actual process conditions the friction formulations determined by simulative testing. As regards task 3.3 the following tests have been used according to the original project plan: 1. standard ring test and 2. double cup extrusion test. The task has, however, been extended to include a number of newly developed process tests: 3. forward rod extrusion test, 4. special ring test at low normal pressure, 5. spike test (especially developed for warm and hot forging). Validation of the measured friction values in cold forming from sub-task 3.1 has been made with forward rod extrusion, and very good agreement was obtained between the measured friction values in simulative testing and process testing.

  4. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  5. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  6. Mathematical problems in image processing

    International Nuclear Information System (INIS)

    Chidume, C.E.

    2000-01-01

    This is the second volume of a new series of lecture notes of the Abdus Salam International Centre for Theoretical Physics. This volume contains the lecture notes given by A. Chambolle during the School on Mathematical Problems in Image Processing. The school consisted of two weeks of lecture courses and one week of conference

  7. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (sc1) to obtain down-scaled images, the down-scaled images are subjected to motion-compensated processing (ME UPC) to obtain motion-compensated images, the motion-compensated images are up-scaled (sc2) to obtain up-scaled

  8. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real-time system called the Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of: a highly sensitive, high resolution TV camera driven by a custom-made scanner; a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object; a real-time corrector to perform real-time correction of dark current, shading distortion and field intensity fluctuation; a real-time filter for increasing the image signal-to-noise ratio; a video recording unit to allow recording on commercially available products; and a pseudocolor monitor to allow monitoring by means of CRTs in standard TV scanning. The TV camera and the TV camera deflection controller utilized for producing still images can also be applied in this case. The block diagram of the real-time corrector is shown and its performance is explained. Linear filters and ranked order filters were developed. (K.I.)

  9. Using fuzzy logic in image processing

    International Nuclear Information System (INIS)

    Ashabrawy, M.A.F.

    2002-01-01

    Due to the unavoidable merger of computers and mathematics, signal processing in general and image processing in particular have greatly improved and advanced. Signal processing deals with the processing of any signal data for use by a computer, while image processing deals with all kinds of images (just images). Image processing involves the manipulation of image data for better appearance and viewing by people; consequently, it is a rapidly growing and exciting field to be involved in today. This work takes an applications-oriented approach to image processing. The applications are: the maps and documents of the first Egyptian research reactor (ETRR-1), X-ray medical images, and fingerprint images. Since filters generally work on continuous ranges rather than discrete values, fuzzy logic techniques are more convenient. These techniques are powerful in image processing and can deal with one-dimensional (1-D) as well as two-dimensional (2-D) images.
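
    An elementary fuzzy technique of the kind such work builds on is contrast intensification: fuzzify the gray levels into memberships, apply an intensification (INT) operator, and defuzzify. The sketch below follows the classic Pal-King-style scheme as a generic illustration; it is not taken from this report.

```python
# Generic fuzzy contrast intensification (Pal-King style), shown as an
# illustration of fuzzy image processing; parameters are assumptions.
import numpy as np

def fuzzy_intensify(img, passes=1):
    g = img.astype(float)
    mu = (g - g.min()) / (g.max() - g.min() + 1e-12)   # fuzzification
    for _ in range(passes):                            # INT operator
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return (mu * 255).astype(np.uint8)                 # defuzzification
```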

  10. Solar Energy Potential Assessment on Rooftops and Facades in Large Built Environments Based on LiDAR Data, Image Processing, and Cloud Computing. Methodological Background, Application, and Validation in Geneva (Solar Cadaster

    Directory of Open Access Journals (Sweden)

    Gilles Desthieux

    2018-03-01

    Full Text Available The paper presents the core methodology for assessing solar radiation and energy production on building rooftops and vertical facades (still rarely considered) of the inner city. This integrated tool is based on the use of LiDAR, 2D and 3D cadastral data. Together with solar radiation and astronomical models, it calculates the global irradiance for a set of points located on roofs, ground, and facades. Although the tool treats roofs, ground, and facades simultaneously, different methods of shadow casting are applied: shadow casting on rooftops is based on image processing techniques, whereas the assessment on facades involves first creating and interpolating points along the facades and then implementing a point-by-point shadow casting routine. The paper is structured in five parts: (i) state of the art on the use of 3D GIS and automated processes in assessing solar radiation in the built environment; (ii) overview of the methodological framework used in the paper; (iii) detailed presentation of the method proposed for solar modeling and shadow casting, in particular introducing an innovative approach for modeling the sky view factor (SVF); (iv) demonstration of the solar model introduced in this paper through applications to Geneva's building roofs (solar cadaster) and facades; (v) validation of the solar model at some Geneva spots, focusing especially on two distinct comparisons: solar model versus fisheye catchments on partially inclined surfaces (roof component), and solar model versus the photovoltaic simulation tool PVSyst on vertical surfaces (facades). Concerning the roof component, validation results emphasize global sensitivity to the density of light sources on the sky vault used to model the SVF. The low-density sky model with 145 light sources gives satisfying results, especially when processing solar cadasters in large urban areas, thus saving computation time. In the case of building facades, introducing weighting factor
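
    The SVF idea, discretizing the sky vault into a finite set of light sources and testing each for obstruction, can be sketched generically. The ray-marching geometry and discretization below are simplified assumptions, not the paper's model (which uses, e.g., 145 light sources over LiDAR-derived surfaces).

```python
# Simplified sky view factor (SVF) estimate on a height grid: for each of a
# set of sky directions, march along the ray and check whether the sky is
# blocked by surrounding terrain/buildings. Geometry here is illustrative.
import numpy as np

def svf(dsm, i, j, cell=1.0, n_azimuth=16, n_alt=8, max_dist=100.0):
    blocked = 0
    total = n_azimuth * n_alt
    for az in np.linspace(0, 2 * np.pi, n_azimuth, endpoint=False):
        for alt in np.linspace(0.05, np.pi / 2, n_alt, endpoint=False):
            for r in np.arange(cell, max_dist, cell):   # ray marching
                ii = int(round(i + r * np.sin(az) / cell))
                jj = int(round(j + r * np.cos(az) / cell))
                if not (0 <= ii < dsm.shape[0] and 0 <= jj < dsm.shape[1]):
                    break
                if dsm[ii, jj] - dsm[i, j] > r * np.tan(alt):
                    blocked += 1                        # sky dir obstructed
                    break
    return 1.0 - blocked / total
```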

  11. IV&V Project Assessment Process Validation

    Science.gov (United States)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and the IV&V findings, correlating IV&V process validation to and from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  12. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS).

  13. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    Energy Technology Data Exchange (ETDEWEB)

    SEXTON, R.A.

    2000-03-13

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  14. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    International Nuclear Information System (INIS)

    SEXTON, R.A.

    2000-01-01

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation

  15. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Image quality dependence on image processing software in computed radiography. ... Agfa CR readers use MUSICA software, and an upgrade with significantly different image ...

  16. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Yutao Wu; Daoliang Li; Zhenbo Li; Wenzhu Yang

    2014-01-01

    In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image blocking, image pre-decision, image background extra...

  17. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and the gray scale of the transformed images is inverted; then the whole image is divided into several blocks. Thereafter, image pre-decision judges which image blocks contain target foreign fiber images. The blocks that possibly contain target images are then segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods. In addition, the method connects target images that have fractures, thereby producing an intact and clear foreign fiber target image.
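
    The block/pre-decision/Otsu pipeline can be illustrated compactly. The following sketch (pure NumPy, a variance-based pre-decision, and a per-block Otsu threshold on 8-bit blocks) is an assumed, simplified rendition, not the authors' implementation.

```python
# Simplified block-wise segmentation sketch: pre-decide blocks by variance,
# then apply an Otsu threshold to candidate blocks. Assumes an 8-bit gray
# image; the block size and variance cutoff are illustrative parameters.
import numpy as np

def otsu_threshold(gray):
    # Classic Otsu: maximize between-class variance over a 256-bin histogram.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # class-0 probability
    m = np.cumsum(p * np.arange(256))      # cumulative mean
    mg = m[-1]                             # global mean
    between = (mg * w - m) ** 2 / (w * (1 - w) + 1e-12)
    return int(np.argmax(between))

def segment_blocks(gray, block=64, var_min=50.0):
    out = np.zeros_like(gray, dtype=np.uint8)
    for i in range(0, gray.shape[0], block):
        for j in range(0, gray.shape[1], block):
            blk = gray[i:i+block, j:j+block]
            if blk.var() < var_min:        # pre-decision: skip flat blocks
                continue
            t = otsu_threshold(blk)
            out[i:i+block, j:j+block] = (blk > t).astype(np.uint8) * 255
    return out
```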

  18. Local figure-ground cues are valid for natural images.

    Science.gov (United States)

    Fowlkes, Charless C; Martin, David R; Malik, Jitendra

    2007-06-08

    Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.

  19. Biomedical signal and image processing.

    Science.gov (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer machines or interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring and others do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  20. Basic strategies for valid cytometry using image analysis

    NARCIS (Netherlands)

    Jonker, A.; Geerts, W. J.; Chieco, P.; Moorman, A. F.; Lamers, W. H.; van Noorden, C. J.

    1997-01-01

    The present review provides a starting point for setting up an image analysis system for quantitative densitometry and absorbance or fluorescence measurements in cell preparations, tissue sections or gels. Guidelines for instrumental settings that are essential for the valid application of image

  1. Review of Biomedical Image Processing

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2011-11-01

    Full Text Available This article is a review of the book 'Biomedical Image Processing' by Thomas M. Deserno, published by Springer-Verlag. Salient information that will be useful in deciding whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, is presented in the review. This includes information about the book details, a summary, the suitability of the text in course and research work, the framework of the book, its specific content, and conclusions.

  2. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    A method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercially available image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were made on a trial basis, and their accuracy and problematic points were tested using various photographs on which judgement had already been performed. Depending on the state of the objects photographed and the conditions of inspection, judgement accuracies from 100% to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs made by the Japan Cast Steel Association. In nondestructive inspection by radiography, the number and size of the defect images in photographs are visually judged, the results are collated with the standard, and the quality is decided. Recently, the technology of image processing with personal computers has advanced; therefore, by utilizing this technology, the automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency and realize labor saving. (K.I.)

  3. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of the temperature distribution of an object's surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body's surface can contribute to the diagnosis and follow-up of certain disorders. Although a thermographic image can give a fairly good visual estimation of distribution homogeneity and of temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI), and a number of indices have been developed by researchers to that end. In this study a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing the temperature distribution. Analysis and comparison of these indices allow the temperature distribution pattern of the back trunk to be evaluated against that expected in subjects who are healthy with regard to spinal problems.
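
    First-order statistical ROI indices of this kind take only a few lines to compute. The sketch below (mean, standard deviation, skewness, and a simple left-right asymmetry index for two symmetric ROIs) is a generic illustration; the exact index definitions used in the study may differ.

```python
# Generic first-order statistical indices for two symmetric thermogram ROIs,
# plus a simple asymmetry index; the index definitions are assumptions.
import numpy as np
from scipy.stats import skew

def roi_indices(temps):
    t = np.asarray(temps, dtype=float).ravel()
    return {"mean": t.mean(), "std": t.std(), "skew": skew(t)}

def asymmetry(left_roi, right_roi):
    l, r = roi_indices(left_roi), roi_indices(right_roi)
    return abs(l["mean"] - r["mean"])      # e.g., mean temperature difference

left = np.random.default_rng(2).normal(33.0, 0.4, (40, 40))   # degrees C
right = np.random.default_rng(3).normal(33.5, 0.4, (40, 40))
print(roi_indices(left), asymmetry(left, right))
```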

  4. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  5. Radiology image orientation processing for workstation display

    Science.gov (United States)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in computed radiography (CR) readers. An automated radiology image orientation processor (RIOP) has been devised for determining the orientation of chest and abdomen images. In addition, chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.

  6. Digital Data Processing of Images

    African Journals Online (AJOL)

    be concerned with the image enhancement of scintigrams. Two applications of image ... obtained from scintigraphic equipment, image enhancement by computer was ... used as an example. ... Using video-tape display, areas of interest are ...

  7. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  8. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  9. Scilab and SIP for Image Processing

    OpenAIRE

    Fabbri, Ricardo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of Image Processing and Analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic ...

  10. AN OVERVIEW OF PHARMACEUTICAL PROCESS VALIDATION AND PROCESS CONTROL VARIABLES OF TABLETS MANUFACTURING PROCESSES IN INDUSTRY

    OpenAIRE

    Mahesh B. Wazade*, Sheelpriya R. Walde and Abhay M. Ittadwar

    2012-01-01

    Validation is an integral part of quality assurance; the product quality is derived from careful attention to a number of factors including selection of quality parts and materials, adequate product and manufacturing process design, control of the process variables, in-process and end-product testing. Recently validation has become one of the pharmaceutical industry’s most recognized and discussed subjects. It is a critical success factor in product approval and ongoing commercialization, fac...

  11. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Now image processing boards for personal computers, as well as image processing systems for workstations, have been developed and widely applied to medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides a computer system with vision capability, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to fields such as remote measurement, object recognition and decision-making in adverse environments, and analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are analysed with a view to proposing ideas for future applications. (Author)

  12. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system, structured around the Digital Equipment Corporation's PDP-8 minicomputer, which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views.

  13. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. This image processing program, for use with a charge-coupled device (CCD) or other mosaic imager, is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". It is used for robotics, image enhancement, image analysis and digital television.
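
    A common way to implement such compensation is to replace known defective pixels with the median of their neighborhood. The sketch below is a generic illustration under that assumption, not the program described; the bad-pixel map would come from sensor calibration.

```python
# Generic hotspot compensation: replace known-bad pixels with the median of
# their local neighborhood. The bad-pixel mask is assumed to come from a
# separate sensor calibration step.
import numpy as np
from scipy.ndimage import median_filter

def remove_hotspots(img, bad_pixel_mask, size=3):
    med = median_filter(img, size=size)    # median of each 3x3 neighborhood
    out = img.copy()
    out[bad_pixel_mask] = med[bad_pixel_mask]
    return out
```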

  14. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  15. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) validation of the source code with respect to the system requirements specification; (b) validation of the optimization algorithm, which is at the core of the registration system; and (c) automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing procedure for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.

  18. Mammography image assessment; validity and reliability of current scheme

    International Nuclear Information System (INIS)

    Hill, C.; Robinson, L.

    2015-01-01

    Mammographers currently score their own images according to criteria set out by Regional Quality Assurance. The criteria used are based on the 'Perfect, Good, Moderate, Inadequate' (PGMI) marking criteria established by the National Health Service Breast Screening Programme (NHSBSP) in their Quality Assurance Guidelines of 2006 [1]. This document discusses the validity and reliability of the current mammography image assessment scheme. Commencing with a critical review of the literature, this document sets out to highlight problems with the national approach to the use of marking schemes. The findings suggest that the 'PGMI' scheme is flawed in terms of reliability and validity and is not universally applied across the UK. There also appear to be differences in the schemes used by trainees and qualified mammographers. Initial recommendations are to be made in collaboration with colleagues within the National Health Service Breast Screening Programme (NHSBSP), Higher Education Centres, the College of Radiographers and the Royal College of Radiologists in order to identify a mammography image appraisal scheme that is fit for purpose. - Highlights: • Currently no robust evidence-based marking tools are in use for the assessment of images in mammography. • Is the current system valid, reliable and robust? • How can the current image assessment tool be improved? • Should students and qualified mammographers use the same tool? • What marking criteria are available for image assessment?

  19. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  20. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models.

  1. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  2. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  3. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

    Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used: optical density scan, histogram-equalized scan, linear histogram-based scan, linear histogram-independent scan, linear optical density scan, logarithmic scan, and power square root scan. The histogram distributions of the scanned images were plotted and the ranges of the gray scale were compared among the scan types. The scanned images were then transformed to the gray window by a palette-fitting method and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and processed by the Gray Scale Expansion (GSE) method. The patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distribution showed that minimum and maximum gray scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic method and the square root method, respectively. Out of 256 gray levels, only 7 to 30% of the steps were used. After expanding the gray scale to the full range, the contrast of the portal images improved. Experiments performed with patient images showed that improved identification of organs was achieved by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improved the image contrast of a linacgram. This indicates that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale. As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
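
    Gray scale expansion of this kind is essentially a linear contrast stretch of the occupied intensity range onto the full display range. A minimal generic sketch, not the study's own software:

```python
# Minimal gray scale expansion (linear contrast stretch): map the occupied
# intensity range [lo, hi] onto the full 8-bit display range.
import numpy as np

def gray_scale_expand(img, lo=None, hi=None):
    img = img.astype(float)
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    stretched = (img - lo) / max(hi - lo, 1e-12)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)
```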

  4. Image processing for medical diagnosis using CNN

    International Nuclear Information System (INIS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving accuracy both in the diagnosis procedure and in surgical operations. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images, such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.

  5. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of all the experimental results and analysis carried out on medical images of diabetes-related causes. The experimental investigations have been carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create an awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. This book is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  6. Validation of the Vanderbilt Holistic Face Processing Test

    OpenAIRE

    Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  7. Validation of the Vanderbilt Holistic Face Processing Test.

    OpenAIRE

    Chao-Chih Wang; Chao-Chih Wang; David Andrew Ross; Isabel Gauthier; Jennifer Joanna Richler

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  8. Organization of bubble chamber image processing

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1985-01-01

    A programme for bubble chamber image processing is described. The programme is written in FORTRAN, developed for the DEC-10 computer, and designed for the operation of the semi-automatic processing-measurement projects PUOS-2 and PUOS-4. Formalization of the image processing permits its use for different physical experiments.

  9. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    Science.gov (United States)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.

  10. An Applied Image Processing for Radiographic Testing

    International Nuclear Information System (INIS)

    Ratchason, Surasak; Tuammee, Sopida; Srisroal Anusara

    2005-10-01

    Applied image processing for radiographic testing (RT) is desirable because it reduces the time consumed and the cost of an inspection process that needs experienced workers, and it improves the inspection quality. This paper presents a primary study of image processing for RT films, namely welding films, and proposes an approach to determine the defects in weld images. The BMP image files are opened and processed by a computer program written in Borland C++. The software has five main methods: histogram, contrast enhancement, edge detection, image segmentation and image restoration. Each main method has several sub-methods as selectable options. The results showed that the software can effectively detect defects and that different methods suit different radiographic images. Furthermore, images improve further when two methods are combined.

  11. Image processing on the image with pixel noise bits removed

    Science.gov (United States)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
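
    Separating signal bits from noise bits amounts to zeroing the low-order bit planes before further processing. A minimal sketch, assuming the k lowest bits have been identified as noise by a prior analysis:

```python
# Minimal sketch: remove the k lowest (noise) bit planes from an integer
# image before enhancement; k would come from a prior noise analysis.
import numpy as np

def remove_noise_bits(img, k):
    mask = ~np.uint16((1 << k) - 1)        # keep only the high signal bits
    return img.astype(np.uint16) & mask

img = np.random.default_rng(4).integers(0, 4096, (8, 8)).astype(np.uint16)
print(remove_noise_bits(img, k=4)[:2])
```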

  12. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image degradation factors, such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted edge method, both in laboratory conditions and in the real picture environment, in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
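
    The degradation step, filtering the rendered image with a Gaussian approximation of the measured PSF, is direct to express; in the sketch below the sigma value is a placeholder for whatever the slanted-edge MTF measurement yields.

```python
# Sketch of PSF-matched degradation: blur a rendered image with a Gaussian
# whose sigma (in pixels) approximates the measured camera PSF. The sigma
# here is a placeholder, not a measured value.
import numpy as np
from scipy.ndimage import gaussian_filter

def match_psf(rendered, sigma_px=1.2):
    return gaussian_filter(rendered.astype(float), sigma=sigma_px)
```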

  13. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  14. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions of strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
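
    The descriptor idea can be illustrated by decomposing two full-field maps into a handful of low-order 2D Fourier coefficients and comparing those instead of the raw pixels. This generic sketch uses an FFT basis rather than Zernike moments and is not the authors' procedure.

```python
# Generic illustration of image decomposition for model validation: reduce
# two full-field strain maps to a few low-order 2D Fourier descriptors and
# compare those instead of ~10^5 pixels. FFT basis chosen for brevity.
import numpy as np

def fourier_descriptors(field, order=5):
    f = np.fft.fft2(field)
    return np.abs(f[:order, :order]).ravel()   # low-frequency magnitudes

def descriptor_discrepancy(measured, modeled, order=5):
    dm = fourier_descriptors(measured, order)
    dp = fourier_descriptors(modeled, order)
    return np.linalg.norm(dm - dp) / np.linalg.norm(dm)

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
measured = np.sin(2 * np.pi * x) * y
modeled = np.sin(2 * np.pi * x) * y * 1.02      # model slightly off
print(f"relative discrepancy: {descriptor_discrepancy(measured, modeled):.3f}")
```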

  15. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost-effective environmental data collection process should produce analytical data which meet regulatory and program-specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested; the verification process determines whether all the requirements were met. Validation is more complicated than verification: it attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consist of a sample result with an associated error. Therefore, radiochemical validation is different from, and more quantitative than, what is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainties that can be statistically compared to identify the significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish between the two operations.
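
    A minimal sketch of the kind of quantitative comparison this permits: treating each result as a value with a 1-sigma uncertainty, the significance of a difference can be tested with a simple z-score. The 95% critical value and the example numbers are illustrative, not taken from any cited standard:

```python
import math

def results_differ(x1: float, u1: float, x2: float, u2: float,
                   z_crit: float = 1.96) -> bool:
    """Test whether two results, each reported as value +/- 1-sigma
    uncertainty, differ significantly. A sketch of the comparison the
    abstract describes, not a procedure from any particular standard."""
    z = abs(x1 - x2) / math.sqrt(u1**2 + u2**2)
    return z > z_crit  # True: significant at roughly the 95% level

# Hypothetical example, activities in Bq/kg:
# results_differ(12.4, 0.9, 14.8, 1.1)  # -> False (z ~ 1.7)
```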

  16. Image processing in nondestructive testing

    International Nuclear Information System (INIS)

    Janney, D.H.

    1976-01-01

    In those applications where the principal desire is for higher throughput, the problem often becomes one of automatic feature extraction and mensuration. Classically these problems can be approached by means of either an optical image processor or analysis in a digital computer. Optical methods have the advantages of low cost and very high speed, but are often inflexible and are sometimes very difficult to implement due to practical problems. Computerized methods can be very flexible and can use very powerful mathematical techniques, but are usually difficult to implement for very high throughput. Recent technological developments in microprocessors and in electronic analog image analyzers may furnish the key to resolving the shortcomings of the two classical methods of image analysis.

  17. How Digital Image Processing Became Really Easy

    Science.gov (United States)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of, or analyzing the contents of, images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.

  18. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.
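
    For the isosurface step mentioned above, scikit-image ships a Marching Cubes implementation; a toy example on a synthetic spherical field (standing in for a flow quantity such as vorticity magnitude) looks like this:

```python
import numpy as np
from skimage import measure

# Toy scalar field (distance from the origin, i.e. nested spheres) standing in
# for flow data; extract the 0.5-isosurface with Marching Cubes.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = np.sqrt(x**2 + y**2 + z**2)

verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles on the isosurface")
```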

  19. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... and spatial co-ordinates into discrete components. The mathematical concepts involved are sampling and transform theory. Two-dimensional transforms are used for image enhancement, restoration, encoding and description. The main objective of the image...

  20. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is being widely used in the medical field and has already become very important, especially when used for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are first explained through several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, it is pointed out that medical file systems such as IS and C (Image Save and Carry) have potential for formulating knowledge, which is indispensable for intelligent image processing. The paper concludes by summarizing the advantages of simulated annealing. (author)
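
    As a generic illustration of the method the abstract advocates (not the authors' reconstruction code), a simulated annealing loop only needs an energy function; priors, the "formulated intelligence", enter as extra energy terms. The projection operator A, the measured projections and the smoothness term in the commented line are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(x0, energy, n_iter=20000, t0=1.0, cooling=0.9995):
    """Generic simulated annealing loop for minimizing an energy function."""
    x, e, t = x0.copy(), energy(x0), t0
    for _ in range(n_iter):
        cand = x.copy()
        i = rng.integers(cand.size)       # perturb one pixel/voxel at random
        cand.flat[i] += rng.normal(0.0, 0.1)
        e_new = energy(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            x, e = cand, e_new
        t *= cooling                      # geometric cooling schedule
    return x

# Hypothetical reconstruction energy: data mismatch plus a smoothness prior,
# with A (projection operator), projections and smoothness all placeholders.
# energy = lambda img: np.sum((A @ img - projections)**2) + lam * smoothness(img)
```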

  1. Image Biomarkers and Precision Medicine: need for validation

    International Nuclear Information System (INIS)

    Marti-Bonmati, L.; Alberich-Bayarri, A.; Garcia Castro, F.

    2016-01-01

    Personalized medicine aims to improve the diagnosis, classification and choice of the best treatment for a particular patient. Today, radiologists are challenged to translate new biological discoveries, the different mechanisms of disease and advances in preclinical research into a clinical reality through patients, images and their associated parameters. In this article we show how digital medical imaging and computational data processing extract numerous quantitative parameters from the obtained images, as virtual biopsies. To be implemented in clinical practice, biomarkers should provide useful and relevant information, improving diagnostic, therapeutic and monitoring processes for the benefit of patients. (Author)

  2. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  3. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process so that image analysis on mobile devices improves significantly. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated; it is triggered depending on the alignment of the device and the object as well as on the image quality that can be achieved under consideration of motion and environmental effects.
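
    The abstract does not spell out the quality metric used for triggering; as one hedged possibility, the automated capture step could gate on a standard sharpness proxy such as the variance of the Laplacian, a common blur measure. The metric and threshold below are assumptions, not the paper's method:

```python
import numpy as np
from scipy.ndimage import laplace

def sharp_enough(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Gate automated capture on a simple blur measure: the variance of the
    Laplacian. Both the metric and the threshold are assumptions here; the
    paper combines image quality with pose and motion information."""
    return float(laplace(gray.astype(np.float64)).var()) > threshold
```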

  4. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  5. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral-line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
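
    ARTIP's own stage structure is not reproduced in the record; as a rough illustration of the sequence the abstract lists, a minimal calibration-and-imaging script using modular CASA 6 tasks might look as follows. The measurement set, calibrator fields and table names are hypothetical, and real pipelines set many more parameters:

```python
# Illustrative sketch only, not ARTIP itself. Task names come from modular
# CASA 6 (casatasks); filenames and calibrator fields are hypothetical.
from casatasks import flagdata, setjy, bandpass, gaincal, applycal, tclean

vis = "obs.ms"
flagdata(vis=vis, mode="tfcrop")                        # automated RFI flagging
setjy(vis=vis, field="3C147")                           # set the flux scale
bandpass(vis=vis, caltable="bp.cal", field="3C147")     # bandpass calibration
gaincal(vis=vis, caltable="ph.cal", field="J1331+305")  # phase calibration
applycal(vis=vis, gaintable=["bp.cal", "ph.cal"])       # apply the solutions
tclean(vis=vis, imagename="target_cont", specmode="mfs")  # continuum imaging
```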

  6. On some applications of diffusion processes for image processing

    International Nuclear Information System (INIS)

    Morfu, S.

    2009-01-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes ruled by the Fisher equation allow contrast enhancement and noise filtering but produce a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering while preserving the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
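
    A minimal sketch of the classical Perona-Malik scheme referred to above (not the authors' combined algorithm), assuming a grayscale image as a NumPy array; the edge-stopping function g makes the diffusion coefficient vanish across strong gradients, which is what preserves edges:

```python
import numpy as np

def perona_malik(img: np.ndarray, n_iter: int = 50,
                 kappa: float = 15.0, lam: float = 0.2) -> np.ndarray:
    """Anisotropic (Perona-Malik) diffusion: noise is smoothed while edges
    survive. Periodic boundaries (np.roll) are used for brevity; lam <= 0.25
    keeps the explicit scheme stable."""
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```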

  7. Microsoft Visio 2013 business process diagramming and validation

    CERN Document Server

    Parker, David

    2013-01-01

    Microsoft Visio 2013 Business Process Diagramming and Validation provides a comprehensive and practical tutorial including example code and demonstrations for creating validation rules, writing ShapeSheet formulae, and much more. If you are a Microsoft Visio 2013 Professional Edition power user or developer who wants to get to grips with both the essential features of Visio 2013 and the validation rules in this edition, then this book is for you. A working knowledge of Microsoft Visio and optionally .NET for the add-on code is required, though previous knowledge of business process diagramming...

  8. The Groningen image processing system

    International Nuclear Information System (INIS)

    Allen, R.J.; Ekers, R.D.; Terlouw, J.P.

    1985-01-01

    This paper describes an interactive, integrated software and hardware computer system for the reduction and analysis of astronomical images. A short historical introduction is presented before some examples of the astronomical data currently handled by the system are shown. A description is given of the present hardware and software structure. The system is illustrated by describing its appearance to the user, to the applications programmer, and to the system manager. Some quantitative information on the size and cost of the system is given, and its good and bad features are discussed.

  9. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  10. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared during the past few years, and a good many studies have contributed to dwarfing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes less than 50 kilograms, making them slightly larger or smaller than common miniature refrigerators. However, the optical system design can hardly be perfect due to the satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on a microsatellite cannot meet the application need; spatial resolution is the key problem. As for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super-resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, joining pan-sharpening and super-resolution techniques to deal with the spatial resolution shortcoming of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
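
    The CX6-02 fusion scheme itself is not given in the abstract; as a baseline illustration of pan-sharpening, the classical Brovey transform rescales each multispectral band by the ratio of the panchromatic image to the band mean:

```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Classical Brovey pan-sharpening: a textbook baseline, not the fusion
    scheme of the CX6-02 pipeline.

    ms:  (H, W, B) multispectral image already resampled to the pan grid
    pan: (H, W) panchromatic image
    """
    intensity = ms.mean(axis=2)           # crude intensity from the MS bands
    ratio = pan / (intensity + eps)       # inject the pan detail per pixel
    return ms * ratio[..., None]
```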

  11. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana

    2007-01-01

    Full Text Available It is often noted that celebrity endorsement advertising has been recognized as "a ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable consumer reactions, that is, a higher level of attention to the advertising messages, better recall of the message and the brand name, and more favorable evaluation of and purchasing intentions toward the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Marketers, led by the belief that celebrities are effective ambassadors in building a positive brand or company image and improving competitive position, therefore invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, along with recommendations for a more effective use of this kind of advertising.

  12. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  13. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  14. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  15. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists who...

  16. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, dynamic-range compression, and neon, diffuse and emboss effects have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied, and some smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied to face and character recognition. (author)
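
    Two of the enhancement operations named above are nearly one-liners on 8-bit data; a minimal NumPy sketch (the percentile limits are illustrative, not taken from the record):

```python
import numpy as np

def negative(img: np.ndarray) -> np.ndarray:
    """Negative imaging for 8-bit data: invert every intensity."""
    return 255 - img

def contrast_stretch(img: np.ndarray, lo_pct: float = 2,
                     hi_pct: float = 98) -> np.ndarray:
    """Linear contrast stretching between two intensity percentiles."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```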

  17. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  18. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases of abnormal synovial processes. The abnormalities include Baker cysts, semimembranosus bursitis, chronic shoulder bursitis, peroneal tendon ganglion cysts, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining its extent and, to a limited degree, characterizing its makeup.

  19. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  20. Morphology and probability in image processing

    International Nuclear Information System (INIS)

    Fabbri, A.G.

    1985-01-01

    The author presents an analysis of some concepts which relate morphological attributes of digital objects to statistically meaningful measures. Some elementary transformations of binary images are described, and examples of applications are drawn from the geological and image analysis domains. Some of the morphological models applicable in astronomy are discussed. It is shown that the development of new spatially oriented computers leads to more extensive applications of image processing in the geosciences.
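
    The elementary binary transformations mentioned here are directly available in SciPy; a toy example, with pixel counts that follow from the default cross-shaped structuring element:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

# Elementary transformations of a binary image, of the kind the record
# relates to statistically meaningful measures of digital objects.
obj = np.zeros((7, 7), dtype=bool)
obj[2:5, 2:5] = True                       # a 3x3 square "object"

eroded = binary_erosion(obj)               # shrinks the object
dilated = binary_dilation(obj)             # grows the object
boundary = dilated & ~eroded               # a simple morphological boundary

print(obj.sum(), eroded.sum(), dilated.sum())  # 9 1 21 with the cross element
```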

  1. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. An image inversion can also be obtained without reconfiguration of the network, whereas a gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.

  2. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.

    AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017.

    The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. [Collins et al. 2017]

    Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.

    Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.

    Some features of AstroImageJ (as reported by Astrobites):
    - Image calibration: generate master flat, dark, and bias frames
    - Image arithmetic: combine images via subtraction, addition, division, multiplication, etc.
    - Stack editing: easily perform operations on a series of images
    - Image stabilization and image alignment features
    - Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates
    - WCS coordinates: determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net
    - Macro and plugin support: write your own macros
    - Multi-aperture photometry

  3. Japanese version of cutaneous body image scale: translation and validation.

    Science.gov (United States)

    Higaki, Yuko; Watanabe, Ikuko; Masaki, Tomoko; Kamo, Toshiko; Kawashima, Makoto; Satoh, Toshihiko; Saitoh, Shiroh; Nohara, Michiko; Gupta, Madhulika A

    2009-09-01

    Cutaneous body image, defined as an individual's mental perception of the appearance of their skin, hair and nails, is an important psychodermatological element in skin diseases. To measure individuals' cutaneous body image, a practical and accurate instrument is necessary. In this study, we translated the Cutaneous Body Image Scale (CBIS), a 7-item instrument originally created by Gupta et al. in 2004, into Japanese using a forward- and back-translation method, and evaluated the reliability and validity of the instrument by psychometric tests. A total of 298 healthy adults (64 men and 234 women, aged 28.9 +/- 9.9 years) and 165 dermatology patients (56.7% eczema/dermatitis, 9.8% acne, 7.5% alopecia, 6.9% psoriasis, 19.1% skin tumor/fleck/other) (30 men and 135 women, aged 37.9 +/- 15.2 years) responded to the Japanese version of the CBIS. The internal-consistency reliability of the instrument was high (Cronbach's alpha: healthy adults 0.88, patients 0.84), and the measure demonstrated good test-retest reliability (healthy adults gamma = 0.92). CBIS scores correlated with the "emotions" and "global" scores of Skindex-16 in healthy adults (gamma = -0.397 and -0.373, respectively) and in patients (gamma = -0.431 and -0.38, respectively). A stepwise multiple regression analysis revealed that an emotional aspect of skin-condition-related quality of life was the best predictor of cutaneous body image in both healthy adults and patients (beta = -0.31 and -0.41, respectively), followed by "body dissatisfaction" (beta = -0.17 and -0.23, respectively). Adjusted R^2 was 0.246 in healthy adults and 0.264 in patients. These results were consistent with those for the original CBIS and suggest that the Japanese version of the CBIS is a reliable and valid instrument to measure the cutaneous body image of Japanese adults as well as dermatology patients.

  4. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin-colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries.
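
    The first step, the RGB-to-YCbCr transform, is standard; a sketch using the full-range (JPEG-style) ITU-R BT.601 coefficients, which separates luma from the chroma channels used for skin-colour modeling:

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Full-range (JPEG-style) ITU-R BT.601 RGB -> YCbCr conversion."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y  =       0.299    * r + 0.587    * g + 0.114    * b   # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b   # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)
```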

  5. A gamma camera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

    A microcomputer-based gamma camera image processing system is introduced. Compared with other systems, the feature of this system is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller. Thus picture processing has been sped up and the function-to-cost ratio of the system raised.

  6. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...

  7. Software Process Validation: Quantitatively Measuring the Correspondence of a Process to a Model

    National Research Council Canada - National Science Library

    Cook, Jonathan E; Wolf, Alexander L

    1997-01-01

    ... When process models and process executions diverge, something significant is happening. The authors have developed techniques for uncovering and measuring the discrepancies between models and executions, which they call process validation...

  8. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    No. 53 (2003), pp. 44-45. ISSN 0926-4981. Institutional research plan: CEZ:AV0Z1075907. Keywords: art conservation * digital image processing * change detection. Subject RIV: JD - Computer Applications, Robotics

  9. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  10. Advanced Secure Optical Image Processing for Communications

    Science.gov (United States)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as high-resolution 2D and 3D images. Thus, more complex networks and longer processing times become necessary, and high image quality and transmission speeds are required for an increasing number of applications. To satisfy these two requirements, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research that is converging towards optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the purpose of combining the advantages of both techniques. Additionally, purely numerical or optical solutions are also considered, since they emphasize the advantages of one of the two approaches separately.

  11. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  12. Fragmentation measurement using image processing

    Directory of Open Access Journals (Sweden)

    Farhang Sereshki

    2016-12-01

    Full Text Available In this research, the existing problems in fragmentation measurement are first reviewed with a view to fast and reliable evaluation. The available methods for evaluating blast results are then described, together with the errors produced in recognizing rock fragments in computer-aided methods and the importance of determining fragment sizes in image analysis. After reviewing previous work, an algorithm is proposed for the automated determination of rock particle boundaries in the Matlab software. This method can determine particle boundaries automatically in minimal time. The results of the proposed method are compared with those of the Split Desktop and GoldSize software in both automated and manual modes. Comparing the curves extracted by the different methods reveals that the proposed approach is accurate for measuring the size distribution of laboratory samples, while manual determination of boundaries in the conventional software is very time-consuming, and the results of automated netting of fragments differ greatly from the real values due to errors in separating the objects.

  13. On Processing Hexagonally Sampled Images

    Science.gov (United States)

    2011-07-01

    DISTRIBUTION A. Approved for public release; distribution unlimited (96ABW-2011-0325). Neuromorphic Infrared Sensor (NIFS). (The remainder of this record is residue of a focal-plane specification table listing chip size, focal plane size and resolution, pixel type and pitch, post-pixel circuitry, interface, and process; no abstract text is recoverable.)

  14. Processing of space images and geologic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Yudin, V S

    1981-01-01

    Using data for standard sections, a correlation was established between natural formations described in geologic/geophysical terms and the form they take in the imagery. With computer processing, important data can be derived from the image. Use of the above correlations has made it possible to produce a number of preliminary classifications of tectonic structures and to determine certain ongoing processes in a given section. The derived data may be used in the search for useful minerals.

  15. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance of shadows and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines shadow-removal algorithms based on integration and on the illumination surface and texture, introduces their working principles and implementation methods, and shows experimentally that they can process shadows effectively.

  16. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    The research is done to provide an appropriate detection technique for skin tumors. The work is carried out using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they represent a condition in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms than analog processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done on the cellular scale for skin images. This research establishes checks for the early detection of skin tumors using microscopic images, after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient and appropriate for tumor detection; the algorithm provides promising results in less time and with good accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user-friendly. (author)
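
    The individual checks are not named in the record; as one hedged example of the kind of toolbox operation involved, a global Otsu threshold separates candidate lesion regions from background in a grayscale microscopic image (shown here with scikit-image rather than the MATLAB toolbox):

```python
import numpy as np
from skimage.filters import threshold_otsu

def candidate_tumor_mask(gray: np.ndarray) -> np.ndarray:
    """Separate candidate lesion pixels from background with a global Otsu
    threshold. A generic stand-in for the unnamed checks in the abstract."""
    t = threshold_otsu(gray)
    return gray > t  # invert the comparison depending on stain polarity
```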

  17. Development, validation and routine control of a radiation process

    International Nuclear Information System (INIS)

    Kishor Mehta

    2010-01-01

    Today, radiation is used in industrial processing for a variety of applications: from low doses for blood irradiation to very high doses for materials modification, and even higher doses for gemstone colour enhancement. At present, radiation is mainly provided by either radionuclide or machine sources; cobalt-60 is the predominant radionuclide in use. Currently, there are several hundred irradiation facilities worldwide. As in other industries, quality management systems can assist radiation processing facilities in enhancing customer satisfaction and in maintaining and improving product quality. To help fulfill quality management requirements, several national and international organizations have developed standards related to radiation processing. They all contain requirements and guidelines for the development, validation and routine control of the radiation process. For radiation processing, these three phases involve the following activities. The development phase includes selecting the type of radiation source, the irradiation facility and the dose required for the process. The validation phase includes conducting activities that give assurance that the process will be successful. Routine control then involves activities that provide evidence that the process has been successfully realized. These standards require documentary evidence that process validation and process control have been followed; dosimetry information gathered during these processes provides this evidence. (authors)

  18. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors; this algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed, which assumes that objects to be imaged are composed of a finite set of points; the validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms.

  19. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronic imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the faithful perception of the CP direction of one minority pixel value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, which takes the role of Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. The criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with real-signature test targets, and conventional methods for the more linear part (displaying). The application to...

  20. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  1. Image processing for HTS SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-01-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high-spatial-resolution measurement of samples in air, even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could be successfully removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by detecting a sharp drift and interpolating using the data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques.
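
    A simplified sketch of the interference-removal step, assuming the pickup appears as a few sharp peaks in each line's spectrum; the number of peaks to notch is illustrative, and the published method identifies the peaks from the power spectrum rather than simply taking the strongest ones:

```python
import numpy as np

def remove_line_noise(scan: np.ndarray, n_peaks: int = 3) -> np.ndarray:
    """Suppress periodic pickup (e.g. power-line interference) in one scan
    line by zeroing its strongest non-DC spectral peaks."""
    F = np.fft.rfft(scan)
    mag = np.abs(F)
    mag[0] = 0.0                            # never select the DC term
    for idx in np.argsort(mag)[-n_peaks:]:  # strongest interference peaks
        F[idx] = 0.0
    return np.fft.irfft(F, n=scan.size)
```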

  2. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  3. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays uses non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images; its features are extracted by the new shearlet transform.

  4. SPECIFICITY OF MANUFACTURING PROCESS VALIDATION FOR DIAGNOSTIC SEROLOGICAL DEVICES

    Directory of Open Access Journals (Sweden)

    O. Yu. Galkin

    2018-02-01

    Full Text Available The aim of this research was to analyze recent scientific literature, as well as national and international legislation, on manufacturing process validation in biopharmaceutical production, in particular for devices for serological diagnostics. Technology validation in the field of medical devices for serological diagnostics is most influenced by the Technical Regulation for Medical Devices for in vitro Diagnostics, the State Standards of Ukraine SSU EN ISO 13485:2015 "Medical devices. Quality management system. Requirements for regulation" and SSU EN ISO 14971:2015 "Medical devices. Instructions for risk management", Instruction ST-N of the Ministry of Healthcare of Ukraine 42-4.0:2014 "Medications. Suitable industrial practice", the State Pharmacopoeia of Ukraine and the ICH Q9 guideline on risk management. Current recommendations for the validation of drug manufacturing processes, including biotechnological manufacturing, cannot be directly applied to medical devices for in vitro diagnostics. It is shown that the specifics of application and raw materials require individual validation parameters and process validation for serological diagnostic devices. Critical parameters to consider in validation plans are provided for every typical stage of the production of in vitro diagnostic devices, using immunoassay kits as an example: obtaining protein antigens, including recombinant ones; preparing mono- and polyclonal antibodies, immunoenzyme conjugates and immunosorbents; chemical reagents; etc. The bottlenecks of technologies for in vitro diagnostic devices are analyzed from the bioethical and biosafety points of view.

  5. Validation of the Vanderbilt Holistic Face Processing Test.

    Science.gov (United States)

    Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  6. Validation of the Vanderbilt Holistic Face Processing Test.

    Directory of Open Access Journals (Sweden)

    Chao-Chih Wang

    2016-11-01

    Full Text Available The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  7. Toward valid and reliable brain imaging results in eating disorders.

    Science.gov (United States)

    Frank, Guido K W; Favaro, Angela; Marsh, Rachel; Ehrlich, Stefan; Lawson, Elizabeth A

    2018-03-01

    Human brain imaging can help improve our understanding of mechanisms underlying brain function and how they drive behavior in health and disease. Such knowledge may eventually help us to devise better treatments for psychiatric disorders. However, the brain imaging literature in psychiatry, and especially in eating disorders, has been inconsistent, and studies are often difficult to replicate. The extent or severity of extremes of eating and state of illness, which are often associated with differences in, for instance, hormonal status, comorbidity, and medication use, commonly differ between studies and likely add to variation across study results. Those effects are in addition to the well-described problems arising from differences in task designs, data quality control procedures, image data preprocessing and analysis, or statistical thresholds applied across studies. Which of those factors are most relevant to improving reproducibility is still a question for debate and further research. Here we propose guidelines for brain imaging research in eating disorders to acquire valid results that are more reliable and clinically useful. © 2018 Wiley Periodicals, Inc.

  8. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    One of the architectures for a high-speed image processing system corresponding to a new algorithm for shape understanding is proposed, and a hardware system based on the architecture was developed. The main design considerations were that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  9. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  10. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    The requirements of image processing applications can best be met by using a distributed environment. This report describes a system that draws on existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing, to make it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are truly general and modular, so that it can also be used for other applications that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The Server distributes the task among the Workers, who carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their use for resource-hungry jobs. The image processing application is developed under this environment.

  11. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, based on the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, microarray databases of Breast cancer, Myeloid Leukemia and Lymphomas from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
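
    The gridding and raw-intensity extraction step described above lends itself to a compact illustration. The Python/NumPy sketch below is only illustrative; the grid dimensions, the median-based foreground split, and all names are assumptions, not the authors' pipeline. It partitions a microarray image into a regular grid and reports one raw intensity per spot:

        import numpy as np

        def extract_spot_intensities(image, rows, cols):
            """Divide a microarray image into a rows x cols grid and return
            one raw intensity per cell (mean of pixels above the cell median)."""
            h, w = image.shape
            ch, cw = h // rows, w // cols
            out = np.zeros((rows, cols))
            for r in range(rows):
                for c in range(cols):
                    cell = image[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
                    fg = cell[cell > np.median(cell)]   # crude foreground split
                    out[r, c] = fg.mean() if fg.size else 0.0
            return out

        # synthetic demo: flat background with one bright "spot"
        rng = np.random.default_rng(0)
        img = rng.normal(100.0, 5.0, (400, 400))
        img[40:60, 40:60] += 150.0
        print(extract_spot_intensities(img, 10, 10).round(1))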

  12. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  13. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper shows the work on traffic analysis and control to date. It presents an approach to regulating traffic using image processing and MATLAB. Computed images are compared with reference images of the street in order to determine the traffic level percentage and set the timing of the traffic signal accordingly, reducing stoppage at traffic lights. The concept proposes to solve real-life scenarios in the streets, enriching traffic lights by adding image receivers such as HD cameras and image processors. The input is then imported into MATLAB to be used as a method for calculating the traffic on roads. The results are computed in order to adjust the traffic light timings on a particular street, also with respect to other similar proposals, but with the added value of solving a real, large instance.
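
    The core comparison the paper describes, estimating a traffic-level percentage from the difference between a reference (empty-street) image and the current frame and mapping it onto a signal timing, can be sketched briefly. The original work used MATLAB; the following is a rough Python/NumPy equivalent in which the threshold and timing parameters are assumed values, not those of the paper:

        import numpy as np

        def traffic_level(reference, current, threshold=30):
            """Percentage of pixels differing from the empty-street reference
            by more than `threshold` grey levels (a proxy for road occupancy)."""
            diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
            return 100.0 * np.count_nonzero(diff > threshold) / diff.size

        def green_time(level_percent, t_min=10.0, t_max=60.0):
            """Map the occupancy percentage linearly onto a green-light duration."""
            return t_min + (t_max - t_min) * min(level_percent, 100.0) / 100.0

        # synthetic demo: 20% of the frame occupied by "vehicles"
        ref = np.full((240, 320), 120, dtype=np.uint8)
        cur = ref.copy()
        cur[:48, :] = 200
        lvl = traffic_level(ref, cur)
        print(f"traffic level {lvl:.0f}% -> green for {green_time(lvl):.0f} s")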

  14. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  15. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  16. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  17. Functional validation and comparison framework for EIT lung imaging.

    Science.gov (United States)

    Grychtol, Bartłomiej; Elke, Gunnar; Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy

    2014-01-01

    Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.

  18. Image processing of angiograms: A pilot study

    Science.gov (United States)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  19. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system, the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, have to be considered to ensure good image quality for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of it. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the entire PCB fabrication cost, it is uneconomical to simply discard defective PCBs. In this paper, a method to identify the defects in natural PCB images, and the associated practical issues, are addressed using software tools; some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short, nick, etc. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
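
    A common way to implement the kind of defect detection described above is to binarize a known-good template and the test image and take their pixel-wise XOR, so that both missing copper (cuts, nicks, pin holes) and extra copper (shorts, spurs) light up. The Python/NumPy sketch below is a minimal illustration of that idea, not the authors' actual algorithm; the threshold and the synthetic layout are assumptions:

        import numpy as np

        def pcb_defect_mask(template, test, thresh=128):
            """Binarize the known-good template and the test image, then XOR:
            set pixels mark copper that is missing (cut, nick, pin hole) or
            extra (short, spur) relative to the template."""
            return np.logical_xor(template > thresh, test > thresh)

        # synthetic demo: one broken trace and one bridge between traces
        template = np.zeros((100, 100), dtype=np.uint8)
        template[20:25, :] = 255          # horizontal trace
        template[60:65, :] = 255          # second trace
        test = template.copy()
        test[20:25, 40:50] = 0            # open circuit (pattern cut)
        test[25:60, 70:75] = 255          # short between the traces
        print("defect pixels:", int(pcb_defect_mask(template, test).sum()))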

  20. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridge

  1. Materials of the Regional Training Course on Validation and Process Control for Electron Beam Radiation Processing

    International Nuclear Information System (INIS)

    Kaluska, I.; Gluszewski, W.

    2007-01-01

    Irradiation with electron beams is used in the polymer industry, food, pharmaceutical and medical device industries for sterilization of surfaces. About 20 lectures presented during the Course were devoted to all aspects of control and validation of low energy electron beam processes. They should help the product manufacturers better understand the application of the ANSI/AAMI/ISO 11137 norm, which defines the requirements and standard practices for validation of the irradiation process and the process controls required during routine processing

  2. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. The images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms" - holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods to process "digital holograms" for Internet transmission, together with results.
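
    The phase-shift interferometry technique mentioned for capturing digital holograms has a standard four-step form: with interferograms I_k = A + B cos(phi + d_k) recorded at shifts d = 0, pi/2, pi, 3pi/2, the wrapped phase follows from phi = atan2(I4 - I2, I1 - I3). The Python/NumPy sketch below demonstrates this textbook relation on synthetic data; the names and the test signal are illustrative, not taken from the paper:

        import numpy as np

        def phase_from_four_steps(i1, i2, i3, i4):
            """Wrapped phase from four interferograms taken at phase shifts
            0, pi/2, pi, 3*pi/2, using phi = atan2(I4 - I2, I1 - I3)."""
            return np.arctan2(i4 - i2, i1 - i3)

        # synthetic demo: recover a known linear phase ramp
        x = np.linspace(0.0, 4.0 * np.pi, 256)
        phi = np.tile(x, (256, 1))                       # true phase
        a, b = 1.0, 0.5                                  # bias and modulation
        frames = [a + b * np.cos(phi + d) for d in (0.0, np.pi/2, np.pi, 1.5*np.pi)]
        wrapped = phase_from_four_steps(*frames)
        err = np.abs(np.exp(1j * wrapped) - np.exp(1j * phi)).max()
        print(f"max recovery error: {err:.2e}")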

  3. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    NARCIS (Netherlands)

    Wognum, S.; Heethuis, S. E.; Rosario, T.; Hoogeman, M. S.; Bel, A.

    2014-01-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations.

  4. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  5. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards, simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture which is tailored specifically for astronomy and provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed to allow other astronomers with similar needs to benefit from the present experience. 9 references

  6. Validation of image cytometry for sperm concentration measurement

    DEFF Research Database (Denmark)

    Egeberg Palme, Dorte L.; Johannsen, Trine Holm; Petersen, Jørgen Holm

    2017-01-01

    Sperm concentration is an essential parameter in the diagnostic evaluation of men from infertile couples. It is usually determined by manual counting using a hemocytometer, and is therefore both laborious and subjective. We have earlier shown that a newly developed image cytometry (IC) method may be used to determine sperm concentration. Here we present a validation of the IC method by analysis of 4010 semen samples. There was high agreement between IC and manual counting at sperm concentrations above 3 mill/ml, and in samples with concentrations above 12 mill/ml the two methods can be used … a lower coefficient of variation than the manual method (5% vs 10%), indicating a better precision of the IC method. In conclusion, measurement of sperm concentration by IC can be used at concentrations above 3 mill/ml and seems more accurate and precise than manual counting, making it an attractive …

  7. Image processing of early gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Umeda, Tokuo; Inamura, Kiyonari

    1992-01-01

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins and, consequently, suppressed the rest. High-pass filtering with unsharp masking was superior for visualizing the texture pattern of the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions. (author)
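
    The unsharp-masking step used in this study has a simple generic form: blur the image, subtract the blur from the original to obtain a high-pass residual, and add a scaled residual back. A minimal Python/SciPy sketch follows; the sigma and amount parameters are illustrative, not the values used in the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, sigma=3.0, amount=1.0):
            """Classic unsharp masking: subtract a Gaussian-blurred copy to get
            a high-pass residual, then add the scaled residual back."""
            img = image.astype(float)
            return img + amount * (img - gaussian_filter(img, sigma))

        # demo on a soft edge: the filter steepens the transition
        edge = gaussian_filter(np.tile(np.r_[np.zeros(32), np.ones(32)], (64, 1)), 2.0)
        sharpened = unsharp_mask(edge, sigma=3.0, amount=1.5)
        print(sharpened[32, 28:36].round(3))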

  8. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques combined with a fuzzy logic and neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.

  9. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  10. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

    Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by discussing the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explanatory possibilities of Tversky's notion of diagrammatic space are explored, as an element necessary to understand the genesis of graphic behaviour and to define a new construct, graphic intelligence.

  11. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image, which becomes gradually clearer, on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) per mm. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  12. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in interesting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a 'Mexican hat' or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as 'unsharp masking'. The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
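
    For reference, the 'Mexican hat' output pattern that the paper argues is not the only possible explanation is conventionally modeled as a difference-of-Gaussians: a narrow positive center minus a wider negative surround. A minimal Python/SciPy sketch of that conventional baseline follows (the sigmas are illustrative assumptions); it is not the authors' alternative process:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
            """Difference-of-Gaussians ('Mexican hat') response: a narrow
            positive center minus a wider negative surround."""
            img = image.astype(float)
            return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

        # demo: the response to a step edge shows the characteristic
        # undershoot before and overshoot after the transition
        step = np.tile(np.r_[np.zeros(32), np.ones(32)], (64, 1))
        print(dog_response(step)[32, 28:36].round(3))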

  13. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  14. Post-processing of digital images.

    Science.gov (United States)

    Perrone, Luca; Politi, Marco; Foschi, Raffaella; Masini, Valentina; Reale, Francesca; Costantini, Alessandro Maria; Marano, Pasquale

    2003-01-01

    Post-processing of bi- and three-dimensional images plays a major role for clinicians and surgeons in both diagnosis and therapy. The new spiral (single- and multislice) CT and MRI machines have allowed better image quality. With the associated development of hardware and software, post-processing has become indispensable in many radiologic applications in order to address precise clinical questions. In particular, in CT the acquisition technique is fundamental and should be targeted and optimized to obtain good image reconstruction. Multiplanar reconstructions ensure simple, immediate display of sections along different planes. Three-dimensional reconstructions include numerous procedures: multiplanar techniques such as maximum intensity projections (MIP); surface rendering techniques such as Shaded Surface Display (SSD); volume techniques such as the Volume Rendering Technique; and techniques of virtual endoscopy. In surgery, computer-aided techniques such as the neuronavigator, which uses information provided by neuroimaging to help the neurosurgeon simulate and perform the operation, are extremely interesting.

  15. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor

    2016-01-01

    Full Text Available Testing of the method of speckle pattern processing based on digital image correlation is carried out in the current work. Three of the most widely used formulas for the correlation coefficient are tested. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacement are used. The optimal size of the speckle pattern template used to determine the correlation, and the corresponding speckle pattern displacement, is also considered in the work.
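
    Digital image correlation typically scores candidate displacements with a zero-normalised cross-correlation (ZNCC) coefficient, one of the standard correlation formulas of the kind tested here. The Python/NumPy sketch below illustrates the idea; the template size and search radius are assumptions, and only an exhaustive integer-pixel search is shown, not the paper's exact procedure:

        import numpy as np

        def zncc(a, b):
            """Zero-normalised cross-correlation of two equal-size subsets."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom else 0.0

        def best_displacement(ref, deformed, y, x, size=21, search=5):
            """Integer-pixel displacement of the subset centred at (y, x),
            found by exhaustive ZNCC search over a +/- `search` pixel window."""
            h = size // 2
            tpl = ref[y - h:y + h + 1, x - h:x + h + 1]
            best, best_c = (0, 0), -2.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    win = deformed[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
                    c = zncc(tpl, win)
                    if c > best_c:
                        best_c, best = c, (dy, dx)
            return best, best_c

        # synthetic demo: shift a random speckle image by (2, 3) pixels
        rng = np.random.default_rng(1)
        ref = rng.random((128, 128))
        deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))
        print(best_displacement(ref, deformed, 64, 64))   # -> ((2, 3), ~1.0)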

  16. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner structure and material distributions of various samples. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam-formatting assembly, and a plane position-sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two-dimensional image. This image contains information about the composition and structure of the sample interior, as a result of the interaction of neutrons penetrating matter. Due to rapid developments in detector and computer technology as well as in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the-art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten, and furthermore, to identify and develop two- and three-dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and good linearity. The detector output should be inherently digitized. The key components of the detector system, selected on the basis of these requirements, consist of a neutron-sensitive scintillator screen, a CCD camera and a mirror to reflect the light emitted by the scintillator to the CCD camera. This detector design makes it possible to place the camera out of the direct neutron beam. The whole assembly is placed in a light-shielded aluminum box. The camera is controlled by a

  17. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner structure and material distributions of various materials. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam-formatting assembly, and a plane position-sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two-dimensional image. This image contains information about the composition and structure of the sample interior, as a result of the interaction of neutrons penetrating matter. Due to rapid developments in detector and computer technology as well as in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the-art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten, and furthermore, to identify and develop two- and three-dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and good linearity. The detector output should be inherently digitized. The key components of the detector system, selected on the basis of these requirements, consist of a neutron-sensitive scintillator screen, a CCD camera and a mirror to reflect the light emitted by the scintillator to the CCD camera. This detector design makes it possible to place the camera out of the direct neutron beam. The whole assembly is placed in a light-shielded aluminum box. The camera is controlled by a

  18. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  19. Process for making lyophilized radiographic imaging kit

    International Nuclear Information System (INIS)

    Grogg, T.W.; Bates, P.E.; Bugaj, J.E.

    1985-01-01

    A process for making a lyophilized composition useful for skeletal imaging whereby an aqueous solution containing an ascorbate, gentisate, or reductate stabilizer is contacted with tin metal or an alloy containing tin and, thereafter, lyophilized. Preferably, such compositions also comprise a tissue-specific carrier and a stannous compound. It is particularly preferred to incorporate stannous oxide as a coating on the tin metal

  20. Validity of medium-energy collimator for sentinel lymphoscintigraphy imaging

    International Nuclear Information System (INIS)

    Tsushima, Hiroyuki; Yamanaga, Takashi; Shimonishi, Yoshihiro; Kosakai, Kazuhisa; Takayama, Teruhiko; Kizu, Hiroto; Noguchi, Atsushi; Onoguchi, Masahisa

    2007-01-01

    For lymphoscintigraphy to detect the sentinel lymph node (SLN) in breast cancer, lead shielding of the injection site is often used to avoid artifacts, but the shield tends to cover neighboring SLNs. To overcome this defect, the authors developed the ME (medium-energy) method, in which an ME collimator and an energy window shifted to a higher region are employed. This paper describes the development and validity evaluation of the ME method. Examinations were performed with 3 acrylic phantoms of the injection site (IS), the LN and the combination IS+LN (CB): the IS was a cylinder containing 40 MBq of 99mTc-pertechnetate, and the LN a plate with 30 sealed holes containing 0.78-400 kBq. The CB phantom consisted of LN-simulating holes (each 40 kBq) placed linearly around the center of the IS in H and S directions. Imaging was conducted with 2 kinds of 2-detector gamma camera, FORTE (ADAC) and DSX rectangular (Sopha Medical Corp.). The CB phantom was found to be optimally visualized by the ME collimator at 146, rather than 141, keV. Clinically, 99mTc-Sn-colloid 40 MBq was given near the tumor of a patient, and imaging was done with or without the lead shield using FORTE equipped with a low-energy high-resolution or ME collimator for comparison. The ME method described above, set at 146 keV, was found to give images with excellent contrast and without false positives when compared with the conventional lead shield method. (R.T.)

  1. Functional validation and comparison framework for EIT lung imaging.

    Directory of Open Access Journals (Sweden)

    Bartłomiej Grychtol

    Full Text Available INTRODUCTION: Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. METHODS: We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. RESULTS AND CONCLUSIONS: Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.

  2. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve one major problem with crime scene photos: photos taken with standard equipment and without the benefit of enhancement software would often be inconclusive, allowing guilty parties to go free due to lack of evidence.

  3. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.

  4. Processing and validation of intermediate energy evaluated data files

    International Nuclear Information System (INIS)

    2000-01-01

    Current accelerator-driven and other intermediate energy technologies require accurate nuclear data to model the performance of the target/blanket assembly, neutron production, activation, heating and damage. In a previous WPEC subgroup, SG13 on intermediate energy nuclear data, various aspects of intermediate energy data, such as nuclear data needs, experiments, model calculations and file formatting issues were investigated and categorized to come to a joint evaluation effort. The successor of SG13, SG14 on the processing and validation of intermediate energy evaluated data files, goes one step further. The nuclear data files that have been created with the aforementioned information need to be processed and validated in order to be applicable in realistic intermediate energy simulations. We emphasize that the work of SG14 excludes the 0-20 MeV data part of the neutron evaluations, which is supposed to be covered elsewhere. This final report contains the following sections: section 2: a survey of the data files above 20 MeV that have been considered for validation in SG14; section 3: a summary of the review of the 150 MeV intermediate energy data files for ENDF/B-VI and, more briefly, the other libraries; section 4: validation of the data library against an integral experiment with MCNPX; section 5: conclusions. (author)

  5. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14-inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format) viewed on a 625-line monitor were processed in 3 different ways: 1. standard display; 2. digital edge enhancement for the standard display; 3. inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities of the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases with mediastinal disease, and 17 with pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases respectively. Sensitivity of conventional radiography, when averaged over all findings, was better than that of the digital techniques (P<0.001). Differences in diagnostic accuracy measured by sensitivity and specificity between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that, when using the 512x512 image format, the routine use of digital edge enhancement and tone reversal for digital chest radiographs is not warranted. (author). 12 refs.; 4 figs.; 2 tabs

  6. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps have been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid-state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  7. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter, which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section on the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on computation time and on the usefulness of median filtering in radiographic imaging are given.
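
    The operation being parallelized here, a two-dimensional median filter in which each output pixel is the median (the middle element of a complete sort) of the k x k window around it, is easy to state in a naive serial form. The Python/NumPy sketch below is a reference implementation for clarity, not the paper's pipelined parallel design; the window size and reflection padding are illustrative choices:

        import numpy as np

        def median_filter2d(image, k=3):
            """Naive 2-D median filter: each output pixel is the median of the
            k x k window around it; edges use reflection padding."""
            pad = k // 2
            padded = np.pad(image, pad, mode="reflect")
            out = np.empty_like(image, dtype=float)
            h, w = image.shape
            for y in range(h):
                for x in range(w):
                    out[y, x] = np.median(padded[y:y + k, x:x + k])
            return out

        # demo: impulse-noise pixels are removed while the edge is preserved
        img = np.zeros((32, 32)); img[:, 16:] = 1.0
        img[5, 5], img[20, 20] = 1.0, 0.0   # salt and pepper pixels
        out = median_filter2d(img)
        print(out[5, 5], out[20, 20])       # -> 0.0 1.0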

  8. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors. Indirect flat-panel detectors and a system with four high-resolution CCD cameras are also studied. In a second step, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the almost consistently good quality of the pictures and the possibilities of image processing.

  9. Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing

    Science.gov (United States)

    Artmann, Uwe; Wueller, Dietmar

    2009-01-01

    We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good while the image quality is low, due to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality, we propose to supplement the standard test methods with an additional measurement of the texture-preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure of the closeness of a distribution to the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image showing the chart becomes the more leptokurtic (increased kurtosis), the stronger the impact of the noise reduction on the image.
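
    The proposed metric is concrete enough to sketch: reproduce a white-Gaussian-noise target, differentiate the image, and compute the kurtosis of the resulting value distribution. The Python/SciPy fragment below follows that recipe under stated assumptions (the horizontal-difference axis and the Fisher convention are our choices, not the paper's). Note that a purely linear blur keeps the statistics Gaussian, so the demo uses a non-linear median filter to stand in for adaptive noise reduction:

        import numpy as np
        from scipy.stats import kurtosis
        from scipy.ndimage import median_filter

        def texture_metric(image):
            """Kurtosis of the horizontal derivative of the image; 0 for a
            Gaussian distribution (Fisher definition), larger values suggest
            a more leptokurtic, texture-poor reproduction."""
            d = np.diff(image.astype(float), axis=1).ravel()
            return float(kurtosis(d))

        # demo: a non-linear filter (here a median, standing in for adaptive
        # noise reduction) should raise the metric on a white-noise chart
        rng = np.random.default_rng(2)
        chart = rng.normal(0.0, 1.0, (256, 256))
        print(f"raw: {texture_metric(chart):+.2f}, "
              f"filtered: {texture_metric(median_filter(chart, size=3)):+.2f}")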

  10. Development and preliminary validation of flux map processing code MAPLE

    International Nuclear Information System (INIS)

    Li Wenhuai; Zhang Xiangju; Dang Zhen; Chen Ming'an; Lu Haoliang; Li Jinggang; Wu Yuanbao

    2013-01-01

    The self-reliant flux map processing code MAPLE was developed by China General Nuclear Power Corporation (CGN). The weight coefficient method (WCM), polynomial expansion method (PEM) and thin plate spline (TPS) method were applied to fit the deviation between measured and predicted detector signal results in the two-dimensional radial plane, and to interpolate or extrapolate the deviation at non-instrumented locations. Comparison of results in the test cases shows that the TPS method captures the information of curved fitting lines better than the other methods. The measured flux map data of the Lingao Nuclear Power Plant were processed using MAPLE as validation test cases, combined with the SMART code. Validation results show that the calculation results of MAPLE are reasonable and satisfactory. (authors)

  11. Optimized energy of spectral CT for infarct imaging: Experimental validation with human validation.

    Science.gov (United States)

    Sandfort, Veit; Palanisamy, Srikanth; Symons, Rolf; Pourmorteza, Amir; Ahlman, Mark A; Rice, Kelly; Thomas, Tom; Davies-Venn, Cynthia; Krauss, Bernhard; Kwan, Alan; Pandey, Ankur; Zimmerman, Stefan L; Bluemke, David A

    Late contrast enhancement visualizes myocardial infarction, but the contrast-to-noise ratio (CNR) is low using conventional CT. The aim of this study was to determine whether spectral CT can improve imaging of myocardial infarction. A canine model of myocardial infarction was produced in 8 animals (90-min occlusion, reperfusion). Later, imaging was performed after contrast injection using CT at 90 kVp/150 kVpSn. The following reconstructions were evaluated: single-energy 90 kVp, mixed, iodine map, and multiple conventional and noise-optimized monoenergetic reconstructions. Regions of interest were measured in infarct and remote regions to calculate the contrast-to-noise ratio (CNR) and the Bhattacharyya distance (a metric of the differentiation between regions). Blinded assessment of image quality was performed. The same reconstruction methods were applied to CT scans of four patients with known infarcts. For the animal studies, the highest CNR for infarct vs. myocardium was achieved in the lowest-keV (40 keV) monoenergetic (VMo) images (CNR 4.42, IQR 3.64-5.53), which was superior to the 90 kVp, mixed and iodine map reconstructions (p = 0.008, p = 0.002, p …). Low energy in conjunction with noise-optimized monoenergetic post-processing improves the CNR of myocardial infarct delineation by approximately 20-25%. Published by Elsevier Inc.

  12. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  13. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    Science.gov (United States)

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
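    A small dense sketch of GCV-based selection of the Tikhonov regularization parameter via the SVD (the paper's contribution is precisely to avoid this cost on large problems by approximating the GCV trace with Lanczos bidiagonalization and Gauss quadrature):

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Pick the Tikhonov parameter minimizing the GCV function.

    GCV(lam) = ||b - A x_lam||^2 / trace(I - A_lam)^2, evaluated here
    exactly through the SVD; feasible only for small problems.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    outside = b @ b - beta @ beta          # residual outside range(A)
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)         # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + outside
        scores.append(resid / (len(b) - f.sum()) ** 2)
    return lambdas[int(np.argmin(scores))]
```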

  14. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    The stringent image quality requirements are also described, in particular the geo-location accuracy for both the absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. Then, the prototyped image processing techniques (both radiometric and geometric) will be addressed. The radiometric corrections will be introduced first. They consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. Then, a special focus will be placed on the geometric corrections. In particular, the innovative method of automatic enhancement of the geometric physical model will be detailed. This method takes advantage of a Global Reference Image database, perfectly geo-referenced, to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, allowing the viewing model to be dynamically calibrated. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, will also be addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype will be presented. In particular, the geo-location performance of the Level-1C products, which amply fulfils the image quality requirements, will be provided.
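    A generic sketch of the image-matching step that yields ground control points (scikit-image's phase correlation is our stand-in; the abstract does not disclose the operational Sentinel-2 matcher):

```python
from skimage.registration import phase_cross_correlation

def control_point_residual(ref_patch, band_patch):
    """Sub-pixel (row, col) offset between a patch of the perfectly
    geo-referenced reference image and the corresponding patch of the
    band to refine; such offsets drive the viewing-model refinement."""
    shift, error, _ = phase_cross_correlation(ref_patch, band_patch,
                                              upsample_factor=10)
    return shift
```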

  15. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    The stringent image quality requirements are also described, in particular the geo-location accuracy for both the absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. Then, the prototyped image processing techniques (both radiometric and geometric) will be addressed. The radiometric corrections will be introduced first. They consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. Then, a special focus will be placed on the geometric corrections. In particular, the innovative method of automatic enhancement of the geometric physical model will be detailed. This method takes advantage of a Global Reference Image database, perfectly geo-referenced, to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, allowing the viewing model to be dynamically calibrated. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, will also be addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype will be presented. In particular, the geo-location performance of the Level-1C products, which amply fulfils the image quality requirements, will be provided.

  16. ARMA processing for NDE ultrasonic imaging

    International Nuclear Information System (INIS)

    Pao, Y.H.; El-Sherbini, A.

    1984-01-01

    This chapter describes a new method of acoustic image reconstruction for an active multiple-sensor system operating in the reflection mode in the Fresnel region. The method is based on the use of an ARMA model for the reconstruction process. Algorithms for estimating the model parameters are presented and computer simulation results are shown. The AR coefficients are obtained independently of the MA coefficients. It is shown that when the ARMA reconstruction method is augmented with the multifrequency approach, it can provide a three-dimensional reconstructed image with high lateral and range resolutions, a high signal-to-noise ratio and reduced sidelobe levels. The proposed ARMA reconstruction method results in high-quality images and better performance than that obtainable with conventional methods. The advantages of the method are very high lateral resolution with a limited number of sensors, reduced sidelobe levels, and a high signal-to-noise ratio.

  17. MIDAS - ESO's new image processing system

    Science.gov (United States)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color-lookup table, to enhance various image features, and interactive extraction of subimages.

  18. Illuminating magma shearing processes via synchrotron imaging

    Science.gov (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.

    2017-04-01

    Our understanding of geomaterial behaviour and processes has long fallen short because of our inability to see inside a material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes, to the use of seismic tomography to image magmatic bodies in the crust, and most recently, synchrotron-based X-ray tomography to image the inside of a material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  19. GPM GROUND VALIDATION CONICAL SCANNING MILLIMETER-WAVE IMAGING RADIOMETER (COSMIR) MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Conical Scanning Millimeter-wave Imaging Radiometer (COSMIR) MC3E dataset used the Conical Scanning Millimeter-wave Imaging Radiometer...

  20. GPM GROUND VALIDATION CONICAL SCANNING MILLIMETER-WAVE IMAGING RADIOMETER (COSMIR) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Conical Scanning Millimeter-wave Imaging Radiometer (COSMIR) GCPEx dataset used the Conical Scanning Millimeter-wave Imaging Radiometer...

  1. Attitude Patterns and the Production of Original Verbal Images: A Study in Construct Validity

    Science.gov (United States)

    Khatena, Joe; Torrance, E. Paul

    1971-01-01

    The Runner Studies of Attitude Patterns, a personality inventory, was used as the criterion to determine construct validity of Sounds and Images and Onomatopoeia and Images, two tests of verbal originality. (KW)

  2. Valid MR imaging predictors of prior knee arthroscopy

    International Nuclear Information System (INIS)

    Discepola, Federico; Le, Huy B.Q.; Park, John S.; Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L.

    2012-01-01

    To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)
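    The reported figures follow from the standard 2x2 contingency-table definitions; a trivial helper (illustrative only, not the study's code):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```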

  3. Valid MR imaging predictors of prior knee arthroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Discepola, Federico; Le, Huy B.Q. [McGill University Health Center, Jewish General Hospital, Division of Musculoskeletal Radiology, Montreal, Quebec (Canada); Park, John S. [Annapolis Radiology Associates, Division of Musculoskeletal Radiology, Annapolis, MD (United States); Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L. [University of California San Diego (UCSD), Division of Musculoskeletal Radiology, San Diego, CA (United States)

    2012-01-15

    To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)

  4. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

    A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with mono-focal epilepsy. For the detection of these foci, an electroencephalogram (EEG) recorded with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate the focus on MR volumes. The technique locates the electrodes in CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally, these points are transformed into the coordinate space of the MR volume through registration, for a better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of the electrode coordinates. To that end, this paper presents a comparison between the actual values measured physically (including electrode size and spatial location) and the values obtained in the processing of the CT and MR images
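    The centroid-descriptor step can be sketched with SciPy's labeling tools (the threshold is a user-supplied assumption; the actual detection pipeline in the paper may differ):

```python
import numpy as np
from scipy import ndimage

def electrode_centroids(ct_volume, threshold):
    """Return centroids (in voxel coordinates) of bright electrode-like
    blobs in a CT volume: threshold, label connected components, then
    take each component's intensity-weighted center of mass."""
    mask = ct_volume > threshold
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(ct_volume, labels,
                                           index=range(1, n + 1)))
```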

  6. Spent Nuclear Fuel (SNF) Project Design Verification and Validation Process

    International Nuclear Information System (INIS)

    OLGUIN, L.J.

    2000-01-01

    This document provides a description of design verification and validation activities implemented by the Spent Nuclear Fuel (SNF) Project. During the execution of early design verification, a management assessment (Bergman, 1999) and external assessments on configuration management (Augustenburg, 1999) and testing (Loscoe, 2000) were conducted and identified potential uncertainties in the verification process. This led the SNF Chief Engineer to implement corrective actions to improve process and design products. This included Design Verification Reports (DVRs) for each subproject, validation assessments for testing, and verification of the safety function of systems and components identified in the Safety Equipment List to ensure that the design outputs were compliant with the SNF Technical Requirements. Although some activities are still in progress, the results of the DVR and associated validation assessments indicate that Project requirements for design verification are being effectively implemented. These results have been documented in subproject-specific technical documents (Table 2). Identified punch-list items are being dispositioned by the Project. As these remaining items are closed, the technical reports (Table 2) will be revised and reissued to document the results of this work

  7. Automatic seed picking for brachytherapy postimplant validation with 3D CT images.

    Science.gov (United States)

    Zhang, Guobin; Sun, Qiyuan; Jiang, Shan; Yang, Zhiyong; Ma, Xiaodong; Jiang, Haisong

    2017-11-01

    Postimplant validation is an indispensable part of the brachytherapy technique. It provides the necessary feedback to ensure the quality of the operation. The ability to pick implanted seeds relates directly to the accuracy of the validation. To address this, an automatic approach is proposed for picking implanted brachytherapy seeds in 3D CT images. In order to pick the seed configuration (location and orientation) efficiently, the approach starts with the segmentation of seeds from CT images using a thresholding filter based on the gray-level histogram. Through filtering and denoising, touching seeds and single seeds are classified. The true novelty of this approach is found in the application of Canny edge detection and an improved concave-points matching algorithm to separate touching seeds. Through the computation of image moments, the seed configuration can be determined efficiently. Finally, two different experiments were designed to verify the performance of the proposed approach: (1) a physical phantom with 60 model seeds, and (2) patient data with 16 cases. Through assessment of the validated results by a medical physicist, the proposed method exhibited promising results. The experiment on the phantom demonstrates that the error of seed location and orientation is within ([Formula: see text]) mm and ([Formula: see text])[Formula: see text], respectively. In addition, most seed location and orientation errors are controlled within 0.8 mm and 3.5[Formula: see text], respectively, in all cases. The average processing time of seed picking is 8.7 s per 100 seeds. In this paper, an automatic, efficient and robust approach, performed on CT images, is proposed to determine the implanted seed locations as well as orientations in a 3D workspace. Through the experiments with phantom and patient data, this approach successfully exhibits good performance.
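    The moment computation for a single, already separated seed can be sketched in 2D as follows (the paper works in 3D; the function and names are ours):

```python
import numpy as np

def seed_pose_2d(binary_seed):
    """Seed location (centroid) and in-plane orientation from image
    moments: the orientation is the principal axis of the second
    central moments."""
    ys, xs = np.nonzero(binary_seed)
    cy, cx = ys.mean(), xs.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cy, cx), np.degrees(theta)
```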

  8. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output, which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  9. Best practice strategies for validation of micro moulding process simulation

    DEFF Research Database (Denmark)

    Costa, Franco; Tosello, Guido; Whiteside, Ben

    2009-01-01

    The use of simulation for injection moulding design is a powerful tool which can be used up-front to avoid costly tooling modifications and reduce the number of mould trials. However, the accuracy of the simulation results depends on many component technologies and pieces of information, some of which can be easily controlled or known by the simulation analyst and others which are not easily known. For this reason, experimental validation studies are an important tool for establishing best practice methodologies for use during analysis set-up on all future design projects. During the validation studies, detailed information about the moulding process is gathered and used to establish these methodologies, whereas in routine design projects these methodologies are then relied on to provide efficient but reliable working practices. Data analysis and simulations on preliminary micro-moulding experiments have been carried out.

  10. Constraint processing in our extensible language for cooperative imaging system

    Science.gov (United States)

    Aoki, Minoru; Murao, Yo; Enomoto, Hajime

    1996-02-01

    The extensible WELL (Window-based Elaboration Language) has been developed using the concept of a common platform, where both client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design, introducing constraint processing. Any kind of service in the extensible language, including imaging, is controlled by the constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied with cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints, temporal and modal constraints. In rendering the constraints, the predicate format, as a relation between attribute values, can be a warrant for the entities' validity as data. As an imaging example, a processing procedure for interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system, and how the constraints work for generating moving pictures.

  11. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  12. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  13. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  14. Application of ultrasound processed images in space: assessing diffuse affectations

    Science.gov (United States)

    Pérez-Poch, A.; Bru, C.; Nicolau, C.

    The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous effects on health. However, due to the need for highly trained radiologists to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathies and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed images of the liver. This computer system for analyzing diffuse affectations may be used in situ or via telemedicine to the ground.
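    The record does not give the exact texture descriptor; one conventional choice for this kind of diffuse-disease grading is a grey-level co-occurrence feature, sketched here with scikit-image (the function and its use as a heterogeneity score are our assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def liver_heterogeneity(roi_uint8):
    """A simple GLCM-based heterogeneity score for an 8-bit ultrasound ROI."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, 'contrast').mean()
```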

  15. Use of fuzzy logic in signal processing and validation

    International Nuclear Information System (INIS)

    Heger, A.S.; Alang-Rashid, N.K.; Holbert, K.E.

    1993-01-01

    The advent of fuzzy logic technology has afforded another opportunity to reexamine the signal processing and validation (SPV) process. The features offered by fuzzy logic can lend themselves to a more reliable and perhaps fault-tolerant approach to SPV. This is particularly attractive for complex system operations, where optimal control for safe operation depends on reliable input data. The reason for using fuzzy logic as the tool for SPV is its ability to transform information from the linguistic domain to a mathematical domain for processing, and then to transform the result back into the linguistic domain for presentation. To ensure the safe and optimal operation of a nuclear plant, for example, reliable and valid data must be available to the human and computer operators. Based on these input data, the operators determine the current state of the power plant and project corrective actions for future states. This determination is based on the available data and the conceptual and mathematical models for the plant. A fault-tolerant SPV based on fuzzy logic can help the operators meet the objective of effective, efficient, and safe operation of the nuclear power plant. The ultimate product of this project will be a code that will assist plant operators in making informed decisions under uncertain conditions when conflicting signals may be present

  16. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructure monitoring, this paper addresses the problem of crack detection on the surface of French national roads by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution concerns fine-defect detection in the pavement surface. The approach is based on multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol designed for evaluating this difficult task, road pavement crack detection, is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  17. Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes

    Science.gov (United States)

    Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico

    2017-12-01

    Chemical compositional simulation of enhanced oil recovery and surfactant-enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of a newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows a sequential solution approach in which the governing equations are solved separately and implicitly. Through its application to a numerical experiment using a wettability alteration model, and through comparisons with an existing chemical compositional model's numerical results, the new model has proven to be practical, reliable and stable.

  18. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2012-01-01

    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
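    One published way to make addition "sloppy" is to OR the low-order bits instead of adding them, which cuts the carry chain; a bit-level simulation of that scheme (the paper's exact circuits may differ):

```python
def sloppy_add(a, b, k):
    """Approximate addition of non-negative ints: the low k bits are
    OR-ed (cheap, carry-free), the high parts are added exactly."""
    mask = (1 << k) - 1
    low = (a | b) & mask
    high = ((a >> k) + (b >> k)) << k
    return high | low

print(sloppy_add(7, 9, 4), 7 + 9)  # 15 vs 16: error bounded by the lost carries
```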

  19. Validation of diffusion tensor MRI measurements of cardiac microstructure with structure tensor synchrotron radiation imaging.

    Science.gov (United States)

    Teh, Irvin; McClymont, Darryl; Zdora, Marie-Christine; Whittington, Hannah J; Davidoiu, Valentina; Lee, Jack; Lygate, Craig A; Rau, Christoph; Zanette, Irene; Schneider, Jürgen E

    2017-03-10

    Diffusion tensor imaging (DTI) is widely used to assess tissue microstructure non-invasively. Cardiac DTI enables inference of cell and sheetlet orientations, which are altered under pathological conditions. However, DTI is affected by many factors, therefore robust validation is critical. Existing histological validation is intrinsically flawed, since it requires further tissue processing leading to sample distortion, is routinely limited in field-of-view and requires reconstruction of three-dimensional volumes from two-dimensional images. In contrast, synchrotron radiation imaging (SRI) data enables imaging of the heart in 3D without further preparation following DTI. The objective of the study was to validate DTI measurements based on structure tensor analysis of SRI data. One isolated, fixed rat heart was imaged ex vivo with DTI and X-ray phase contrast SRI, and reconstructed at 100 μm and 3.6 μm isotropic resolution, respectively. Structure tensors were determined from the SRI data and registered to the DTI data. Excellent agreement in helix angles (HA) and transverse angles (TA) was observed between the DTI and structure tensor synchrotron radiation imaging (STSRI) data, with HA(DTI − STSRI) = −1.4° ± 23.2° and TA(DTI − STSRI) = −1.4° ± 35.0° (mean ± 1.96 standard deviations across all voxels in the left ventricle). STSRI confirmed that the primary eigenvector of the diffusion tensor corresponds with the cardiomyocyte long-axis across the whole myocardium. We have used STSRI as a novel and high-resolution gold standard for the validation of DTI, allowing like-with-like comparison of three-dimensional tissue structures in the same intact heart free of distortion. This represents a critical step forward in independently verifying the structural basis and informing the interpretation of cardiac DTI data, thereby supporting the further development and adoption of DTI in structure-based electro-mechanical modelling and routine clinical
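    The structure-tensor side of the comparison can be sketched generically (this is a textbook 3D structure tensor, not the authors' STSRI pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_3d(volume, sigma=2.0):
    """Per-voxel smoothed structure tensor of a 3D image; for fibre-like
    structures, the eigenvector with the smallest eigenvalue points along
    the local long axis, comparable to the primary DTI eigenvector."""
    vol = np.asarray(volume, dtype=float)
    grads = np.gradient(vol)                   # (d/dz, d/dy, d/dx)
    T = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    return T  # diagonalize with np.linalg.eigh(T[z, y, x]) per voxel
```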

  20. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  1. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with a known ground truth in order to validate one segmentation method against another. Monte-Carlo simulations were performed using the GATE software (Geant4 Application for Tomographic Emission). For this, we characterized and modeled the 'γ Imager' gamma camera (Biospace™) by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we developed is based on modeling the levels of the image with probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter were used to segment real ¹⁸FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method and find anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study has yielded very encouraging results. (author) [fr]
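    The mixture-fitting idea can be illustrated with a plain EM algorithm for a Poisson mixture (the thesis uses a stochastic variant, SEM, plus a spatial ICM step, both omitted from this sketch):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def poisson_mixture_em(counts, k=2, iters=100, seed=0):
    """Fit mixture weights w and Poisson means lam to voxel counts."""
    rng = np.random.default_rng(seed)
    x = np.asarray(counts, dtype=float)
    lam = rng.uniform(0.5, 1.5, k) * x.mean()
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: per-voxel component responsibilities
        logp = (x[:, None] * np.log(lam) - lam
                - gammaln(x + 1)[:, None] + np.log(w))
        r = np.exp(logp - logsumexp(logp, axis=1, keepdims=True))
        # M-step: update weights and means
        w = r.mean(axis=0)
        lam = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return w, lam
```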

  2. Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greenwald, Martin

    2017-06-02

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with a view to the extrapolation needs of next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management

  3. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    This book presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. Following on from the first volume, this second instalment takes a more practical stance.

  4. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filtering…

  5. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.

  6. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images, the design of several special processing methods for container images, and some standard processing methods for two-dimensional digital images, including gray enhancement, pseudo-color enhancement, spatial filtering, edge enhancement and geometric processing. They describe how to carry out the above-mentioned processing under Windows 95 or Windows NT, and discuss ways to improve image processing speed on a microcomputer; good results were obtained

  7. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated their performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured…
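    The "affine" in the normalization suggests a gain-and-offset intensity map; a generic stand-in for such a step (assuming co-registered, same-shape scan and reference arrays; this is not the paper's actual Atlas Affine Normalization code):

```python
import numpy as np

def affine_intensity_normalize(scan, reference):
    """Least-squares gain/offset mapping of a scan's intensities onto a
    reference, removing slow multiplicative and additive drift."""
    gain, offset = np.polyfit(scan.ravel(), reference.ravel(), 1)
    return gain * scan + offset
```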

  8. Image processing to optimize wave energy converters

    Science.gov (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WECs), but only recently have they become a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WECs. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves and uses it to drive machinery. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applying it to simulated and real satellite images for which the frequency is known.
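    The frequency-estimation idea can be sketched with a plain 2D FFT peak in place of the complex modulated lapped transform filter bank (the function, names and heading convention are ours):

```python
import numpy as np

def wave_frequency_toward(image, dx, heading_rad):
    """Dominant spatial frequency of a wave image, projected onto a
    WEC heading by simple trigonometric scaling."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0], d=dx))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1], d=dx))
    iy, ix = np.unravel_index(np.argmax(F), F.shape)
    return fx[ix] * np.cos(heading_rad) + fy[iy] * np.sin(heading_rad)
```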

  9. Semantic orchestration of image processing services for environmental analysis

    Science.gov (United States)

    Ranisavljević, Élisabeth; Devin, Florent; Laffly, Dominique; Le Nir, Yannick

    2013-09-01

    In order to analyze environmental dynamics, a major process is the classification of the different phenomena of the site (e.g. ice and snow for a glacier). When using in situ pictures, this classification requires data pre-processing. Depending on the disturbances, not all pictures need the same sequence of processes. Until now, these sequences have been composed manually, which restricts the processing of large amounts of data. In this paper, we present how a semantic orchestration can automate the sequencing for the analysis. It combines two advantages: solving the problem of the amount of processing, and diversifying the possibilities in the data processing. We define a BPEL description to express the sequences. This BPEL uses web services to run the data processing. Each web service is semantically annotated using an ontology of image processing. The dynamic modification of the BPEL is done using SPARQL queries on these annotated web services. The results obtained by a prototype implementing this method validate the construction of the different workflows, which can be applied to a large number of pictures.

  10. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing functions were implemented, such as displaying an image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images, and that the 8-inch flexible diskette is still a useful medium for transferring image data. (author)

  11. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described, for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  12. A Self-Image Questionnaire for Young Adolescents (SIQYA): Reliability and Validity Studies.

    Science.gov (United States)

    Peterson, Anne C.; And Others

    1984-01-01

    The Self-Image Questionnaire for Young Adolescents (SIQYA), an adaptation of the Offer Self-Image Questionnaire (OSIQ), designed to measure aspects of self-image among young adolescents, was administered to two groups of sixth graders. The development of the SIQYA is described and reliability and validity results are presented. (EGS)

  13. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code); Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code); Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples); Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account…)

  14. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents software developments connected with digital image processing and analysis at the Centre. In image processing, one resorts either to alteration of grey-level values, so as to enhance features in the image, or to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loève transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections, etc. In short, in image processing both the input and the output are images, whereas in image analysis the input is an image but the output is a set of numbers and graphs. (author). 19 refs

  15. Compressive strength test for cemented waste forms: validation process

    International Nuclear Information System (INIS)

    Haucz, Maria Judite A.; Candido, Francisco Donizete; Seles, Sandro Rogerio

    2007-01-01

    In the Cementation Laboratory (LABCIM) of the Development Centre of Nuclear Technology (CNEN/CDTN-MG), hazardous/radioactive wastes are incorporated in cement to transform them into monolithic products, preventing or minimizing contaminant release to the environment. The compressive strength test is important for evaluating the quality of the cemented product: the compression load necessary to rupture the cemented waste form is determined. In LABCIM a specific procedure was developed to determine the compressive strength of cemented waste forms, based on the Brazilian standard NBR 7215. The accreditation of this procedure is essential to assure reproducible and accurate results in the evaluation of these products. To achieve this goal, the Laboratory personnel implemented technical and administrative improvements in accordance with the NBR ISO/IEC 17025 standard, 'General requirements for the competence of testing and calibration laboratories'. As the developed procedure is not a standard one, ISO/IEC 17025 requires its validation, for which several methodologies exist. This paper describes the current status of the accreditation project, especially the validation process of the referred procedure and its results. (author)

  16. Variational PDE Models in Image Processing

    National Research Council Canada - National Science Library

    Chan, Tony F; Shen, Jianhong; Vese, Luminita

    2002-01-01

    … These include astronomy and aerospace exploration, medical imaging, molecular imaging, computer graphics, human and machine vision, telecommunication, auto-piloting, surveillance video, and biometrics…

  17. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images play an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. To demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology to the processing of radiation images of large containers are given

  18. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  19. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available [The full-text excerpt is garbled in extraction. It describes three processes for transforming information between the entities of the classical modelling-and-simulation framework - a problem entity (reality), a conceptual model and a computerized model - linked through model qualification, model verification and model validation, and it cites J.C. Refsgaard, "Modelling Guidelines - terminology and guiding principles", Advances in Water Resources, Vol. 27, No. 1, January 2004, pp. 71-82, Elsevier; and N. Oreskes et al., "Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences", Science, Vol. 263.]

  20. Design and validation of Segment - freely available software for cardiovascular image analysis

    International Nuclear Information System (INIS)

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-01

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page (http://segment.heiberg.se). Segment
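
    The test-script idea, recomputing outputs for fixed cases and checking them against previously validated values, is essentially a regression suite. A hedged sketch of the pattern (the segment_api wrapper, case name and reference value are hypothetical, purely to illustrate the idea):

    ```python
    # Regression-style validation: recompute a derived quantity for a fixed
    # test case and compare against a previously validated reference value.
    def test_ejection_fraction_regression():
        import segment_api                    # hypothetical wrapper around the software
        ef = segment_api.compute_ejection_fraction("validation_case_01")
        reference, tolerance = 0.62, 0.02     # stored from a validated release
        assert abs(ef - reference) < tolerance
    ```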

  1. Validation of the Gatortail method for accurate sizing of pulmonary vessels from 3D medical images.

    Science.gov (United States)

    O'Dell, Walter G; Gormaley, Anne K; Prida, David A

    2017-12-01

    Detailed characterization of changes in vessel size is crucial for the diagnosis and management of a variety of vascular diseases. Because clinical measurement of vessel size is typically dependent on the radiologist's subjective interpretation of the vessel borders, it is often prone to high inter- and intra-user variability. Automatic methods of vessel sizing have been developed for two-dimensional images but a fully three-dimensional (3D) method suitable for vessel sizing from volumetric X-ray computed tomography (CT) or magnetic resonance imaging has heretofore not been demonstrated and validated robustly. In this paper, we refined and objectively validated Gatortail, a method that creates a mathematical geometric 3D model of each branch in a vascular tree, simulates the appearance of the virtual vascular tree in a 3D CT image, and uses the similarity of the simulated image to a patient's CT scan to drive the optimization of the model parameters, including vessel size, to match that of the patient. The method was validated with a 2-dimensional virtual tree structure under deformation, and with a realistic 3D-printed vascular phantom in which the diameter of 64 branches were manually measured 3 times each. The phantom was then scanned on a conventional clinical CT imaging system and the images processed with the in-house software to automatically segment and mathematically model the vascular tree, label each branch, and perform the Gatortail optimization of branch size and trajectory. Previously proposed methods of vessel sizing using matched Gaussian filters and tubularity metrics were also tested. The Gatortail method was then demonstrated on the pulmonary arterial tree segmented from a human volunteer's CT scan. The standard deviation of the difference between the manually measured and Gatortail-based radii in the 3D physical phantom was 0.074 mm (0.087 in-plane pixel units for image voxels of dimension 0.85 × 0.85 × 1.0 mm) over the 64 branches
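
    The simulate-and-compare optimization at the heart of Gatortail can be illustrated in one dimension: blur an ideal vessel cross-section with an assumed point-spread model and pick the radius that best matches the measured profile. A toy Python sketch (the Gaussian blur width and search bounds are assumptions; the real method optimizes full 3D branch models against CT volumes):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_vessel_radius(profile, x, blur_sigma=0.5):
        """Fit a radius by matching a simulated, blurred cross-section to a
        measured intensity profile sampled at positions x (mm)."""
        def simulated(r):
            disk = (np.abs(x) <= r).astype(float)        # ideal vessel cross-section
            kernel = np.exp(-0.5 * (x / blur_sigma) ** 2)
            kernel /= kernel.sum()
            return np.convolve(disk, kernel, mode="same")
        def cost(r):
            s = simulated(r)
            s = s * (profile.max() - profile.min()) + profile.min()
            return np.sum((s - profile) ** 2)
        return minimize_scalar(cost, bounds=(0.2, 10.0), method="bounded").x
    ```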

  2. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as the MTF, NPS, and DQE were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
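
    As a concrete anchor for these quantities: the NPS is commonly estimated by averaging periodograms of ROIs from a flat-field (white) image, and the DQE then combines the large-area signal, the MTF and the photon fluence. A hedged Python sketch (ROI size, detrending and normalization details are simplified assumptions; IEC 62220-1 prescribes them exactly):

    ```python
    import numpy as np

    def noise_power_spectrum(flat, roi=128, pixel_pitch=0.1):
        """Estimate the 2D NPS from a flat-field image by averaging
        periodograms of half-overlapping ROIs (mean-subtracted only;
        real measurements use higher-order detrending)."""
        h, w = flat.shape
        acc, n = np.zeros((roi, roi)), 0
        for y in range(0, h - roi + 1, roi // 2):
            for x in range(0, w - roi + 1, roi // 2):
                block = flat[y:y + roi, x:x + roi].astype(float)
                block -= block.mean()
                acc += np.abs(np.fft.fft2(block)) ** 2
                n += 1
        return acc * (pixel_pitch ** 2) / (roi * roi) / n

    def dqe(mtf, nps, q, large_area_signal):
        """DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)), with q the photon
        fluence per unit area (the IEC 62220-1 form, simplified)."""
        return (large_area_signal ** 2) * mtf ** 2 / (q * nps)
    ```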

  3. Image processing system for videotape review

    International Nuclear Information System (INIS)

    Bettendroffer, E.

    1988-01-01

    In a nuclear plant, the areas in which fissile materials are stored or handled have to be monitored continuously. One method of surveillance is to record the pictures of TV cameras at fixed time intervals on special video recorders. The 'time lapse' recorded tape is played back at normal speed and an inspector visually checks the pictures. This method requires much manpower, and an automated method would be useful. The present report describes an automatic reviewing method based on an image processing system; the system detects scene changes in the picture sequence and stores the reduced data set on a separate video tape. The resulting reduction of reviewing time for the inspector is important for surveillance data with few movements
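
    The core of such a reviewing system is scene-change detection. A minimal Python sketch (frame differencing against the last kept frame; the threshold is an assumption to be tuned against camera noise, and the original system's detector is not described at this level):

    ```python
    import numpy as np

    def scene_changes(frames, threshold=12.0):
        """Return indices of frames whose mean absolute grey-level
        difference from the previous kept frame exceeds a threshold;
        only these frames need to be written to the review tape."""
        kept = [0]
        ref = frames[0].astype(float)
        for i, frame in enumerate(frames[1:], start=1):
            f = frame.astype(float)
            if np.abs(f - ref).mean() > threshold:
                kept.append(i)
                ref = f
        return kept
    ```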

  4. Validation of the production process of core-equipment HYNIC-Bombesin-Sn

    International Nuclear Information System (INIS)

    Rubio C, N. I.

    2008-01-01

    The validation process establishes documented evidence providing a high degree of assurance that a specific process will consistently produce a product meeting its preset specifications and quality attributes, and therefore ensures the efficiency and effectiveness of a product. The radiopharmaceutical 99m Tc-HYNIC-Bombesin belongs to the gastrin-releasing peptide (GRP) analogues of bombesin that are radiolabelled with technetium-99 metastable for obtaining molecular images. It is obtained from freeze-dried formulation kits (core equipment) and has shown very high stability in human serum, specific binding to receptors and rapid internalization. Biodistribution data in mice showed rapid blood clearance with predominantly renal excretion and specific binding to tissues with a positive response to GRP receptors. According to biokinetic studies performed on patients with breast cancer, the breasts show a marked asymmetry, with increased uptake in the neoplastic breast, whereas in healthy women the uptake of the radiopharmaceutical is symmetrical in both breasts. No adverse reactions have been reported. This paper describes the prospective validation of the core equipment HYNIC-Bombesin-Sn, which showed consistently that the product meets the preset specifications and quality attributes of the third-generation diagnostic radiopharmaceutical obtained from it: 99m Tc-HYNIC-Bombesin. The process was successfully validated, thereby ensuring the efficiency and effectiveness of this diagnostic agent as a preliminary step towards approval for marketing. (Author)

  5. On-line MR imaging for dose validation of abdominal radiotherapy

    NARCIS (Netherlands)

    Glitzner, M; Crijns, S P M; de Senneville, B Denis; Kontaxis, C; Prins, F M; Lagendijk, J J W; Raaymakers, B W

    2015-01-01

    For quality assurance and adaptive radiotherapy, validation of the actual delivered dose is crucial. Intrafractional anatomy changes cannot be captured satisfactorily during treatment with hitherto available imaging modalities. Consequently, dose calculations are based on the assumption of static

  6. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration

    Science.gov (United States)

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L.

    2016-03-01

    We created a metastasis imaging and analysis platform consisting of software and a multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging, which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence and enabling 3D microscopic imaging of the entire mouse with single-metastatic-cell sensitivity. To register MRI volumes to the cryo bright-field reference, we used our standard mutual-information, non-rigid registration, which proceeded: preprocess --> affine --> B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately, and combined the results. Though sliding organ required manual annotation, it provided the best result and served as a standard against which to measure the other registration methods. Regularization parameters for the standard and mask methods were optimized in a grid search. Evaluations consisted of Dice overlap and visual scoring of a checkerboard display. Standard registration had an accuracy of 2 voxels in all regions except near the kidney, where there was 5 voxels of sliding. After mask and sliding-organ correction, kidney sliding was within 2 voxels, and Dice overlap increased 4%-10% with mask compared to standard. Mask generated results comparable with sliding organ and allowed a semi-automatic process.
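
    The Dice overlap used for evaluation is worth making explicit. A small Python helper (binary masks assumed):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|).
        Used to score how well registered organ masks agree."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```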

  7. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
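
    The final step, identifying peaks in the energy levels of the post-processed frame, can be sketched with a local-maximum filter. A hedged Python example (window size and threshold are illustrative, not from the patent):

    ```python
    import numpy as np
    from scipy import ndimage

    def find_peaks_2d(energy, size=9, threshold=None):
        """Local maxima of a post-processed energy image; peaks above a
        threshold are taken as candidate subsurface objects."""
        if threshold is None:
            threshold = energy.mean() + 3 * energy.std()
        local_max = ndimage.maximum_filter(energy, size=size) == energy
        return np.argwhere(local_max & (energy > threshold))
    ```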

  8. Endemic Images and the Desensitization Process.

    Science.gov (United States)

    Saigh, Philip A.; Antoun, Fouad T.

    1984-01-01

    Examined the effects of endemic images on levels of anxiety and achievement of 48 high school students. Results suggested that a combination of endemic images and study skills training was as effective as desensitization plus study skills training. Includes the endemic image questionnaire. (JAC)

  9. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  10. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and the effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  11. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for the volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behavior and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant for describing how radiologists interact with and manipulate volumetric images.

  12. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  13. Hybrid imaging: Instrumentation and Data Processing

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Rausch, Ivo; Shiyam Sundar, Lalith K.; Lassen, Martin L.; Muzik, Otto; Moser, Ewald; Papp, Laszlo; Beyer, Thomas

    2018-05-01

    State-of-the-art patient management frequently requires the use of non-invasive imaging methods to assess the anatomy, function or molecular-biological conditions of patients or study subjects. Such imaging methods can be singular, providing either anatomical or molecular information, or they can be combined, thus, providing "anato-metabolic" information. Hybrid imaging denotes image acquisitions on systems that physically combine complementary imaging modalities for an improved diagnostic accuracy and confidence as well as for increased patient comfort. The physical combination of formerly independent imaging modalities was driven by leading innovators in the field of clinical research and benefited from technological advances that permitted the operation of PET and MR in close physical proximity, for example. This review covers milestones of the development of various hybrid imaging systems for use in clinical practice and small-animal research. Special attention is given to technological advances that helped the adoption of hybrid imaging, as well as to introducing methodological concepts that benefit from the availability of complementary anatomical and biological information, such as new types of image reconstruction and data correction schemes. The ultimate goal of hybrid imaging is to provide useful, complementary and quantitative information during patient work-up. Hybrid imaging also opens the door to multi-parametric assessment of diseases, which will help us better understand the causes of various diseases that currently contribute to a large fraction of healthcare costs.

  14. Hybrid Imaging: Instrumentation and Data Processing

    Directory of Open Access Journals (Sweden)

    Jacobo Cal-Gonzalez

    2018-05-01

    Full Text Available State-of-the-art patient management frequently requires the use of non-invasive imaging methods to assess the anatomy, function or molecular-biological conditions of patients or study subjects. Such imaging methods can be singular, providing either anatomical or molecular information, or they can be combined, thus, providing “anato-metabolic” information. Hybrid imaging denotes image acquisitions on systems that physically combine complementary imaging modalities for an improved diagnostic accuracy and confidence as well as for increased patient comfort. The physical combination of formerly independent imaging modalities was driven by leading innovators in the field of clinical research and benefited from technological advances that permitted the operation of PET and MR in close physical proximity, for example. This review covers milestones of the development of various hybrid imaging systems for use in clinical practice and small-animal research. Special attention is given to technological advances that helped the adoption of hybrid imaging, as well as to introducing methodological concepts that benefit from the availability of complementary anatomical and biological information, such as new types of image reconstruction and data correction schemes. The ultimate goal of hybrid imaging is to provide useful, complementary and quantitative information during patient work-up. Hybrid imaging also opens the door to multi-parametric assessment of diseases, which will help us better understand the causes of various diseases that currently contribute to a large fraction of healthcare costs.

  15. Validation of a new imaging device for telemedical ulcer monitoring

    DEFF Research Database (Denmark)

    Rasmussen, Benjamin Schnack; Frøkjær, Johnny; Bisgaard Jørgensen, Line

    2015-01-01

    PURPOSE: To clarify whether a new portable imaging device (PID) providing 3D images for telemedical use constitutes a more correct expression of the clinical situation compared to standard telemedical equipment, in this case the iPhone 4s. METHOD: We investigated intra- and inter-individual variability between the new portable camera and the iPhone images vs. clinical assessment as the 'gold standard'. The study included 36 foot ulcers. Four specialists rated the ulcers and filled out a questionnaire, which formed the basis of the evaluation. The gold standard was established by having each ulcer assessed twice by two different specialists. RESULTS: We found fair to very good intra-rater agreement for the new PID and the iPhone, respectively. Kappa values were moderate to very good with respect to inter-rater agreement, except for two variables. The agreement between standard and new equipment compared to the gold...

  16. Comparative study of image restoration techniques in forensic image processing

    Science.gov (United States)

    Bijhold, Jurrien; Kuijper, Arjan; Westhuis, Jaap-Harm

    1997-02-01

    In this work we investigated the forensic applicability of some state-of-the-art image restoration techniques for digitized video-images and photographs: classical Wiener filtering, constrained maximum entropy, and some variants of constrained minimum total variation. Basic concepts and experimental results are discussed. Because all methods appeared to produce different results, a discussion is given of which method is the most suitable, depending on the image objects that are questioned, prior knowledge and type of blur and noise. Constrained minimum total variation methods produced the best results for test images with simulated noise and blur. In cases where images are the most substantial part of the evidence, constrained maximum entropy might be more suitable, because its theoretical basis predicts a restoration result that shows the most likely pixel values, given all the prior knowledge used during restoration.
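
    Of the methods compared, the classical Wiener filter is the easiest to sketch. A minimal frequency-domain version in Python (assuming a known, centred PSF and a flat noise-to-signal constant K; the constrained maximum entropy and total-variation methods are iterative and considerably more involved):

    ```python
    import numpy as np

    def wiener_restore(blurred, psf, k=1e-2):
        """Frequency-domain Wiener restoration:
        F = conj(H) / (|H|^2 + K) * G, with K tuned empirically."""
        pad = np.zeros_like(blurred, dtype=float)
        ph, pw = psf.shape
        pad[:ph, :pw] = psf
        # Centre the PSF origin at (0, 0) for circular convolution.
        pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        H = np.fft.fft2(pad)
        G = np.fft.fft2(blurred)
        F = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F))
    ```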

  17. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies in computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of the image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious one; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with the details of the list of documents required for the loan approval process [3]. The applying customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  18. Verification and Validation in a Rapid Software Development Process

    Science.gov (United States)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  19. Verification and validation process for the safety software in KNICS

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Lee, Jang-Soo; Kim, Jang-Yeol

    2004-01-01

    This paper describes the Verification and Validation (V and V) process for the safety software of the Programmable Logic Controller (PLC), Digital Reactor Protection System (DRPS), and Engineered Safety Feature-Component Control System (ESF-CCS) that are being developed in the Korea Nuclear Instrumentation and Control System (KNICS) projects. Specifically, it presents DRPS V and V experience according to the software development life cycle. The main activities of the DRPS V and V process are the preparation of software planning documentation, verification of the Software Requirement Specification (SRS), Software Design Specification (SDS) and codes, and testing of the integrated software and the integrated system. In addition, they include software safety analysis and software configuration management. SRS V and V of the DRPS consists of technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, preparation of the integrated system test plan, software safety analysis, and software configuration management. Likewise, SDS V and V of the DRPS consists of technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, preparation of the integrated software test plan, software safety analysis, and software configuration management. Code V and V of the DRPS consists of traceability analysis, source code inspection, test case and test procedure generation, software safety analysis, and software configuration management. Testing is the major V and V activity of the software integration and system integration phases. Software safety analysis at the SRS phase uses the Hazard and Operability (HAZOP) method; at the SDS phase it uses HAZOP and Fault Tree Analysis (FTA); and at the implementation phase it uses FTA. Finally, software configuration management is performed using the Nu-SCM (Nuclear Software Configuration Management) tool developed by the KNICS project. Through these activities, we believe we can achieve the functionality, performance, reliability and safety that are V

  20. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    Science.gov (United States)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

    File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers, fragmented within the scan area, has been done before. However, fragmentation within the Define Huffman Table (DHT) segment was yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. Firstly, three fragmentation points within the DHT area are listed. Secondly, a few novel validators are proposed to detect these fragmentations. The results obtained from tests done on manually fragmented JPEG files showed that all three fragmentation points within the DHT are successfully detected using the validators.
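
    A validator of this kind can be sketched by walking the JPEG marker structure and checking that each segment's declared length stays inside the buffer. A simplified Python example (it ignores fill bytes and restart markers; the paper's validators inspect the DHT contents more closely):

    ```python
    import struct

    def scan_segments(data: bytes):
        """Walk JPEG marker segments. A DHT segment is marker 0xC4,
        followed by a 2-byte big-endian length; a length that points
        past the end of the buffer is one simple signal of a
        fragmentation point."""
        i = 2  # skip SOI (0xFFD8)
        while i + 4 <= len(data):
            if data[i] != 0xFF:
                break
            marker = data[i + 1]
            if marker == 0xDA:              # SOS: entropy-coded data follows
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            if i + 2 + length > len(data):
                yield (marker, i, "truncated")   # possible fragmentation point
                break
            yield (marker, i, "ok")
            i += 2 + length
    ```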

  1. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

    Information on medical images is classified into two patterns: 1) off-line images on film - x-ray films, cell images, chromosome images etc.; 2) on-line images detected through sensors - RI images, ultrasonic images, thermograms etc. These images are divided into three characteristic classes: two-dimensional, three-dimensional and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many kinds of images have been studied: RI, thermogram, x-ray film, x-ray TV image, cancer cell, blood cell, bacteria, chromosome, ultrasonics, and vascular images. Processing of RI images is useful and easy because of their digital display; software covers smoothing, restoration (iterative approximation), Fourier transformation, differentiation and subtraction. Images on stomach and chest x-ray films have been processed automatically utilizing computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells, and recently for blood cell classification, have also been developed. Acoustical holography imaging and moire topography have also been studied in Japan. (author)

  2. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes

    Science.gov (United States)

    The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...

  3. Fake currency detection using image processing

    Science.gov (United States)

    Agasti, Tushar; Burand, Gajanan; Wade, Pratik; Chitra, P.

    2017-11-01

    The advancement of color printing technology has increased the rate of fake currency note printing and large-scale duplication of notes. A few years back, printing could be done only in a print house, but now anyone can print a currency note with high accuracy using a simple laser printer. As a result, the circulation of fake notes in place of genuine ones has increased greatly. India has unfortunately been cursed with problems like corruption and black money, and counterfeiting of currency notes is also a big problem. This leads to the design of a system that detects fake currency notes in less time and in a more efficient manner. The proposed system gives an approach to verifying Indian currency notes. Verification of a currency note is done using the concepts of image processing. This article describes the extraction of various features of Indian currency notes. MATLAB software is used to extract the features of the note. The proposed system has advantages like simplicity and high processing speed. The result predicts whether the currency note is genuine or fake.
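
    One plausible building block for such verification is normalized cross-correlation of a known security-mark template against the scanned note. A hedged sketch in Python with OpenCV (the paper itself works in MATLAB; the threshold is an assumption):

    ```python
    import cv2

    def match_security_feature(note_gray, template_gray, threshold=0.7):
        """Normalized cross-correlation of a security-mark template
        against a scanned note; a low peak score flags a suspect note.
        One illustrative feature - the paper extracts several."""
        result = cv2.matchTemplate(note_gray, template_gray,
                                   cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        return score >= threshold, score
    ```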

  4. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment.

  5. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  6. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  7. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David; Gereige, Issam; Gourgon, Cé cile

    2013-01-01

    patterns, sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results are reported for 1D and 2D imprinted gratings at different microscopic image magnifications

  8. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of the human eyes in real time and feed it back to the control system to improve the resolution of the rendered image at the center of the field of view. With current display instruments, this method can attend to both the field of view of the virtual scene and its resolution, greatly improving the immersion of the virtual system. Therefore, detecting the gazing direction of the human eyes rapidly and exactly is the basis for realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction; by analyzing images of the eyes captured by the cameras, these changes allow the gazing direction to be determined. Experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm carries out gazing-direction detection on normal eye images directly, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
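
    Pupil localization, the first step of an image-based method like this, can be sketched with a Hough circle transform. An illustrative Python/OpenCV example (the parameter values are assumptions, and the paper's own algorithm is not specified at this level):

    ```python
    import cv2

    def locate_pupil(eye_gray):
        """Rough pupil localization: median-blur, then Hough circles.
        The offset of the pupil centre from the eye-socket centre can
        serve as a simple gaze-direction cue."""
        blurred = cv2.medianBlur(eye_gray, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT,
                                   dp=1.5, minDist=50,
                                   param1=100, param2=30,
                                   minRadius=5, maxRadius=60)
        if circles is None:
            return None
        x, y, r = circles[0][0]
        return float(x), float(y), float(r)
    ```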

  9. Image processing techniques for digital orthophotoquad production

    Science.gov (United States)

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  10. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in the multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  11. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang , Chengjun; Hua , Chunjian

    2013-01-01

    International audience; From the perspective of actual production, the paper presents advances in the application of image processing to fruit grading from several aspects, such as the processing precision and processing speed of image processing technology. Furthermore, effectively combining the different algorithms for detecting size, shape, color and defects, so as to reduce the complexity of each algorithm and achieve a balance between processing precision and processing speed, is key to au...

  12. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for the mapping of a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is using dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  13. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, and texture-based and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained with more specialized equipment: mainframe computers and high-precision mechanical scanning digitizers. The procedures are intended as a preliminary stage for automatic defect detection
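
    Histogram modification, the first technique listed, is easy to make concrete. A minimal global histogram equalization for 8-bit radiographs in Python (assumes a non-constant uint8 image):

    ```python
    import numpy as np

    def equalize_histogram(img):
        """Remap grey levels through the normalized cumulative histogram
        to spread contrast over the full 8-bit range."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        lut = np.round(255 * cdf).astype(np.uint8)
        return lut[img]
    ```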

  14. An integral design strategy combining optical system and image processing to obtain high resolution images

    Science.gov (United States)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted for the image simulation, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for these optical imaging systems are not the best, they can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, while simultaneously attaining high-resolution images, which gives it a promising perspective for industrial application.
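
    The joint merit idea lends itself to a small numerical experiment: score a candidate PSF by simulating the imaging chain end to end. A hedged Python sketch (circular-convolution blur, a flat Wiener constant and the noise level are illustrative assumptions; the paper's pipeline runs through ZEMAX and its own Wiener implementation):

    ```python
    import numpy as np

    def restoration_mse(scene, psf, noise_sigma=0.01, k=1e-2, seed=0):
        """Score one candidate optical design: blur the scene with the
        design's PSF, add sensor noise, restore with a Wiener filter,
        and return the MSE against the original scene - a merit term
        that lets the optical design 'see' the digital restoration."""
        rng = np.random.default_rng(seed)
        pad = np.zeros_like(scene, dtype=float)
        ph, pw = psf.shape
        pad[:ph, :pw] = psf
        pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        H = np.fft.fft2(pad)
        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
        noisy = blurred + noise_sigma * rng.standard_normal(scene.shape)
        F = np.conj(H) / (np.abs(H) ** 2 + k) * np.fft.fft2(noisy)
        restored = np.real(np.fft.ifft2(F))
        return float(np.mean((restored - scene) ** 2))
    ```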

  15. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  16. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    Science.gov (United States)

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. Copyright © 2015 Elsevier Inc. All rights reserved.
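
    ASL quantification, the first stage ASAP lists, is usually a closed-form expression. A hedged sketch of the single-compartment pCASL formula from the consensus white paper (Alsop et al.); ASAP's own implementation may differ in detail:

    ```python
    import numpy as np

    def pcasl_cbf(dM, M0, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
        """Single-compartment pCASL quantification in ml/100g/min.
        dM is the control-label difference image, M0 the proton-density
        image; pld, tau and t1b are in seconds, alpha is labeling
        efficiency and lam the blood-brain partition coefficient."""
        num = 6000.0 * lam * dM * np.exp(pld / t1b)
        den = 2.0 * alpha * t1b * M0 * (1.0 - np.exp(-tau / t1b))
        return num / den
    ```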

  17. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Full Text Available Independent component analysis, combined with various strategies for cross-validation, uncertainty estimation by jack-knifing and estimation of critical Hotelling's T2 limits, as proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near-infrared hyperspectral images. The results showed that the approach used here can successfully be applied to unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and region-of-interest scheme validation, is also presented.
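
    The ICA step can be sketched by unfolding the hyperspectral cube into a pixels-by-bands matrix. A minimal Python example with scikit-learn's FastICA (component count and reshaping convention are assumptions):

    ```python
    from sklearn.decomposition import FastICA

    def unmix_cube(cube, n_components=4):
        """Unfold an (H, W, bands) hyperspectral cube to (pixels, bands),
        extract independent components, and fold the scores back into
        component images for pixel-wise classification."""
        h, w, b = cube.shape
        X = cube.reshape(-1, b)
        ica = FastICA(n_components=n_components, random_state=0)
        scores = ica.fit_transform(X)         # (pixels, n_components)
        return scores.reshape(h, w, n_components)
    ```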

  18. Creation and Validation of the Self-esteem/Self-image Female Sexuality (SESIFS) Questionnaire

    OpenAIRE

    Lordello, Maria CO; Ambrogini, Carolina C; Fanganiello, Ana L; Embiruçu, Teresa R; Zaneti, Marina M; Veloso, Laise; Piccirillo, Livia B; Crude, Bianca L; Haidar, Mauro; Silva, Ivaldo

    2014-01-01

    Introduction: Self-esteem and self-image are psychological aspects that affect sexual function. Aims: To validate a new measurement tool that correlates the concepts of self-esteem, self-image, and sexuality. Methods: A 20-question test (the self-esteem/self-image female sexuality [SESIFS] questionnaire) was created and tested on 208 women. Participants answered: Rosenberg's self-esteem scale, the female sexual quotient (FSQ), and the SESIFS questionnaire. Pearson's correlation coefficient was u...

  19. Early differential processing of material images: Evidence from ERP classification.

    Science.gov (United States)

    Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R

    2014-06-24

    Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained more and more interest over the years as its importance in natural viewing conditions has been ignored for a long time. In addition to analyzing standard ERPs, we conducted a single trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.

  20. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  1. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  2. Detection and characterization of exercise induced muscle damage (EIMD) via thermography and image processing

    DEFF Research Database (Denmark)

    Avdelidis, Nicolas; Kappatos, Vassilios; Georgoulas, George

    2017-01-01

    of commonly used measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last 6 decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). A semi-automatic image processing software package was designed, and regions of the left and right leg were segmented based on superpixels. The image is segmented into a number of regions and the user is able to intervene, indicating which regions belong to each of the two legs. In order to validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately post and 24, 48...
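
    The superpixel over-segmentation step can be sketched with SLIC. A hedged Python example (assumes an RGB-exported thermogram; the filename and segment count are placeholders, and the study's software adds a user-labelling step on top of this):

    ```python
    from skimage import io, segmentation

    def leg_superpixels(path="thermogram.png", n_segments=400):
        """Over-segment a thermographic image into superpixels; the
        user then labels which superpixels belong to the left and
        right leg (the semi-automatic step described above)."""
        img = io.imread(path)
        return segmentation.slic(img, n_segments=n_segments,
                                 compactness=10, start_label=1)
    ```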

  3. Statistical and heuristic image noise extraction (SHINE): a new method for processing Poisson noise in scintigraphic images

    International Nuclear Information System (INIS)

    Hannequin, Pascal; Mas, Jacky

    2002-01-01

    Poisson noise is one of the factors degrading scintigraphic images, especially at low count levels, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in scintigraphic images while preserving the resolution, the contrast and the texture. The SHINE procedure consists of dividing the image into 4 x 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors, which are selected using an original statistical variance test. The SHINE procedure has been validated using a numerical line phantom and a real phantom with hot and cold spots. The reference images are the noise-free simulated images for the numerical phantom and an extremely high-count image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and clinical data including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE-processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT have shown that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images.
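
    The block-wise idea behind SHINE can be sketched compactly. In the sketch below, the correspondence analysis is approximated by a truncated SVD of the tile matrix and the factor-selection test is replaced by a simple explained-variance cutoff; both substitutions are our own, not the published procedure.

```python
import numpy as np

def shine_like_denoise(image, block=4, var_threshold=0.95):
    """Block-wise factor denoising in the spirit of SHINE: split the
    image into block x block tiles, stack them as rows, and rebuild
    each tile from the leading factors of a truncated SVD."""
    h, w = image.shape
    h, w = h - h % block, w - w % block       # crop to block multiples
    tiles = (image[:h, :w].astype(float)
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block * block))
    u, s, vt = np.linalg.svd(tiles, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, var_threshold)) + 1
    denoised = (u[:, :k] * s[:k]) @ vt[:k]
    return (denoised.reshape(h // block, w // block, block, block)
                    .swapaxes(1, 2)
                    .reshape(h, w))
```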

  4. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern medical diagnostic imaging devices, and system performance has improved dramatically since the early 1990s thanks to rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes the 'main stream' digital signal processing functions, along with the associated implementation considerations, in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.

  5. An invertebrate embryologist's guide to routine processing of confocal images.

    Science.gov (United States)

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.

  6. The Study of Image Processing Method for AIDS PA Test

    International Nuclear Information System (INIS)

    Zhang, H J; Wang, Q G

    2006-01-01

    At present, the main test technique for AIDS in China is the PA test. Because the judgment of the PA test image still depends on the operator, the error ratio is high. To resolve this problem, we present a new image processing technique: first, many samples are processed to obtain data including the coordinates of the centers and the ranges of the image classes; the image is then segmented using these data; finally, the result is exported after the data have been judged. This technique is simple and accurate, and it also turns out to be suitable for processing and analyzing the PA test images of other infectious diseases.
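
    The description above is brief, so any concrete rendering is necessarily speculative. The sketch below captures one plausible reading: segment the reaction spot by thresholding, then classify by comparing its centroid against class centers and ranges learned from many reference samples. The threshold, the data structures and all names are our own assumptions.

```python
import numpy as np

def classify_pa_image(image, class_centers, class_ranges, threshold=128):
    """Toy nearest-centre classification of a PA test image: segment
    the dark reaction spot, compute its centroid, and assign the first
    class whose learned centre lies within the learned range."""
    ys, xs = np.nonzero(image < threshold)    # dark pixels = reaction spot
    if xs.size == 0:
        return None
    centroid = np.array([xs.mean(), ys.mean()])
    for label, center in class_centers.items():
        if np.linalg.norm(centroid - center) <= class_ranges[label]:
            return label
    return "indeterminate"
```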

  7. High angular resolution diffusion imaging : processing & visualization

    NARCIS (Netherlands)

    Prckovska, V.

    2010-01-01

    Diffusion tensor imaging (DTI) is a recent magnetic resonance imaging (MRI) technique that can map the orientation architecture of neural tissues in a completely non-invasive way by measuring the directional specificity (anisotropy) of the local water diffusion. However, in areas of complex fiber

  8. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    Science.gov (United States)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with dozens of applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors that employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but as yet has no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture made to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of the curvelet transform computation. The classification task is realized through the use of the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six
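
    The validation stage (curvelet features fed to an SVM) might look roughly like the sketch below, assuming the curvelet feature vectors for candidate ROIs have already been computed and saved; the file names and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: curvelet feature vectors for candidate ROIs and
# binary labels (1 = pedestrian, 0 = background clutter).
X_train = np.load("curvelet_features_train.npy")   # (n_rois, n_features)
y_train = np.load("labels_train.npy")

# An RBF-kernel SVM is a common default for this kind of validation step.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

X_test = np.load("curvelet_features_test.npy")
pedestrian_mask = clf.predict(X_test) == 1         # ROIs kept as pedestrians
```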

  9. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  11. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

    In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to the anomalies present in parking spaces, such as uneven illumination, distorted slot lines and overlapping of cars, present-day conventional algorithms have difficulty processing the image for accurate results. The algorithm proposed uses a combination of image pre-processing and false contour detection ...

  12. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department or a scientist and engineer who is doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  13. FLAME MONITORING IN POWER STATION BOILERS USING IMAGE PROCESSING

    Directory of Open Access Journals (Sweden)

    K. Sujatha

    2012-05-01

    Full Text Available Combustion quality in power station boilers plays an important role in minimizing flue gas emissions. In the present work, various intelligent schemes to infer the flue gas emissions by monitoring the flame colour at the furnace of the boiler are proposed. Flame image monitoring involves capturing the flame video over a period of time and measuring various parameters, such as carbon dioxide (CO2), excess oxygen (O2), nitrogen dioxide (NOx), sulphur dioxide (SOx) and carbon monoxide (CO) emissions, plus the flame temperature at the core of the fireball, the air/fuel ratio and the combustion quality. The higher the quality of combustion, the lower the flue gas emissions at the exhaust. The flame video was captured using an infrared camera and then split into frames for further analysis. A video splitter is used for progressive extraction of the flame images from the video. The images of the flame are then pre-processed to reduce noise. The conventional classification and clustering techniques include the Euclidean distance (L2 norm) classifier. The intelligent classifiers include the Radial Basis Function network (RBF), the Back Propagation Algorithm (BPA) and a parallel architecture with RBF and BPA (PRBFBPA). The results of the validation are supported with the above-mentioned performance measures, whose values are in the optimal range. The temperatures, combustion quality, SOx, NOx, CO and CO2 concentrations, and air and fuel supplied corresponding to the images were obtained, thereby indicating the necessary control action to increase or decrease the air supply so as to ensure complete combustion. In this work, by continuously monitoring the flame images, combustion quality was inferred (complete/partial/incomplete combustion) and the air/fuel ratio could be automatically varied. Moreover, in the existing set-up, measurements like NOx, CO and CO2 are inferred from samples that are collected periodically or by

  14. Validation of MODIS snow cover images over Austria

    Directory of Open Access Journals (Sweden)

    J. Parajka

    2006-01-01

    Full Text Available This study evaluates the Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover product over the territory of Austria. The aims are (a) to analyse the spatial and temporal variability of the MODIS snow product classes, (b) to examine the accuracy of the MODIS snow product against in situ snow depth data, and (c) to identify the main factors that may influence the MODIS classification accuracy. We use daily MODIS grid maps (version 4) and daily snow depth measurements at 754 climate stations in the period from February 2000 to December 2005. The results indicate that, on average, clouds obscured 63% of Austria, which may significantly restrict the applicability of the MODIS snow cover images to hydrological modelling. On cloud-free days, however, the classification accuracy is very good, with an average of 95%. There is no consistent relationship between the classification errors and dominant land cover type or local topographical variability, but there are clear seasonal patterns to the errors. In December and January the errors are around 15%, while in summer they are less than 1%. This seasonal pattern is related to the overall percentage of snow cover in Austria, although in spring, when there is a well developed snow pack, errors tend to be smaller than they are in early winter for the same overall percent snow cover. Overestimation and underestimation errors balance during most of the year, which indicates little bias. In November and December, however, there appears to be a tendency for overestimation. Part of the errors may be related to the temporal shift between the in situ snow depth measurements (07:00 a.m.) and the MODIS acquisition time (early afternoon). The comparison of daily air temperature maps with MODIS snow cover images indicates that almost all MODIS overestimation errors are caused by the misclassification of cirrus clouds as snow.
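
    A minimal sketch of the kind of pixel-station agreement computation that underlies such accuracy figures, under our own assumptions about class coding and the snow-depth cutoff:

```python
import numpy as np

def modis_agreement(modis_class, station_depth_cm,
                    snow_code=1, cloud_code=2, cutoff_cm=1.0):
    """Agreement on cloud-free observations: MODIS reports snow or
    no-snow, and a station counts as snow-covered when the measured
    depth reaches the cutoff. Class codes are illustrative."""
    valid = modis_class != cloud_code           # drop cloud-obscured cases
    modis_snow = modis_class[valid] == snow_code
    ground_snow = station_depth_cm[valid] >= cutoff_cm
    return np.mean(modis_snow == ground_snow)   # fraction of agreement
```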

  15. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed preliminarily in the macroscopic regions and then analyzed thoroughly in the microscopic regions. The image is divided into regions according to the differing fractal characteristics of the image edges, the fuzzy regions containing image edges are detected, and the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.
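
    The final stage (Sobel edge identification followed by least squares fitting) might be sketched as follows; the fractal pre-selection of fuzzy regions is omitted, and the threshold and polynomial degree are our own choices.

```python
import numpy as np
from scipy import ndimage

def fit_weld_edge(image, edge_threshold=0.3, degree=2):
    """Detect edge pixels with the Sobel operator, then fit them with
    a least-squares polynomial (the LSM step described above)."""
    img = image.astype(float)
    magnitude = np.hypot(ndimage.sobel(img, axis=1),
                         ndimage.sobel(img, axis=0))
    ys, xs = np.nonzero(magnitude > edge_threshold * magnitude.max())
    coeffs = np.polyfit(xs, ys, deg=degree)     # least squares curve fit
    return np.poly1d(coeffs)                    # y(x) along the edge
```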

  16. Precooking as a Control for Histamine Formation during the Processing of Tuna: An Industrial Process Validation.

    Science.gov (United States)

    Adams, Farzana; Nolte, Fred; Colton, James; De Beer, John; Weddig, Lisa

    2018-02-23

    An experiment to validate the precooking of tuna as a control for histamine formation was carried out at a commercial tuna factory in Fiji. Albacore tuna (Thunnus alalunga) were brought on board long-line catcher vessels alive, immediately chilled but never frozen, and delivered to an on-shore facility within 3 to 13 days. These fish were then allowed to spoil at 25 to 30°C for 21 to 25 h to induce high levels of histamine (>50 ppm), as a simulation of "worst-case" postharvest conditions, and subsequently frozen. These spoiled fish later were thawed normally and then precooked at a commercial tuna processing facility to a target maximum core temperature of 60°C. These tuna were then held at ambient temperatures of 19 to 37°C for up to 30 h, and samples were collected every 6 h for histamine analysis. After precooking, no further histamine formation was observed for 12 to 18 h, indicating that a conservative minimum core temperature of 60°C pauses subsequent histamine formation for 12 to 18 h. Using the maximum core temperature of 60°C provided a challenge study to validate a recommended minimum core temperature of 60°C, and 12 to 18 h was sufficient to convert precooked tuna into frozen loins or canned tuna. This industrial-scale process validation study provides support at a high confidence level for the preventive histamine control associated with precooking. This study was conducted with tuna deliberately allowed to spoil to induce high concentrations of histamine and histamine-forming capacity and to fail standard organoleptic evaluations, and the critical limits for precooking were validated. Thus, these limits can be used in a hazard analysis critical control point plan in which precooking is identified as a critical control point.

  17. Risk perception and information processing: the development and validation of a questionnaire to assess self-reported information processing.

    Science.gov (United States)

    Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K

    2012-01-01

    The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire of information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition after which they completed a manipulation check and the initial 15-item questionnaire and again two weeks later. The questionnaire was subjected to factor reliability and validity analyses on both measurement times for purposes of cross-validation of the results. A two-factor solution was observed representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities. © 2011 Society for Risk Analysis.

  18. Algorithms of image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Oliveira, V.A.

    1990-01-01

    The problem of image restoration from noisy measurements as encountered in Nuclear Medicine is considered. A new approach for treating the measurements wherein they are represented by a spatial noncausal interaction model prior to maximum entropy restoration is given. This model describes the statistical dependence among the image values and their neighbourhood. The particular application of the algorithms presented here relates to gamma ray imaging systems, and is aimed at improving the resolution-noise suppression product. Results for actual gamma camera data are presented and compared with more conventional techniques. (author)

  19. Experimental validation of incomplete data CT image reconstruction techniques

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Hsiao, M.L.; Tam, K.C.

    1989-01-01

    X-ray CT inspection of large metal parts is often limited by x-ray penetration problems along many of the ray paths required for a complete CT data set. In addition, because of the complex geometry of many industrial parts, manipulation difficulties often prevent scanning over some range of angles. CT images reconstructed from these incomplete data sets contain a variety of artifacts which limit their usefulness in part quality determination. Over the past several years, the authors' company has developed 2 new methods of incorporating a priori information about the parts under inspection to significantly improve incomplete data CT image quality. This work reviews the methods which were developed and presents experimental results which confirm the effectiveness of the techniques. The new methods for dealing with incomplete CT data sets rely on a priori information from part blueprints (in electronic form), outer boundary information from touch sensors, estimates of part outer boundaries from available x-ray data, and linear x-ray attenuation coefficients of the part. The two methods make use of this information in different fashions. The relative performance of the two methods in detecting various flaw types is compared. Methods for accurately registering a priori information with x-ray data are also described. These results are critical to a new industrial x-ray inspection cell built for inspection of large aircraft engine parts

  20. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

    Full Text Available Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps, like data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conform format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conform services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time-critical applications for security

  1. Processing, Cataloguing and Distribution of Uas Images in Near Real Time

    Science.gov (United States)

    Runkel, I.

    2013-08-01

    Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps, like data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conform format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conform services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time-critical applications for security demands - the images

  2. The method validation step of biological dosimetry accreditation process

    International Nuclear Information System (INIS)

    Roy, L.; Voisin, P.A.; Guillou, A.C.; Busset, A.; Gregoire, E.; Buard, V.; Delbos, M.; Voisin, Ph.

    2006-01-01

    One of the missions of the Laboratory of Biological Dosimetry (L.D.B.) of the Institute for Radiation and Nuclear Safety (I.R.S.N.) is to assess the radiological dose after a suspected accidental overexposure to ionising radiation, by using radiation-induced changes in certain biological parameters. The 'gold standard' is the yield of dicentrics observed in the patient's lymphocytes, and this yield is converted into dose using dose-effect relationships. This method is complementary to clinical and physical dosimetry for the medical team in charge of the patients. To obtain formal recognition of its operational activity, the laboratory decided three years ago to seek accreditation, following the recommendations of both ISO/IEC 17025 (General Requirements for the Competence of Testing and Calibration Laboratories) and ISO 19238 (Performance criteria for service laboratories performing biological dosimetry by cytogenetics). Diagnostics and risk analyses were carried out to control the whole analysis process, leading to the writing of documents. Purchasing, the personnel department and vocational training were also included in the quality system. Audits were very helpful in improving the quality system. One specificity of this technique is that it is not standardized; therefore, apart from quality management aspects, several technical points needed validation. An inventory of potentially influential factors was carried out. To estimate their real effect on the yield of dicentrics, a Plackett-Burman experimental design was conducted. The effect of seven parameters was tested: the BUdr (bromodeoxyuridine), PHA (phytohemagglutinin) and colcemid concentrations, the culture duration, the incubator temperature, the blood volume and the medium volume. The chosen values were calculated according to the uncertainties in the way they were measured, i.e. pipettes, thermometers, test tubes. None of the factors has a significant impact on the yield of dicentrics. Therefore the uncertainty linked to their use was considered as
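
    For readers unfamiliar with the design, the sketch below lays out an 8-run Plackett-Burman matrix for seven two-level factors and computes each factor's effect as the difference of mean responses at the high and low levels. The significance test applied in the study is not reproduced here.

```python
import numpy as np

# 8-run Plackett-Burman design for seven factors (e.g. BUdr, PHA and
# colcemid concentrations, culture duration, incubator temperature,
# blood volume, medium volume): cyclic shifts of a generating row of
# +1/-1 levels, plus a final all-low run.
generator = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.vstack([np.roll(generator, i) for i in range(7)]
                   + [-np.ones(7, dtype=int)])

def pb_effects(design, responses):
    """Effect of each factor: mean response at +1 minus mean at -1."""
    return np.array([responses[design[:, j] == 1].mean()
                     - responses[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])
```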

  3. The method validation step of biological dosimetry accreditation process

    Energy Technology Data Exchange (ETDEWEB)

    Roy, L.; Voisin, P.A.; Guillou, A.C.; Busset, A.; Gregoire, E.; Buard, V.; Delbos, M.; Voisin, Ph. [Institut de Radioprotection et de Surete Nucleaire, LDB, 92 - Fontenay aux Roses (France)

    2006-07-01

    One of the missions of the Laboratory of Biological Dosimetry (L.D.B.) of the Institute for Radiation and Nuclear Safety (I.R.S.N.) is to assess the radiological dose after a suspected accidental overexposure to ionising radiation, by using radiation-induced changes in certain biological parameters. The 'gold standard' is the yield of dicentrics observed in the patient's lymphocytes, and this yield is converted into dose using dose-effect relationships. This method is complementary to clinical and physical dosimetry for the medical team in charge of the patients. To obtain formal recognition of its operational activity, the laboratory decided three years ago to seek accreditation, following the recommendations of both ISO/IEC 17025 (General Requirements for the Competence of Testing and Calibration Laboratories) and ISO 19238 (Performance criteria for service laboratories performing biological dosimetry by cytogenetics). Diagnostics and risk analyses were carried out to control the whole analysis process, leading to the writing of documents. Purchasing, the personnel department and vocational training were also included in the quality system. Audits were very helpful in improving the quality system. One specificity of this technique is that it is not standardized; therefore, apart from quality management aspects, several technical points needed validation. An inventory of potentially influential factors was carried out. To estimate their real effect on the yield of dicentrics, a Plackett-Burman experimental design was conducted. The effect of seven parameters was tested: the BUdr (bromodeoxyuridine), PHA (phytohemagglutinin) and colcemid concentrations, the culture duration, the incubator temperature, the blood volume and the medium volume. The chosen values were calculated according to the uncertainties in the way they were measured, i.e. pipettes, thermometers, test tubes. None of the factors has a significant impact on the yield of dicentrics. Therefore the uncertainty linked to their use was

  4. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available The article shows an implementation of the grayscale-morphology vHGW (van Herk/Gil-Werman) algorithm for border selection in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images of different resolutions and different sizes of the structuring element.
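
    For reference, a plain single-threaded sketch of the vHGW running maximum in one dimension, the building block of the grayscale dilation used in such work; the OpenMP/CUDA parallelisation is not shown. A morphological gradient for border selection would then be dilation minus erosion (erosion is the same recursion with minima).

```python
import numpy as np

def vhgw_max_1d(signal, k):
    """van Herk/Gil-Werman running maximum over a centred window of
    odd length k, using a constant number of comparisons per sample
    regardless of k."""
    s = np.asarray(signal, dtype=float)
    n, r = s.size, k // 2
    s = np.concatenate([s, np.full((-n) % k, -np.inf)])   # pad to block multiple
    blocks = s.reshape(-1, k)
    g = np.maximum.accumulate(blocks, axis=1).ravel()     # max from block start
    h = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # max to block end
    out = np.empty(n)
    for i in range(n):
        left = h[i - r] if i - r >= 0 else -np.inf
        right = g[i + r] if i + r < g.size else -np.inf
        out[i] = max(left, right)                         # window max = max(h, g)
    return out
```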

  5. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today. ... Images were processed using ImageJ, an open-source, Java-based image processing software available from the National Institutes of Health (NIH). The

  6. An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  7. Use of a personal computer for image processing of a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software for image processing was written in Macro-Assembler under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskettes. Image processing operations, such as display of the image on a monitor, contrast enhancement, unsharp-mask contrast enhancement, various filter processes, edge detection and colour histogram computation, were completed in 1.6 to 67 seconds, indicating that a commercial personal computer is capable of routine clinical MRI processing. (author)
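
    Of the operations listed, unsharp-mask contrast enhancement is the easiest to sketch; the radius and amount parameters below are illustrative, not those used in the study.

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Unsharp masking: subtract a Gaussian-blurred copy to isolate
    detail, then add the detail back scaled by `amount`."""
    img = image.astype(float)
    detail = img - ndimage.gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * detail, 0, 255)
```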

  8. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  9. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  10. GATE simulation of a LYSO-based SPECT imager: Validation and detector optimization

    International Nuclear Information System (INIS)

    Li, Suying; Zhang, Qiushi; Xie, Zhaoheng; Liu, Qi; Xu, Baixuan; Yang, Kun; Li, Changhui; Ren, Qiushi

    2015-01-01

    This paper presents a small-animal SPECT system based on a cerium-doped lutetium-yttrium oxyorthosilicate (LYSO) scintillation crystal, position-sensitive photomultiplier tubes (PSPMTs) and a parallel-hole collimator. A spatial resolution test and an animal experiment were performed to demonstrate the imaging performance of the detector. Preliminary results indicated a spatial resolution of 2.5 mm FWHM, which could not meet our design requirement. Therefore, we simulated this gamma camera using GATE (GEANT4 Application for Tomographic Emission), aiming to make the detector spatial resolution less than 2 mm. First, the GATE simulation process was validated through comparison between simulated and experimental data; this also indicates the accuracy and effectiveness of GATE simulation for a LYSO-based gamma camera. Then different detector sampling methods (crystal sizes of 1.5 and 1 mm) and collimator designs (collimator heights of 30, 34.8, 38, and 43 mm) were studied to determine an optimized parameter set. Changes in detector sensitivity with the different parameter sets, which produced different spatial resolution results, were also examined. Tradeoff curves of spatial resolution and sensitivity were plotted to determine the optimal collimator height for each sampling method. Simulation results show that a scintillation crystal size of 1 mm and a collimator height of 38 mm, which yield a spatial resolution of ∼1.8 mm and a sensitivity of ∼0.065 cps/kBq, form an ideal configuration for our SPECT imager design.

  11. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse image data quantitatively. Generally, image analysis has to be done visually and measurements are made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. An image processing program can be used for analysis of image texture and periodic structure through application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and analysis of microscopic images.
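
    A minimal sketch of the Fourier-based periodicity analysis described above: estimate the dominant orientation and spatial frequency of a micrograph from the peak of its 2-D power spectrum. The simple peak-picking is our own simplification.

```python
import numpy as np

def dominant_orientation(image):
    """Dominant periodic orientation and frequency from the peak of
    the 2-D power spectrum (DC term suppressed)."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    power[cy, cx] = 0.0                              # suppress residual DC
    py, px = np.unravel_index(np.argmax(power), power.shape)
    angle_deg = np.degrees(np.arctan2(py - cy, px - cx))
    cycles_per_image = np.hypot(py - cy, px - cx)
    return angle_deg, cycles_per_image
```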

  12. The development and validation of the Body-Image Ideals Questionnaire.

    Science.gov (United States)

    Cash, T F; Szymanski, M L

    1995-06-01

    The Body-Image Ideals Questionnaire (BIQ) was developed as a unique attitudinal body-image assessment that considers one's perceived discrepancy from and degree of investment in personal ideals on multiple physical attributes. Reliability and validity of the 20-item instrument were examined for a sample of 284 college women. The results indicated that the BIQ consists of two relatively distinct and internally consistent Discrepancy and Importance subscales, as well as their multiplicative composite. The subscales' respective convergent validities vis-à-vis extant body-image measures and specific facets of personality (i.e., public self-consciousness and perfectionism) and psychosocial adjustment (i.e., social anxiety, depression, and eating disturbance) were confirmed. Evidence also supported the incremental validity of multiple self-ideal discrepancies. Effects due to socially desirable responding were inconsequential. Directions for needed basic and clinical research were identified.

  13. Apparatus and method for X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)
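
    The logarithmic conversion is the key step: attenuation along a ray is proportional to -log(I/I0), so taking logarithms before subtraction makes the subtraction image independent of the incident intensity. A one-function sketch, with names of our own choosing:

```python
import numpy as np

def log_transform(signal, i0):
    """Convert a transmission signal to attenuation: -log(I / I0).
    The clip guards against zero-valued detector readings."""
    return -np.log(np.clip(signal, 1e-6, None) / i0)
```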

  14. Multi-institutional Quantitative Evaluation and Clinical Validation of Smart Probabilistic Image Contouring Engine (SPICE) Autosegmentation of Target Structures and Normal Tissues on Computer Tomography Images in the Head and Neck, Thorax, Liver, and Male Pelvis Areas

    DEFF Research Database (Denmark)

    Zhu, Mingyao; Bzdusek, Karl; Brink, Carsten

    2013-01-01

    Clinical validation and quantitative evaluation of computed tomography (CT) image autosegmentation using the Smart Probabilistic Image Contouring Engine (SPICE).

  15. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing communities.

  16. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  17. Statistical processing of large image sequences.

    Science.gov (United States)

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
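
    As a drastically reduced illustration of the Kalman-emulating idea (and only that; the paper's contribution is precisely the spatial modelling this sketch ignores), here is a per-pixel scalar Kalman recursion over a frame sequence, with variances chosen arbitrarily:

```python
import numpy as np

def pixelwise_kalman(frames, process_var=1e-3, meas_var=1e-1):
    """Independent scalar Kalman filter at every pixel: random-walk
    state model, direct noisy observation of each frame."""
    est = frames[0].astype(float)
    p = np.full(est.shape, meas_var)        # initial error variance
    for z in frames[1:]:
        p = p + process_var                 # predict
        k = p / (p + meas_var)              # Kalman gain
        est = est + k * (z - est)           # update with new frame
        p = (1.0 - k) * p
    return est
```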

  18. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    Science.gov (United States)

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech, Hearing and Language Pathology. The recommendations were based on international guidelines, with a focus on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation of tests in Speech, Hearing and Language Pathology was created.

  19. Production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose

    International Nuclear Information System (INIS)

    Cantero, Miguel; Iglesias, Rocio; Aguilar, Juan; Sau, Pablo; Tardio, Evaristo; Narrillos, Marcos

    2003-01-01

    The main aim of validating the production process of 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) was to check that: A) the equipment and services involved in the production process were correctly installed, well documented, and worked properly; and B) the production of FDG was carried out in a repetitive way according to predefined parameters. The main document was the Validation Master Plan, and the steps were installation qualification, operation qualification, process qualification and the validation report. After finalization of all tests established in the qualification steps without deviations, we concluded that the production process is validated, because it is carried out in a repetitive way according to predefined parameters. (author)

  20. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance real-time image processing hardware platform, is systematized. Image flow example programs for running the MaxSparc system are studied and analyzed, and the state of the art of Datacube system utilization is reviewed. For the next phase, an advanced real-time image processing platform for robot vision applications is going to be developed. (author). 19 refs., 71 figs., 11 tabs.

  1. Automated processing of X-ray images in medicine

    International Nuclear Information System (INIS)

    Babij, Ya.S.; B'yalyuk, Ya.O.; Yanovich, I.A.; Lysenko, A.V.

    1991-01-01

    Theoretical and practical achievements in the application of computing technology to the processing of X-ray images in medicine are summarized. A scheme of the main directions and tasks of X-ray image processing is given and analyzed. The principal problems arising in automated processing of X-ray images are identified. It is shown that, for the interpretation of X-ray images, it is expedient to introduce the notion of a relative operating characteristic (ROC) of the roentgenologist: every point on the ROC curve represents the individual criterion of the roentgenologist for making a positive diagnosis in a given situation.

  2. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is introduced. Since the AGV guidance image contains considerable noise, the image is first smoothed by statistical sorting. Because images obtained with the AGV sampling method have different optimal threshold segmentation points, the method of two-dimensional maximum entropy image segmentation is used to solve this problem. We extract the foreground image in the target band by calculating contour areas and obtain the centre line with a least squares fitting algorithm. With the help of image and physical coordinates, we can obtain the guidance information.
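
    The paper uses a two-dimensional maximum entropy segmentation; the one-dimensional (Kapur-style) version below conveys the core idea, choosing the threshold that maximizes the summed entropies of foreground and background, and is our own simplification:

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur-style maximum-entropy threshold for an 8-bit image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1     # class-conditional distributions
        h = (-np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
             - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```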

  3. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise; thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove....../reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the centroid location of an image object computed with sub-pixel accuracy in each frame....... We can measure drifts down to approximately 10 nm in size, and a drift-compensated image can therefore be reconstructed on a grid of the same size using the "Shift and Add" approach, leading to an image of identical size to the individual images. We have also reconstructed the image using a 3-fold larger
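
    A minimal sketch of the centroid-based "Shift and Add" reconstruction described above (on the original grid, without the 3-fold finer one); interpolation settings and names are our own choices:

```python
import numpy as np
from scipy import ndimage

def shift_and_add(frames):
    """Drift-compensated average: locate the object centroid in every
    frame with sub-pixel accuracy, shift each frame so the centroids
    coincide with the first frame, and average."""
    ref = None
    accum = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        cy, cx = ndimage.center_of_mass(frame)          # sub-pixel centroid
        if ref is None:
            ref = (cy, cx)
        accum += ndimage.shift(frame.astype(float),
                               (ref[0] - cy, ref[1] - cx), order=1)
    return accum / len(frames)
```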

  4. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  5. Some results of automatic processing of images

    International Nuclear Information System (INIS)

    Golenishchev, I.A.; Gracheva, T.N.; Khardikov, S.V.

    1975-01-01

    The problems of automatic deciphering of radiographic pictures, the purpose of which is to draw a conclusion about the quality of the inspected product on the basis of the images of product defects in the picture, are considered. The methods of defect image recognition are listed, and the algorithms and class features of defects are described. The results of deciphering a small radiographic picture by means of the "Minsk-22" computer are presented. It is established that the sensitivity of the automatic deciphering method is close to that obtained by visual deciphering.

  6. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang , Aiguo; Luo , Chagen; Zhou , Chao

    2010-01-01

    Cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to pavement crack detection for its advantages of high information content and automatic detection. The applications of digital image processing in pavement crack detection, distress classificati...

  7. Bayesian image processing in two and three dimensions

    International Nuclear Information System (INIS)

    Hart, H.; Liang, Z.

    1986-01-01

    Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different "focal" depths. Bayesian image processing (BIP) was carried out for two- and three-dimensional spatially uncorrelated discrete-amplitude a priori source distributions.

  8. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

    The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes optimized image algorithms and a parallel processing method based on this parallel DSP. Real-time performance for many image processing tasks can be achieved in this way.

  9. IDAPS (Image Data Automated Processing System) System Description

    Science.gov (United States)

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  10. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    To improve the learning process from the input data, a new learning rule was suggested. In this paper, a new method is proposed to deal with the RGB color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of ...

  11. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  12. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  13. An Overview of Pharmaceutical Validation and Process Controls in ...

    African Journals Online (AJOL)

    It has always been known that the processes involved in pharmaceutical production impact significantly on the quality of the products. The processes include raw material and equipment inspections as well as in-process controls. Process controls are mandatory in good manufacturing practice (GMP). The purpose is to ...

  14. Validity and Reliability of Revised Inventory of Learning Processes.

    Science.gov (United States)

    Gadzella, B. M.; And Others

    The Inventory of Learning Processes (ILP) was developed by Schmeck, Ribich, and Ramanaiah in 1977 as a self-report inventory to assess learning style through a behavioral-oriented approach. The ILP was revised by Schmeck in 1983. The Revised ILP contains six scales: (1) Deep Processing; (2) Elaborative Processing; (3) Shallow Processing; (4)…

  15. Efficient morphological tools for astronomical image processing

    NARCIS (Netherlands)

    Moschini, Ugo

    2016-01-01

    Nowadays, many applications rely on a huge quantity of images at high resolution and with high quantity of information per pixel, due either to the technological improvements of the instruments or to the type of measurement observed. This thesis is focused on exploring and developing tools and new

  16. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Science.gov (United States)

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.

  18. [DESIGN AND VALIDATION OF AN IMAGE FOR DISSEMINATION AND IMPLEMENTATION OF CHILEAN DIETARY GUIDELINES].

    Science.gov (United States)

    Olivares Cortés, Sonia; Zacarías Hasbún, Isabel; González González, Carmen Gloria; Fonseca Morán, Lilian; Mediano Stoltze, Fernanda; Pinheiro Fernandes, Anna Christina; Rodríguez Osiac, Lorena

    2015-08-01

    Food-Based Dietary Guidelines (FBDG) are usually accompanied by an image for dissemination and implementation. The aim was to design and validate an image to represent the variety and proportions of the new Chilean dietary guidelines, include foods high in critical nutrients that should be avoided, and include physical activity guidelines. A panel of experts tested seven graphics and selected three, which were validated with 12 focus groups of people aged 10-14 and 20-40 years, of both sexes, from different socioeconomic groups and from both rural and urban areas. We analyzed the perception of variety and proportions of the food groups for daily intake and motivation for action in diet and physical activity. We utilized the METAPLAN method used previously in the validation of FBDG. The final image was a circle that showed the variety and proportions of each food group for daily consumption (in pictures), included physical activity guidelines in a strip around the middle of the circle, and included a rectangle toward the bottom of the image with examples of foods high in critical nutrients in black and white. The chosen picture was modified using input from participants and validated with three additional focus groups, improving its understanding and acceptance. Most participants understood that the image represented the relationship between healthy eating and daily physical activity, correctly identifying the food groups for which increased intake was suggested and those groups in which intake should be reduced or avoided. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  19. Creation and Validation of the Self-esteem/Self-image Female Sexuality (SESIFS) Questionnaire.

    Science.gov (United States)

    Lordello, Maria Co; Ambrogini, Carolina C; Fanganiello, Ana L; Embiruçu, Teresa R; Zaneti, Marina M; Veloso, Laise; Piccirillo, Livia B; Crude, Bianca L; Haidar, Mauro; Silva, Ivaldo

    2014-01-01

    Self-esteem and self-image are psychological aspects that affect sexual function. To validate a new measurement tool that correlates the concepts of self-esteem, self-image, and sexuality, a 20-question test (the self-esteem/self-image female sexuality [SESIFS] questionnaire) was created and tested on 208 women. Participants answered Rosenberg's self-esteem scale, the female sexual quotient (FSQ), and the SESIFS questionnaire. Pearson's correlation coefficient was used to test concurrent validity of the SESIFS against Rosenberg's self-esteem scale and the FSQ. Reliability was tested using the Cronbach's alpha coefficient. The new questionnaire had good overall reliability (Cronbach's alpha = 0.862), but concurrent validity across the self-esteem, self-image, and sexuality domains was weak (e.g., self-esteem domain r = 0.32). A new, revised version is being tested and will be presented in an upcoming publication.

  20. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration

    OpenAIRE

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L.

    2016-01-01

    We created a metastasis imaging and analysis platform consisting of software and a multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging, which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright-field and molecular fluorescence and enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register ...

  1. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained so that the quality of the output image produced from the unprocessed input approached that of the ground-truth image. For the image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
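
    A minimal sketch of how such CLAHE ground-truth targets could be generated, assuming scikit-image's equalize_adapthist as a stand-in for the paper's unspecified CLAHE implementation; the clip limit is an illustrative choice:

        # Sketch: build a (noisy input, CLAHE target) training pair for one frame.
        # scikit-image and the clip limit are assumptions, not the authors' setup.
        import numpy as np
        from skimage import exposure

        def make_training_pair(raw_frame, clip_limit=0.01):
            """Return (input, CLAHE ground-truth) for one fluoroscopic frame."""
            frame = raw_frame.astype(np.float64)
            frame = (frame - frame.min()) / (np.ptp(frame) + 1e-12)  # scale to [0, 1]
            target = exposure.equalize_adapthist(frame, clip_limit=clip_limit)
            return frame, target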

  2. Super-resolution for everybody: An image processing workflow to obtain high-resolution images with a standard confocal microscope.

    Science.gov (United States)

    Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne

    2017-02-15

    In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that improves the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters, and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising, and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
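
    As an illustration of the deconvolution step, a slice-by-slice Richardson-Lucy sketch using scikit-image; the abstract does not name the two algorithms the authors compared, so this is only a representative choice:

        # Sketch: Richardson-Lucy deconvolution applied plane by plane to a
        # confocal z-stack, given a measured or modelled PSF. Iteration count
        # is illustrative; the authors' algorithms and settings are unspecified.
        import numpy as np
        from skimage import restoration

        def deconvolve_stack(stack, psf, iterations=30):
            """stack: (z, y, x) confocal volume; psf: 2D point spread function."""
            return np.stack([
                restoration.richardson_lucy(plane, psf, iterations)
                for plane in stack
            ])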

  3. Signal validation with control-room information-processing computers

    International Nuclear Information System (INIS)

    Belblidia, L.A.; Carlson, R.W.; Russell, J.L. Jr.

    1985-01-01

    One of the 'lessons learned' from the Three Mile Island accident focuses upon the need for a validated source of plant-status information in the control room. The utilization of computer-generated graphics to display the readings of the major plant instrumentation has introduced the capability of validating signals prior to their presentation to the reactor operations staff. Current operating philosophy allows the operator a quick look at the gauges to form an impression of the fraction of full scale, which serves as the basis for knowledge of current plant conditions. After the introduction of a computer-based information-display system such as the Safety Parameter Display System (SPDS), operational decisions can be based upon precise knowledge of the parameters that define the operation of the reactor and auxiliary systems. The principal impact of this system on the operator will be to remove the continuing concern for the validity of the instruments which provide the information that governs the operator's decisions. (author)

  4. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
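
    The core idea of predicting interpolated pixels with Gaussian process regression can be sketched as follows, with scikit-learn as an assumed stand-in and the paper's energy-computation term omitted:

        # Sketch: predict sub-pixel intensities in a patch from the known
        # low-resolution pixels via GP regression. Kernel and length scale are
        # illustrative; the paper's energy-driven component is not reproduced.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def gpr_interpolate(patch, query_xy):
            """patch: 2D array of known pixels; query_xy: (N, 2) sub-pixel coords."""
            h, w = patch.shape
            coords = np.array([(y, x) for y in range(h) for x in range(w)], float)
            values = patch.ravel().astype(float)
            gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                           normalize_y=True)
            gpr.fit(coords, values)                 # condition on known pixels
            return gpr.predict(np.asarray(query_xy, dtype=float))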

  5. Graphical user interface for image acquisition and processing

    Science.gov (United States)

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  6. Real-time SPARSE-SENSE cardiac cine MR imaging: optimization of image reconstruction and sequence validation.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-12-01

    Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded comparable volumetric results as the current standard SSFP sequence. Due to its intrinsically low image acquisition time, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded comparable volumetric results as the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.

  7. Image processing: volumetric analysis with a digital image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections describing various applications of volumetric analysis using image processing systems, and various methods for the diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterization, and echocardiography. A dynamic ventricular phantom is explained that was developed for checking and calibration to ensure safe examination of patients, the phantom allowing extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, as well as a standardized nuclear medicine diagnostic system and a computer system especially suited to image processing.

  8. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction, sketched below), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
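
    The flatfield correction mentioned above follows a standard recipe: subtract a dark frame, then divide by the normalized illumination pattern. A minimal NumPy sketch, with illustrative variable names:

        # Sketch: flatfield correction of a microscopy image.
        # raw  = acquired image, flat = image of a blank (evenly lit) field,
        # dark = optional detector dark frame. Names are illustrative.
        import numpy as np

        def flatfield_correct(raw, flat, dark=None):
            raw = raw.astype(np.float64)
            flat = flat.astype(np.float64)
            if dark is not None:
                raw = raw - dark
                flat = flat - dark
            gain = flat / flat.mean()              # normalized illumination pattern
            return raw / np.clip(gain, 1e-6, None) # avoid division by ~zero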

  9. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, one of the intermediate levels in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phases comprise image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach and show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
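
    A minimal sketch of marker-controlled watershed segmentation of a (normalized) CT slice, assuming scikit-image and SciPy; the thresholds and marker rules are illustrative, not the authors' tuned values:

        # Sketch: marker-controlled watershed on one CT slice scaled to [0, 1].
        # Thresholds below are illustrative assumptions.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_slice(ct_slice, low=0.2, high=0.6):
            edges = sobel(ct_slice)                  # elevation map for watershed
            markers = np.zeros_like(ct_slice, dtype=int)
            markers[ct_slice < low] = 1              # background marker
            markers[ct_slice > high] = 2             # candidate-tissue marker
            labels = watershed(edges, markers)
            return ndi.binary_fill_holes(labels == 2)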

  10. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  11. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Particularly, registering satellite images is challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
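
    The block-selection idea can be sketched as tiling the image and keeping only high-entropy blocks for registration; the block size and threshold below are illustrative assumptions:

        # Sketch: keep only information-rich tiles, so that feature matching is
        # run on a fraction of a large satellite image.
        from skimage.measure import shannon_entropy

        def high_entropy_blocks(image, block=64, threshold=5.0):
            keep = []
            for y in range(0, image.shape[0] - block + 1, block):
                for x in range(0, image.shape[1] - block + 1, block):
                    tile = image[y:y + block, x:x + block]
                    if shannon_entropy(tile) > threshold:
                        keep.append((y, x))
            return keep  # offsets of blocks worth registering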

  12. Diagnosis of skin cancer using image processing

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law nonlinear Fourier technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and therefore this channel is always chosen. This information is multiplied point by point with a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value is in the spectral index range, three types of cancer can be detected. Values found outside this range are benign lesions.
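
    A sketch of the spectral index described above, assuming a K-law nonlinearity of the form |F|^k * exp(i*phase); the exact form and the value of k used by the authors are not given in the abstract:

        # Sketch: spectral index of a masked green channel via a K-law spectrum.
        # k = 0.3 is an illustrative assumption.
        import numpy as np

        def spectral_index(green_channel, mask, k=0.3):
            roi = green_channel.astype(float) * mask       # point-to-point product
            spectrum = np.fft.fft2(roi)
            klaw = np.abs(spectrum) ** k * np.exp(1j * np.angle(spectrum))
            density = (klaw.real > 0).astype(float)        # unit values where Re > 0
            return density.sum() / mask.sum()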

  13. Production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose

    International Nuclear Information System (INIS)

    Cantero, Miguel; Iglesias, Rocio; Aguilar, Juan; Sau, Pablo; Tardio, Evaristo; Narrillos, Marcos

    2003-01-01

    The aim of the production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) was to check that: A) the equipment and services implicated in the production process were correctly installed, well documented, and worked properly, and B) the production of FDG was done in a repetitive way according to predefined parameters. The main document was the Validation Master Plan, and the steps were: installation qualification, operational qualification, performance qualification, and the final validation report. After finalization of all tests established in the qualification steps without deviations, we concluded that the production process was validated because it consistently produced FDG meeting its pre-determined specifications and quality characteristics. (author)

  14. Detection and Correction of Under-/Overexposed Optical Soundtracks by Coupling Image and Audio Signal Processing

    Directory of Open Access Journals (Sweden)

    Etienne Decenciere

    2008-10-01

    Full Text Available Film restoration using image processing has been an active research field in recent years. However, the restoration of the soundtrack has mainly been performed in the sound domain using signal processing methods, despite the fact that it is recorded as a continuous image between the images of the film and the perforations. While the very few published approaches focus on removing dust particles or concealing larger corrupted areas, no published works are devoted to the restoration of soundtracks degraded by substantial underexposure or overexposure. Digital restoration of optical soundtracks is an unexploited application field and, besides, scientifically rich, because it allows mixing both image and signal processing approaches. After introducing the principles of optical soundtrack recording and playback, this contribution focuses on our first approaches to detect and cancel the effects of under- and overexposure. We intentionally choose to quantify the effect of bad exposure in the 1D audio signal domain instead of the 2D image domain. Our measurement is sent as a feedback value to an image processing stage where the correction takes place, building up a closed-loop process combining digital image and audio signal processing. The approach is validated on both simulated alterations and real data.

  15. Signal and Image Processing Research at the Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R S; Poyneer, L A; Kegelmeyer, L M; Carrano, C J; Chambers, D H; Candy, J V

    2009-06-29

    Lawrence Livermore National Laboratory is a large, multidisciplinary institution that conducts fundamental and applied research in the physical sciences. Research programs at the Laboratory run the gamut from theoretical investigations, to modeling and simulation, to validation through experiment. Over the years, the Laboratory has developed a substantial research component in the areas of signal and image processing to support these activities. This paper surveys some of the current research in signal and image processing at the Laboratory. Of necessity, the paper does not delve deeply into any one research area, but an extensive citation list is provided for further study of the topics presented.

  16. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained with the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing object information. The ensemble averages, the autocorrelation functions, and the Wiener spectral densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)

  17. Image processing tensor transform and discrete tomography with Matlab

    CERN Document Server

    Grigoryan, Artyom M

    2012-01-01

    Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere. New co

  18. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  19. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

    Image processing techniques, including pattern recognition techniques, play important roles in high-precision diagnosis and radiation therapy. The author reviews a symposium on medical image information, which was held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics from September 23rd to 25th. In this symposium, we had three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field, and the talks of the three invited speakers. (author)

  20. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a limited spatial resolution of the pixels, based on the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system's pixel size, and the pixel size cannot be made smaller in the hardware, alternative means of resolution enhancement need to be considered. Oversampling with the use of multiple displaced images, where the pixels of all images are mapped to a final resolution-enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines with the number of images taken: the sub-pixel resolution increases, but relative to a real reduction of imaging pixel sizes yielding a full-resolution image, the perceived resolution of the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is by sharpening the images in post-processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of suitable post-processing filters, specifically for digital X-ray images. The results show that spatial domain filters and frequency domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (the oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increase by using sharpening
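
    As an example of such a sharpening filter, an unsharp mask applied to the oversampled image; the paper's specific spatial- and frequency-domain filters are not named here, and scikit-image is an assumed implementation with illustrative parameters:

        # Sketch: sharpen an oversampled, blurred image with an unsharp mask to
        # counteract the blurring introduced by multi-image oversampling.
        from skimage.filters import unsharp_mask

        def sharpen_oversampled(image, radius=2.0, amount=1.5):
            """radius ~ blur scale to remove; amount ~ sharpening strength."""
            return unsharp_mask(image, radius=radius, amount=amount)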

  1. Development and validation of a psychometric scale for assessing PA chest image quality: A pilot study

    International Nuclear Information System (INIS)

    Mraity, H.; England, A.; Akhtar, I.; Aslam, A.; De Lange, R.; Momoniat, H.; Nicoulaz, S.; Ribeiro, A.; Mazhir, S.; Hogg, P.

    2014-01-01

    Purpose: To develop and validate a psychometric scale for assessing image quality perception for chest X-ray images. Methods: Bandura's theory was used to guide scale development. A review of the literature was undertaken to identify items/factors which could be used to evaluate image quality using a perceptual approach. A draft scale was then created (22 items) and presented to a focus group (student and qualified radiographers). Within the focus group the draft scale was discussed and modified. A series of seven postero-anterior chest images were generated using a phantom with a range of image qualities. Image quality perception was confirmed for the seven images using the signal-to-noise ratio (SNR 17.2–36.5). Participants (student and qualified radiographers and radiology trainees) were then invited to independently score each of the seven images using the draft image quality perception scale. Cronbach's alpha was used to test internal reliability. Results: Fifty-three participants used the scale to grade image quality perception on each of the seven images. The aggregated mean scale score increased with increasing SNR from 42.1 to 87.7 (r = 0.98, P < 0.001). For each of the 22 individual scale items there was clear differentiation of low, mid and high quality images. A Cronbach's alpha coefficient of >0.7 was obtained across each of the seven images. Conclusion: This study represents the first development of a chest image quality perception scale based on Bandura's theory. There was excellent correlation between the image quality perception scores derived using the scale and the SNR. Further research will involve a more detailed item and factor analysis
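
    Cronbach's alpha, used above to test internal reliability, is straightforward to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A standard implementation:

        # Cronbach's alpha for a (respondents x items) matrix of ratings.
        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_respondents, n_items) array of item ratings."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
            return k / (k - 1) * (1 - item_var / total_var)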

  2. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  4. Validation of a pulsed electric field process to pasteurize strawberry puree

    Science.gov (United States)

    An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...

  5. Characterization of snow, ice and neve by image processing

    International Nuclear Information System (INIS)

    Gay, Michel

    1999-01-01

    It is now recognized that human activities, by the extent they have reached since the industrial era, are likely to alter the Earth's climate (IPCC, 1996). Paleoclimate records and climate change models show that the polar caps are particularly sensitive to global climate change and are likely to play an important but still poorly understood role in sea level. The positive term of the mass balance of the polar ice sheets is the accumulation of snow, whereas the negative term is the flow of ice into the oceans. The size of the polar ice caps and their hostile environment limit the amount of available field data. Only satellite remote sensing is able to provide information on geographical scales as large as Antarctica or the Arctic and allows regular monitoring over time. But to be easily interpreted, in order to deduce the snowpack characteristics observed from space (size and shape of grains, surface roughness...), satellite data should be validated and inverted using simplified parameters. Prior to the establishment of these relations, it was necessary to develop a snow reflectance model (thesis of C. Leroux, 1996) taking into account the physical and optical characteristics of the snow, and a microwave emissivity model (thesis of S. Surdyck, 1993) that provides volume information on the morphology of the snowpack. The snowpack is characterized by several physical parameters that depend on depth: temperature, density, and mainly the size and shape of the grains. It is therefore essential to establish a robust and simple parameterization of the size and shape of snow grains from their observation. Image processing makes it possible to establish these relationships and allows automatic processing of a large number of data independent of the observer. Another glaciological problem of the firn is the interpretation of data obtained from the analysis of air bubbles trapped in the ice. This study requires, in particular, dating the ice in the firn at close-off, which is necessary to determine the age of

  6. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes of the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational mesh were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis, surgery planning with haptics

  7. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    Science.gov (United States)

    2015-09-03

    Sensor (SeaWiFS), Moderate Resolution Imaging Spectrometers (MODIS on Aqua), and Medium Resolution Imaging Spectrometer (MERIS). This VTR documents ... the MOBY site. 3.2.1 Image to Image Comparison: The AOPS v4.12 upgrades are detailed in Section 2, System Description. Figure 5 shows the MODIS ... bottom images were processed with AOPS v4.12; MODIS imagery on the left and VIIRS on the right. The most significant change in the VIIRS imagery is

  8. An Image Processing Approach to Linguistic Translation

    Science.gov (United States)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.

  9. Detecting jaundice by using digital image processing

    Science.gov (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice presents, babies or adults must undergo clinical exams such as the serum bilirubin test, which can be traumatic for patients. Jaundice often presents in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a pain-free method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters to characterize patients with and without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
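
    A minimal sketch of the classification stage, using mean RGB values per image as features for a support vector machine; the paper's actual feature extraction and kernel choice are not specified, and scikit-learn is assumed:

        # Sketch: mean RGB per skin image as a 3-dimensional feature vector,
        # classified with an SVM. Kernel choice is an illustrative assumption.
        import numpy as np
        from sklearn.svm import SVC

        def train_jaundice_classifier(images, labels):
            """images: list of HxWx3 RGB arrays; labels: 1 = jaundice, 0 = healthy."""
            features = np.array([img.reshape(-1, 3).mean(axis=0) for img in images])
            clf = SVC(kernel="rbf")
            return clf.fit(features, labels)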

  10. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
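
    For reference, single-CPU filtered back-projection takes only a few lines with scikit-image's radon/iradon, which stand in here for the Laboratory's own implementation; the SPRINT work parallelizes exactly this computation across nodes:

        # Sketch: forward-project a toy phantom, then reconstruct it with
        # filtered back-projection (ramp filter). scikit-image is assumed.
        import numpy as np
        from skimage.transform import radon, iradon

        phantom = np.zeros((512, 512))
        phantom[200:300, 220:280] = 1.0                  # toy object
        angles = np.linspace(0.0, 180.0, 512, endpoint=False)
        sinogram = radon(phantom, theta=angles)          # projections
        reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")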

  11. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

    A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose. The images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two pairs of simulated nodules were positioned over the lungs and the mediastinum of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs. The time spent viewing the full-size radiographs and the digital images was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant difference in the detectability of the simulated nodules was seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization and image processing were available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung. No difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab

  12. Validity of whole slide images for scoring HER2 chromogenic in situ hybridisation in breast cancer

    NARCIS (Netherlands)

    Al-Janabi, Shaimaa; Horstman, Anja; van Slooten, Henk Jan; Kuijpers, Chantal; Lai-A-Fat, Clifton; van Diest, Paul J.; Jiwa, Mehdi

    2016-01-01

    Aim: Whole slide images (WSIs) have stimulated a paradigm shift from conventional to digital pathology in several applications within pathology. Due to the fact that WSIs have not yet been approved for primary diagnostics, validating their use for different diagnostic purposes is still mandatory. The

  13. Validity of bioluminescence measurements for noninvasive in vivo imaging of tumor load in small animals

    NARCIS (Netherlands)

    Klerk, Clara P. W.; Overmeer, Renée M.; Niers, Tatjana M. H.; Versteeg, Henri H.; Richel, Dick J.; Buckle, Tessa; van Noorden, Cornelis J. F.; van Tellingen, Olaf

    2007-01-01

    A relatively new strategy to longitudinally monitor tumor load in intact animals and the effects of therapy is noninvasive bioluminescence imaging (BLI). The validity of BLI for quantitative assessment of tumor load in small animals is critically evaluated in the present review. Cancer cells are

  14. Development and validation of the Dutch Questionnaire God Image : Effects of mental health and religious culture

    NARCIS (Netherlands)

    Schaap-Jonker, Hanneke; Eurelings-Bonekoe, Elisabeth E.M.; Jonker, Evert R.; Zock, Hetty

    2008-01-01

    This article presents the Dutch Questionnaire God Image (QGI), which has two theory-based dimensions: feelings towards God and perceptions of God's actions. This instrument was validated among a sample of 804 respondents, of whom 244 received psychotherapy. Results showed relationships

  15. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly; the effect is especially remarkable for low S/N image pairs.
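
    A sketch of such a pipeline: a nonlinear pre-processing step (Sobel gradient magnitude is an illustrative stand-in, as the paper's transform is not specified) applied to both images before normalized intensity correlation:

        # Sketch: nonlinear pre-processing of reference and sensed images, then
        # normalized cross-correlation to locate the best match offset.
        import numpy as np
        from skimage.filters import sobel
        from skimage.feature import match_template

        def match_after_preprocessing(reference, sensed_template):
            ref_p = sobel(reference.astype(float))         # nonlinear transform
            tmp_p = sobel(sensed_template.astype(float))
            ncc = match_template(ref_p, tmp_p)             # normalized correlation
            return np.unravel_index(np.argmax(ncc), ncc.shape)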

  16. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  18. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  19. NASA Construction of Facilities Validation Processes - Total Building Commissioning (TBCx)

    Science.gov (United States)

    Hoover, Jay C.

    2004-01-01

    Key attributes include: a Total Quality Management (TQM) system that looks at all phases of a project; a team process that spans boundaries; a Commissioning Authority to lead the process; commissioning requirements in contracts; independent design review to verify compliance with the Facility Project Requirements (FPR); a formal written Commissioning Plan with documented results; and functional performance testing (FPT) against the requirements document.

  20. Definition and validation of process mining use cases

    NARCIS (Netherlands)

    Ailenei, I.; Rozinat, A.; Eckert, A.; Aalst, van der W.M.P.; Daniel, F.; Barkaoui, K.; Dustdar, S.

    2012-01-01

    Process mining is an emerging topic in the BPM marketplace. Recently, several (commercial) software solutions have become available. Due to the lack of an evaluation framework, it is very difficult for potential users to assess the strengths and weaknesses of these process mining tools. As the first

  1. Simple process capability analysis and quality validation of ...

    African Journals Online (AJOL)

    Many approaches can be applied to improve a process, and one of them is choosing the correct six sigma design of experiments (DOE). In this study, Taguchi's experimental design was applied to achieve a high percentage of cell viability in the fermentation experiment. The process capability of this study was later analyzed by ...

  2. Validity and reliability of the Multidimensional Body Image Scale in Malaysian university students.

    Science.gov (United States)

    Gan, W Y; Mohd, Nasir M T; Siti, Aishah H; Zalilah, M S

    2012-12-01

    This study aimed to evaluate the validity and reliability of the Multidimensional Body Image Scale (MBIS), a seven-factor, 62-item scale developed for Malaysian female adolescents. This scale was evaluated among male and female Malaysian university students. A total of 671 university students (52.2% women and 47.8% men) completed a self-administered questionnaire comprising the MBIS, the Eating Attitude Test-26, and the Rosenberg Self-Esteem Scale. Their height and weight were measured. Confirmatory factor analysis showed that the 62-item MBIS fit the data poorly (χ²/df = 4.126); the shortened 46-item version correlated as expected with eating attitudes and self-esteem. Also, this scale discriminated well between participants with and without disordered eating. The MBIS-46 demonstrated good reliability and validity for the evaluation of body image among university students. Further studies need to be conducted to confirm the validation results of the 46-item MBIS.

  3. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied to subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol

  4. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not previously been demonstrated, the subjective experience of our radiologists, namely that apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  5. High-resolution multi-band imaging for validation and characterization of small Kepler planets

    International Nuclear Information System (INIS)

    Everett, Mark E.; Silva, David R.; Barclay, Thomas; Howell, Steve B.; Ciardi, David R.; Horch, Elliott P.; Crepp, Justin R.

    2015-01-01

    High-resolution ground-based optical speckle and near-infrared adaptive optics images are taken to search for stars in close angular proximity to host stars of candidate planets identified by the NASA Kepler Mission. Neighboring stars are a potential source of false positive signals. These stars also blend into Kepler light curves, affecting estimated planet properties, and are important for an understanding of planets in multiple star systems. Deep images with high angular resolution help to validate candidate planets by excluding potential background eclipsing binaries as the source of the transit signals. A study of 18 Kepler Object of Interest stars hosting a total of 28 candidate and validated planets is presented. Validation levels are determined for 18 planets against the likelihood of a false positive from a background eclipsing binary. Most of these are validated at the 99% level or higher, including five newly validated planets in two systems: Kepler-430 and Kepler-431. The stellar properties of the candidate host stars are determined by supplementing existing literature values with new spectroscopic characterizations. Close neighbors of seven of these stars are examined using multi-wavelength photometry to determine their nature and influence on the candidate planet properties. Most of the close neighbors appear to be gravitationally bound secondaries, while a few are best explained as closely co-aligned field stars. Revised planet properties are derived for each candidate and validated planet, including cases where the close neighbors are the potential host stars.

  6. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware (HW) configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  7. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal or image processing techniques, such as adaptive filtering or detection or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing, by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  8. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image processing based method is more objective, fast and accurate, and it represents the main development trend of yarn appearance evaluation.

  9. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    Science.gov (United States)

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  10. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  11. Creation and Validation of the Self-esteem/Self-image Female Sexuality (SESIFS) Questionnaire

    Directory of Open Access Journals (Sweden)

    Maria C.O. Lordello

    2014-01-01

    Full Text Available Introduction: Self-esteem and self-image are psychological aspects that affect sexual function. Aims: To validate a new measurement tool that correlates the concepts of self-esteem, self-image, and sexuality. Methods: A 20-question test (the Self-esteem/Self-image Female Sexuality [SESIFS] questionnaire) was created and tested on 208 women. Participants answered: Rosenberg's self-esteem scale, the female sexual quotient (FSQ), and the SESIFS questionnaire. Pearson's correlation coefficient was used to test concurrent validity of the SESIFS against Rosenberg's self-esteem scale and the FSQ. Reliability was tested using Cronbach's alpha coefficient. Results: The new questionnaire had good overall reliability (Cronbach's alpha r = 0.862, p < 0.001), but the sexual domain scored lower than expected (r = 0.65). The validity was good: overall score r = 0.38, p < 0.001; self-esteem domain r = 0.32, p < 0.001; self-image domain r = 0.31, p < 0.001; sexual domain r = 0.29, p < 0.001. Conclusions: The SESIFS questionnaire has limitations in measuring the correlation among the self-esteem, self-image, and sexuality domains. A new, revised version is being tested and will be presented in an upcoming publication.
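
    Cronbach's alpha, the reliability coefficient reported above, follows directly from the item-score matrix. A minimal sketch in Python (the data layout is assumed: one row per respondent, one column per questionnaire item):

        import numpy as np

        def cronbach_alpha(scores):
            """alpha = k/(k-1) * (1 - sum of item variances / variance of
            the respondents' total scores)."""
            x = np.asarray(scores, dtype=float)
            k = x.shape[1]
            item_var = x.var(axis=0, ddof=1).sum()
            total_var = x.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)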

  12. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used determines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
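
    The paper's reconstruction-grid shifting is specific to PET, but the generic SR step it feeds, interleaving low-resolution images acquired at sub-pixel offsets onto a finer grid, can be sketched as follows (a simplification with hypothetical function and argument names; subset schemes such as ISR-1/ISR-2 leave some fine-grid positions uncovered, which in practice would be interpolated):

        import numpy as np

        def combine_povs(low_res_images, shifts, up=2):
            """Place each low-resolution image on an 'up'-times finer grid
            at its sub-pixel shift; positions covered by several POVs are
            averaged, positions covered by none stay zero."""
            h, w = low_res_images[0].shape
            acc = np.zeros((h * up, w * up))
            cnt = np.zeros_like(acc)
            for img, (dy, dx) in zip(low_res_images, shifts):
                acc[dy::up, dx::up] += img
                cnt[dy::up, dx::up] += 1
            return acc / np.maximum(cnt, 1)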

  13. A study on the validity of strategic classification processes

    International Nuclear Information System (INIS)

    Tae, Jae Woong; Shin, Dong Hun

    2013-01-01

    Commodity classification identifies strategic commodities. The export license verifies that exports have met the conditions required by the international export control system. NSSC (Nuclear Safety and Security Commission) operates NEPS (Nuclear Export Promotion Service) for export control of nuclear items. NEPS contributed to reducing the process time related to submission of documents, issuing certificates and licenses, etc. Nonetheless, it became necessary to enhance the capacity to implement export control precisely and efficiently, as the development of the Korean nuclear industry led to a sharp increase in exports. To provide more efficient ways, development of an advanced export control system, IXCS (Intelligent eXport Control System), was suggested. To build IXCS successfully, export control experts have analyzed the Korean export control system. Two classification processes, for items and for technology, were derived as a result of the research. However, they may reflect real cases insufficiently because they were derived from experts' discussion. This study evaluated how well the processes explain real cases. Although the derived processes explained real cases well, some recommendations for improvement were found through this study. These evaluation results will help to make the classification flow charts more compatible with the current export system. Most classification reports on equipment and materials deliberated on specifications and functions, while related systems were not considered. If a 'specification review' stage is added to the current process and unnecessary stages are deleted, the accuracy of the flow chart will improve. In the classification of nuclear technology, a detailed process to identify specific information and data needs to be specified to decrease subjectivity. Whether they are imitations or not is an unnecessary factor in both processes. The successful development of IXCS needs accurate export control processes as well as IT technology. If these classification processes are

  14. Digital image processing for radiography in nuclear power plants

    International Nuclear Information System (INIS)

    Heidt, H.; Rose, P.; Raabe, P.; Daum, W.

    1985-01-01

    With the help of digital processing of radiographic images from reactor components it is possible to increase the security and objectivity of the evaluation. Several examples of image processing procedures (contrast enhancement, density profiles, shading correction, digital filtering, superposition of images, etc.) show the advantages for the visualization and evaluation of radiographs. Digital image processing can reduce some of the restrictions of radiography in nuclear power plants. In addition, a higher degree of automation can be cost-saving and increase the quality of radiographic evaluation. The aim of the work performed was to improve the readability of radiographs for the human observer. The main problem is the lack of contrast and the presence of disturbing structures like weld seams. Digital image processing of film radiographs starts with the digitization of the image. Conventional systems use TV cameras or scanners and provide a dynamic range of 1.5 to 3 density units, which is digitized to 256 grey levels. For the enhancement process it is necessary that the grey level range covers the density range of the important regions of the presented film. On the other hand, the grey level coverage should not be wider than necessary, to minimize the width of digitization steps. Poor digitization makes flaws and cracks invisible and spoils all further image processing.

  15. Validation process of ISIS CFD software for fire simulation

    International Nuclear Information System (INIS)

    Lapuerta, C.; Suard, S.; Babik, F.; Rigollet, L.

    2012-01-01

    Fire propagation constitutes a major safety concern in nuclear facilities. In this context, IRSN is developing a CFD code, named ISIS, dedicated to fire simulations. This software is based on a coherent set of models that can be used to describe a fire in large, mechanically ventilated compartments. The system of balance equations obtained by combining these models is discretized in time using fractional step methods, including a pressure correction technique for solving hydrodynamic equations. Discretization in space combines two techniques, each proven in the relevant context: mixed finite elements for hydrodynamic equations and finite volumes for transport equations. ISIS is currently in an advanced stage of verification and validation. The results obtained for a full-scale fire test performed at IRSN are presented.

  16. Reliability and validity of the body image quality of life inventory: version for Brazilian burn victims.

    Science.gov (United States)

    Assunção, Flávia Fernanda Oliveira; Dantas, Rosana Aparecida Spadoti; Ciol, Márcia Aparecida; Gonçalves, Natália; Farina, Jayme Adriano; Rossi, Lidia Aparecida

    2013-06-01

    The aims of this study were to adapt the Body Image Quality of Life Inventory (BIQLI) into Brazilian Portuguese (BP) and to assess the psychometric properties of the adapted version. Construct validity was assessed by correlating the BIQLI-BP scores with Rosenberg's Self-Esteem Scale, with the Burns Specific Health Scale-Revised (BSHS-R), and with gender, total body surface area burned, and visibility of the scars. Participants were 77 adult burn patients. Cronbach's alpha for the adapted version was .90, and moderate linear correlations were found between body image and self-esteem and between BIQLI-BP scores and two domains of the BSHS-R: affect and body image, and interpersonal relationships. The BIQLI-BP showed acceptable levels of reliability and validity for Brazilian burn patients. Copyright © 2013 Wiley Periodicals, Inc.

  17. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) for improving the flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, a filter and interpolation method, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results for the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared with 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy in imaging flow fields with high velocity gradients.
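
    The displacement estimation at the heart of Echo PIV cross-correlates interrogation windows from consecutive frames; the sub-pixel refinement mentioned above is commonly a three-point Gaussian peak fit. A minimal sketch under that assumption (not the authors' exact implementation):

        import numpy as np
        from scipy.signal import fftconvolve

        def window_displacement(win_a, win_b):
            """Displacement of win_b relative to win_a via FFT-based
            cross-correlation with three-point Gaussian sub-pixel fitting."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = fftconvolve(b, a[::-1, ::-1], mode='full')
            iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

            def gauss_fit(c, i):
                # valid only for an interior peak with positive neighbours
                if 0 < i < len(c) - 1 and min(c[i - 1], c[i], c[i + 1]) > 0:
                    l, m, r = np.log(c[i - 1]), np.log(c[i]), np.log(c[i + 1])
                    return i + (l - r) / (2.0 * (l - 2.0 * m + r))
                return float(i)

            dy = gauss_fit(corr[:, ix], iy) - (win_a.shape[0] - 1)
            dx = gauss_fit(corr[iy, :], ix) - (win_a.shape[1] - 1)
            return dy, dx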

  18. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  19. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
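
    SUBTRANS itself is a MATLAB package; what a block-transform stage of this kind computes can be sketched in Python as follows (an assumed 8x8 orthonormal DCT; a subband is then one coefficient index gathered across all blocks):

        import numpy as np
        from scipy.fft import dct

        def block_dct(image, block=8):
            """Apply a 2-D orthonormal DCT independently to each
            block x block tile of the image."""
            h, w = (d - d % block for d in image.shape)
            img = np.asarray(image[:h, :w], dtype=float)
            tiles = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
            tiles = dct(dct(tiles, axis=-1, norm='ortho'), axis=-2, norm='ortho')
            return tiles.swapaxes(1, 2).reshape(h, w)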

  20. A very high energy imaging for radioactive wastes processing

    International Nuclear Information System (INIS)

    Moulin, V.; Pettier, J.L.

    2004-01-01

    X-ray imaging occurs at many steps of radioactive waste processing: selection for conditioning, physical characterization with a view to radiological characterization, and quality control of the product before storage, transport or disposal. The size and volume of the objects considered here necessitate working with very high energy systems. Several examples show under which conditions this X-ray imaging is carried out, as well as the contribution of the obtained images. (O.M.)

  1. Digital Image Processing Overview For Helmet Mounted Displays

    Science.gov (United States)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  2. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of images. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Color sensitivity of the multi-exposure HDR imaging process

    Science.gov (United States)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which have an influence on the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph in order to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
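
    The irradiance-recovery step described here is commonly implemented with Debevec-Malik-style weighted averaging across exposures. A minimal sketch, assuming single-channel 8-bit frames and a known inverse response (identity below, i.e. a linear sensor):

        import numpy as np

        def radiance_map(ldr_stack, exposure_times, inv_response=lambda z: z):
            """Fuse exposures into a relative radiance map: per pixel,
            average inv_response(Z)/dt, weighted by a hat function that
            down-weights under- and over-exposed values."""
            z = np.stack([np.asarray(img, dtype=float) for img in ldr_stack])
            w = 1.0 - 2.0 * np.abs(z / 255.0 - 0.5)          # hat weights
            dt = np.asarray(exposure_times, dtype=float)[:, None, None]
            irr = inv_response(z) / dt
            return (w * irr).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-6)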

  4. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: China's potato staple food strategy clearly pointed out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced raw and processed potato detection methods. Method: Research literature in the field of image recognition based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., was consulted, and the development and direction of this field were summarized. Result: In order to obtain whole potato surface information, hardware was built by synchronizing the image sensor and conveyor belt to achieve multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even round and oval potatoes, with recognition accuracy of more than 83%. Weight is an important indicator for potato grading, and the image classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with the main affecting factors being damage shape and damage time. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black heart image recognition must be operated in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices and fries, etc. Conclusion: Image recognition as a rapid food detection tool has been widely researched in the area of raw and processed potato quality analyses; its techniques and equipment have the potential for commercialization in the short term, to meet the strategy demand of developing potato as

  5. Some aspects of image processing using foams

    International Nuclear Information System (INIS)

    Tufaile, A.; Freire, M.V.; Tufaile, A.P.B.

    2014-01-01

    We have explored some concepts of chaotic dynamics and wave light transport in foams. Using some experiments, we have obtained the main features of light intensity distribution through foams. We are proposing a model for this phenomenon, based on the combination of two processes: a diffusive process and another one derived from chaotic dynamics. We have presented a short outline of the chaotic dynamics involving light scattering in foams. We also have studied the existence of caustics from scattering of light from foams, with typical patterns observed in the light diffraction in transparent films. The nonlinear geometry of the foam structure was explored in order to create optical elements, such as hyperbolic prisms and filters. - Highlights: • We have obtained the light scattering in foams using experiments. • We model the light transport in foams using a chaotic dynamics and a diffusive process. • An optical filter based on foam is proposed

  6. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  7. Process chain validation in micro and nano replication

    DEFF Research Database (Denmark)

    Calaon, Matteo

    to quantification of replication quality over large areas of surface topography based on areal detection technique and angular diffraction measurements were developed. A series of injection molding and compression molding experiments aimed at process analysis and optimization showed the possibility to control...... features dimensional accuracy variation through the identification of relevant process parameters. Statistical design of experiment results, showed the influence of both process parameters (mold temperature, packing time, packing pressure) and design parameters (channel width and direction with respect......Innovations in nanotechnology propose applications integrating micro and nanometer structures fabricated as master geometries for final replication on polymer substrates. The possibility for polymer materials of being processed with technologies enabling large volume production introduces solutions...

  8. Simple process capability analysis and quality validation of ...

    African Journals Online (AJOL)

    GREGORY

    2011-12-16

    Dec 16, 2011 ... University Malaysia, Gombak, P.O. Box 10, 50728 Kuala Lumpur, Malaysia. Accepted 7 .... used in the manufacturing industry as a process performance indicator. ... Six Sigma for Electronics design and manufacturing.

  9. PolyNano M.6.1.1 Process validation state-of-the-art

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo

    2012-01-01

    Nano project. Methods for replication process validation are presented and will be further investigated in WP6 “Process Chain Validation” and applied to PolyNano study cases. Based on the available information, effective best practice standard process validation will be defined and implemented...... assessment methods, and presents measuring procedures/techniques suitable for replication fidelity studies. The report reviews state‐of‐the‐art research results regarding replication obtained at different scales, tooling technologies based on surface replication, process validation through design...

  10. Guideline validation in multiple trauma care through business process modeling.

    Science.gov (United States)

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available paper-based as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based version into an electronic format and analyzed the structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools, which check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development, a representation independent of specific applications or specific providers is necessary. Then, clinical guidelines could additionally be used for eLearning, process optimization and workflow management.

  11. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind techniques, this approach is brand new. In this method, the coded aperture processing method was used for the first time independently of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system was overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out, providing fairly good results. In the visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral imaging was made with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.

  12. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Curve (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.

  13. Processed images in human perception: A case study in ultrasound breast imaging

    International Nuclear Information System (INIS)

    Yap, Moi Hoon; Edirisinghe, Eran; Bez, Helmut

    2010-01-01

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Curve (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.

  14. Image processing. A system for the automatic sorting of chromosomes

    International Nuclear Information System (INIS)

    Najai, Amor

    1977-01-01

    The present paper deals with two aspects of the system: - an automaton (specialized hardware) dedicated to image processing. Images are digitized, divided into sub-units, and computations are carried out on their main parameters. - Software for the automatic recognition and sorting of chromosomes, implemented on a Multi-20 minicomputer connected to the automaton. (author) [fr]

  15. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing and comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat.
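
    The band ratioing and false-color compositing performed by the video system in analog form have a direct digital counterpart. A minimal sketch (a percentile contrast stretch per band; the band inputs are hypothetical arrays):

        import numpy as np

        def false_color(band_r, band_g, band_b):
            """Stretch three spectral bands to 0..255 and stack them as a
            false-color RGB composite."""
            def stretch(band):
                b = np.asarray(band, dtype=float)
                lo, hi = np.percentile(b, (2, 98))           # robust limits
                return np.clip((b - lo) / (hi - lo + 1e-9) * 255,
                               0, 255).astype(np.uint8)
            return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])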

  16. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    A CCD camera was used for image acquisition of the different green colorations of the maize leaves at maturity. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classifies different levels of maturity. Keywords: Maize, Maturity, CCD ...

  17. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  18. Automatic image processing as a means of safeguarding nuclear material

    International Nuclear Information System (INIS)

    Kahnmeyer, W.; Willuhn, K.; Uebel, W.

    1985-01-01

    Problems involved in computerized analysis of pictures taken by automatic film or video cameras in the context of international safeguards implementation are described. They include technical ones as well as the need to establish objective criteria for assessing image information. In the near future automatic image processing systems will be useful in verifying the identity and integrity of IAEA seals. (author)

  19. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    Science.gov (United States)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  20. SmartWeld/SmartProcess - intelligent model based system for the design and validation of welding processes

    Energy Technology Data Exchange (ETDEWEB)

    Mitchner, J.

    1996-04-01

    Diagrams are presented on an intelligent model based system for the design and validation of welding processes. Key capabilities identified include 'right the first time' manufacturing, continuous improvement, and on-line quality assurance.

  1. Validation of hand and foot anatomical feature measurements from smartphone images

    Science.gov (United States)

    Amini, Mohammad; Vasefi, Fartash; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application, previously presented as a tool for individuals with hand arthritis to assess and monitor the progress of their disease, has been modified and expanded to include extraction of anatomical features from the hand (joint/finger width, and angulation) and foot (length, width, big toe angle, and arch height index) from smartphone camera images. Image processing algorithms and automated measurements were validated by performing tests on digital hand models, rigid plastic hand models, and real human hands and feet to determine accuracy and reproducibility compared to conventional measurement tools such as calipers, rulers, and goniometers. The mobile application was able to provide finger joint width measurements with accuracy better than 0.34 (+/-0.25) millimeters. Joint angulation measurement accuracy was better than 0.50 (+/-0.45) degrees. The automatically calculated foot length accuracy was 1.20 (+/-1.27) millimeters and the foot width accuracy was 1.93 (+/-1.92) millimeters. Hallux valgus angle (used in assessing bunions) accuracy was 1.30 (+/-1.29) degrees. Arch height index (AHI) measurements had an accuracy of 0.02 (+/-0.01). Combined with in-app documentation of symptoms, treatment, and lifestyle factors, the anatomical feature measurements can be used by both healthcare professionals and manufacturers. Applications include: diagnosing hand osteoarthritis; providing custom finger splint measurements; providing compression glove measurements for burn and lymphedema patients; determining foot dimensions for custom shoe sizing, insoles, orthotics, or foot splints; and assessing arch height index and bunion treatment effectiveness.
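
    The angular measurements above reduce to angles between annotated landmark points, and the arch height index to a ratio of two lengths. A minimal sketch (the landmark inputs are hypothetical; AHI is taken in its common definition, dorsum height at 50% of foot length over truncated foot length):

        import numpy as np

        def angle_deg(p, vertex, q):
            """Angle at 'vertex' (degrees) between rays vertex->p and
            vertex->q, e.g. a hallux valgus angle from three landmarks."""
            u = np.asarray(p, dtype=float) - np.asarray(vertex, dtype=float)
            v = np.asarray(q, dtype=float) - np.asarray(vertex, dtype=float)
            c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

        def arch_height_index(dorsum_height, truncated_foot_length):
            """AHI: dorsum height at 50% of foot length divided by the
            truncated foot length (same units)."""
            return dorsum_height / truncated_foot_length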

  2. Application of digital image processing for the generation of voxels phantoms for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, L.S.; Menezes, A.F.; Cardoso, M.A.C. [Programa de Engenharia Nuclear/COPPE (Brazil); Rosa, L.A.R. da [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Batista, D.V.S. [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Instituto Nacional de Cancer-Secao de Fisica Medica, Praca Cruz Vermelha, 23-Centro, 20230-130 Rio de Janeiro, RJ (Brazil); Cardoso, S.C. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade Federal do Rio de Janeiro, Bloco A-Sala 307, CP 68528, CEP 21941-972 Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear/COPPE (Brazil); Departamento de Engenharia Nuclear/Escola Politecnica, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970 Rio de Janeiro, RJ (Brazil); Facure, A. [Comissao Nacional de Energia Nuclear, R. Gal. Severiano 90, sala 409, 22294-900 Rio de Janeiro, RJ (Brazil)

    2012-01-15

    This paper presents the application of a computational methodology for optimizing the conversion of medical tomographic images into voxel anthropomorphic models for simulation of radiation transport using the MCNP code. A computational system was developed for digital image processing that compresses the information from the DICOM medical image before it is converted to the Scan2MCNP software input file for optimization of the image data. In order to validate the computational methodology, a radiosurgery treatment simulation was performed using the Alderson Rando phantom, for which DICOM images were acquired. The simulation results were compared with data obtained with the BrainLab planning system. The comparison showed good agreement for three orthogonal treatment beams of 60Co gamma radiation. The percentage differences were 3.07%, 0.77% and 6.15% for the axial, coronal and sagittal projections, respectively. - Highlights: • We use a method to optimize the CT image conversion into a voxel model for MCNP simulation. • We present a methodology to compress a DICOM image before conversion to the input file. • To validate this study, an idealized radiosurgery applied to the Alderson phantom was used.
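
    The compression stage, reducing DICOM data before the voxel model is written, can be approximated by stacking slices into Hounsfield units and block-averaging. A minimal sketch using pydicom (an assumption; the actual Scan2MCNP workflow and the HU-to-material mapping for MCNP are not reproduced here):

        import numpy as np
        import pydicom  # assumed available for reading DICOM CT slices

        def dicom_to_voxels(slice_paths, factor=2):
            """Stack CT slices into a 3-D Hounsfield-unit array, then
            block-average by 'factor' to shrink the voxel lattice."""
            slices = sorted((pydicom.dcmread(p) for p in slice_paths),
                            key=lambda s: float(s.ImagePositionPatient[2]))
            hu = np.stack([s.pixel_array * float(s.RescaleSlope)
                           + float(s.RescaleIntercept) for s in slices])
            z, y, x = (d - d % factor for d in hu.shape)
            hu = hu[:z, :y, :x]
            return hu.reshape(z // factor, factor, y // factor, factor,
                              x // factor, factor).mean(axis=(1, 3, 5))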

  3. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  4. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of

  5. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  6. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observation on an ordinary microscope requires precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with the user's needs. The digital microscope has eliminated the need for direct observation by the human eye as with the traditional microscope.
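
    Of the listed utilities, histogram equalization is the least obvious; a minimal sketch of how such a function might be implemented for an 8-bit grayscale frame (not the authors' code):

        import numpy as np

        def equalize(gray):
            """Histogram equalization via a cumulative-distribution LUT."""
            hist = np.bincount(gray.ravel(), minlength=256)
            cdf = hist.cumsum()
            cdf_min = cdf[cdf > 0][0]                # first occupied bin
            denom = max(int(cdf[-1] - cdf_min), 1)
            lut = np.clip((cdf - cdf_min) * 255 // denom, 0, 255)
            return lut.astype(np.uint8)[gray]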

  7. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing application is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied on a JPEG-compressed low-quality image, is also described; it allows de-noising and enhancing the contours of the image.
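
    The screening operation itself, which the paper moves into the DCT domain, is in the spatial domain a comparison of each pixel against a tiled threshold mask. A minimal spatial-domain sketch (a standard 4x4 Bayer mask; the paper's compressed-domain equivalent thresholds DCT coefficients instead):

        import numpy as np

        BAYER_4 = np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) / 16.0

        def screen(gray):
            """Binarize an 8-bit grayscale image by ordered-dither
            screening against a tiled threshold mask."""
            h, w = gray.shape
            mask = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return ((gray / 255.0 > mask) * 255).astype(np.uint8)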

  8. Best practice strategies for validation of micro moulding process simulation

    DEFF Research Database (Denmark)

    Costa, Franco; Tosello, Guido; Whiteside, Ben

    2009-01-01

    are the optimization of the moulding process and of the tool using simulation techniques. Therefore, in polymer micro manufacturing technology, software simulation tools adapted from conventional injection moulding can provide useful assistance for the optimization of moulding tools, mould inserts, micro component...... are discussed. Recommendations regarding sampling rate, meshing quality, filling analysis methods (micro short shots, flow visualization) and machine geometry modelling are given on the basis of the comparison between simulated and experimental results within the two considered study cases.......Simulation programs in polymer micro replication technology are used for the same reasons as in conventional injection moulding. To avoid the risks of costly re-engineering, the moulding process is simulated before starting the actual manufacturing process. Important economic factors...

  9. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    Directory of Open Access Journals (Sweden)

    Amy M. Ashman

    2017-01-01

    Full Text Available Image-based dietary records could lower the participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated the relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days, once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete; median age 29 years, 15 primiparas, eight Aboriginal Australians) completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women.
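
    The Bland-Altman analysis used above summarizes agreement as a mean bias with 95% limits of agreement. A minimal sketch (paired intake estimates from the two methods as inputs):

        import numpy as np

        def bland_altman(method_a, method_b):
            """Mean bias and 95% limits of agreement between two methods."""
            diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
            bias = diff.mean()
            spread = 1.96 * diff.std(ddof=1)
            return bias, (bias - spread, bias + spread)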

  10. Digital image processing applied Rock Art tracing

    Directory of Open Access Journals (Sweden)

    Montero Ruiz, Ignacio

    1998-06-01

    Full Text Available Adequate graphic recording has been one of the main objectives of rock art research. Photography has increased its role as a documentary technique. Now, digital images and their processing allow new ways to observe the details of the figures and to develop a recording procedure which is as accurate as, or more accurate than, direct tracing. This technique also avoids deterioration of the rock paintings. The mathematical basis of this method is also presented.

    The correct documentation of rock art has been a constant concern of researchers. In the development of new recording techniques, both direct and indirect, photography has progressively gained prominence. The digital image and its processing permit new possibilities for observing the depicted figures and, consequently, a reading by means of indirect tracings of equal or greater reliability than direct observation. This system avoids the risks of deterioration caused by direct tracings. The mathematical bases supporting the method are included.

  11. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang

    2014-01-01

    Full Text Available This paper aims to provide a method to represent the virtual Arabidopsis plant at each growth stage. It includes simulating the shape and providing growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background with the chromatic coordinates. From the segmentation result, the outer boundary series are obtained by using a boundary tracking algorithm. Elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf, which is achieved by using a wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stages of the Arabidopsis plant with less data and provide a visual platform for plant growth research.
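
    Below is a minimal sketch of elliptic Fourier descriptors for a closed contour, the shape representation used above. It follows the standard Kuhl-Giardina formulation; the synthetic leaf-like contour and the harmonic order are illustrative assumptions, not the paper's settings.

```python
# Elliptic Fourier descriptors of a closed contour: a compact, truncated
# harmonic representation of the boundary shape.
import numpy as np

def elliptic_fourier_coeffs(contour, order=10):
    """Kuhl-Giardina elliptic Fourier coefficients of a closed (N, 2) contour."""
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)   # close the contour
    dt = np.hypot(d[:, 0], d[:, 1])
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]
    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = np.cos(2 * np.pi * n * t / T)
        s = np.sin(2 * np.pi * n * t / T)
        k = T / (2 * np.pi**2 * n**2)
        coeffs[n - 1] = [k * np.sum(d[:, 0] / dt * np.diff(c)),   # a_n
                         k * np.sum(d[:, 0] / dt * np.diff(s)),   # b_n
                         k * np.sum(d[:, 1] / dt * np.diff(c)),   # c_n
                         k * np.sum(d[:, 1] / dt * np.diff(s))]   # d_n
    return coeffs

# Synthetic leaf-like outline: a circle modulated by a third harmonic.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1 + 0.3 * np.cos(3 * theta)
contour = np.c_[r * np.cos(theta), r * np.sin(theta)]
print(elliptic_fourier_coeffs(contour, order=4).round(4))
```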

  12. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers, or can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image, which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator among image files, the FITS format.

  13. TALYS/TENDL verification and validation processes: Outcomes and recommendations

    Science.gov (United States)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark R.; Koning, Arjan; Rochman, Dimitri

    2017-09-01

    The TALYS-generated Evaluated Nuclear Data Libraries (TENDL) provide truly general-purpose nuclear data files assembled from the outputs of the T6 nuclear model codes system for direct use in both basic physics and engineering applications. The most recent TENDL-2015 version is based on both default and adjusted parameters of the most recent TALYS, TAFIS, TANES, TARES, TEFAL, TASMAN codes wrapped into a Total Monte Carlo loop for uncertainty quantification. TENDL-2015 contains complete neutron-incident evaluations for all target nuclides with Z ≤ 116 with half-life longer than 1 second (2809 isotopes with 544 isomeric states), up to 200 MeV, with covariances and all reaction daughter products including isomers of half-life greater than 100 milliseconds. With the added High Fidelity Resonance (HFR) approach, all resonances are unique, following statistical rules. The validation of the TENDL-2014/2015 libraries against standard, evaluated, microscopic and integral cross sections has been performed against a newly compiled UKAEA database of thermal, resonance integral, Maxwellian averages, 14 MeV and various accelerator-driven neutron source spectra. This has been assembled using the most up-to-date, internationally recognised data sources including the Atlas of Resonances, CRC, evaluated EXFOR, activation databases, fusion, fission and MACS. Excellent agreement was found, with a small set of errors within the reference databases and TENDL-2014 predictions.

  14. TALYS/TENDL verification and validation processes: Outcomes and recommendations

    Directory of Open Access Journals (Sweden)

    Fleming Michael

    2017-01-01

    Full Text Available The TALYS-generated Evaluated Nuclear Data Libraries (TENDL) provide truly general-purpose nuclear data files assembled from the outputs of the T6 nuclear model codes system for direct use in both basic physics and engineering applications. The most recent TENDL-2015 version is based on both default and adjusted parameters of the most recent TALYS, TAFIS, TANES, TARES, TEFAL, TASMAN codes wrapped into a Total Monte Carlo loop for uncertainty quantification. TENDL-2015 contains complete neutron-incident evaluations for all target nuclides with Z ≤ 116 with half-life longer than 1 second (2809 isotopes with 544 isomeric states), up to 200 MeV, with covariances and all reaction daughter products including isomers of half-life greater than 100 milliseconds. With the added High Fidelity Resonance (HFR) approach, all resonances are unique, following statistical rules. The validation of the TENDL-2014/2015 libraries against standard, evaluated, microscopic and integral cross sections has been performed against a newly compiled UKAEA database of thermal, resonance integral, Maxwellian averages, 14 MeV and various accelerator-driven neutron source spectra. This has been assembled using the most up-to-date, internationally recognised data sources including the Atlas of Resonances, CRC, evaluated EXFOR, activation databases, fusion, fission and MACS. Excellent agreement was found, with a small set of errors within the reference databases and TENDL-2014 predictions.

  15. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
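
    A minimal sketch of the survey's evaluation protocol, under the assumption of a downscale-then-upscale round trip scored by PSNR (the paper uses several fidelity and timing measures), might look like this with scikit-image; the test image and scale factor are illustrative:

```python
# Downscale each image, rescale back with the same interpolation order, and
# score fidelity against the original with PSNR.
import numpy as np
from skimage import data, metrics, transform

original = data.camera().astype(float) / 255.0
results = {}
for name, order in [("nearest", 0), ("bilinear", 1), ("bicubic", 3)]:
    small = transform.rescale(original, 0.25, order=order, anti_aliasing=True)
    back = transform.resize(small, original.shape, order=order)
    results[name] = metrics.peak_signal_noise_ratio(original, back, data_range=1.0)

for name, psnr in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:>8s}: {psnr:.2f} dB")
```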

  16. Application of digital image processing to industrial radiography

    International Nuclear Information System (INIS)

    Bodson; Varcin; Crescenzo; Theulot

    1985-01-01

    Radiography is widely used for quality control in the fabrication of large reactor components. Image processing methods are applied to industrial radiographs in order to help take decisions as well as to reduce the costs and delays of examination. Films exposed under representative operating conditions are used to test the results obtained with algorithms for the restoration of images and for the detection and characterisation of indications, in order to determine the feasibility of automatic radiograph processing. [fr]

  17. Material model validation for laser shock peening process simulation

    International Nuclear Information System (INIS)

    Amarchinta, H K; Grandhi, R V; Langer, K; Stargel, D S

    2009-01-01

    Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters, such as laser spot size, pressure profile and material model, that must be precisely determined. This work focuses on investigating the appropriate material model to be used in simulation and design. In the LSP process the material is subjected to strain rates of 10⁶ s⁻¹, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic–plastic behavior of materials. Elastic perfectly plastic, Johnson–Cook and Zerilli–Armstrong models are used, and the performance of each model is compared with available experimental results.
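
    For reference, a minimal sketch of the Johnson–Cook flow-stress relation, one of the three models compared, is given below. The material constants are generic illustrative values, not the paper's calibration, and the thermal softening term is left at zero, which is a common simplification for room-temperature LSP simulations.

```python
# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m)
# Constants below are generic illustrative values.
import numpy as np

def johnson_cook(eps, epsdot, A=350e6, B=275e6, n=0.36, C=0.022,
                 epsdot0=1.0, T_star=0.0, m=1.0):
    """Flow stress (Pa) at plastic strain eps and strain rate epsdot (1/s).
    Rates below the reference rate are clamped for simplicity."""
    strain_term = A + B * eps**n
    rate_term = 1.0 + C * np.log(np.maximum(epsdot / epsdot0, 1.0))
    thermal_term = 1.0 - T_star**m          # T_star = 0: no thermal softening
    return strain_term * rate_term * thermal_term

# Flow stress at 5% plastic strain, from quasi-static up to LSP-like rates.
for rate in (1.0, 1e3, 1e6):
    print(f"{rate:8.0e} 1/s -> {johnson_cook(0.05, rate) / 1e6:6.1f} MPa")
```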

  18. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia)]; Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia)]; Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract the heart contours from noisy echocardiographic images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and to enhance the low contrast of echocardiographic images; after applying these techniques we obtain legible detection of the heart boundaries and valve movement by traditional edge detection methods.
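
    A minimal sketch of a pre-processing chain of the kind described (denoising filter, morphological smoothing, contrast adjustment, then a traditional edge detector) is given below using scikit-image; all parameter values and the stand-in test image are assumptions, not the authors' settings.

```python
# Denoise -> morphological smoothing -> contrast adjustment -> edge detection.
import numpy as np
from skimage import data, exposure, feature, filters, morphology

frame = data.camera().astype(float) / 255.0      # stand-in for an echo frame

denoised = filters.median(frame, morphology.disk(3))         # speckle reduction
opened = morphology.opening(denoised, morphology.disk(2))    # remove small specks
stretched = exposure.equalize_adapthist(opened)              # contrast adjustment
edges = feature.canny(stretched, sigma=2.0)                  # traditional detector

print(f"edge pixels: {int(edges.sum())} of {edges.size}")
```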

  19. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract the heart contours from noisy echocardiographic images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and to enhance the low contrast of echocardiographic images; after applying these techniques we obtain legible detection of the heart boundaries and valve movement by traditional edge detection methods.

  20. Body image in patients with adolescent idiopathic scoliosis: validation of the Body Image Disturbance Questionnaire--Scoliosis Version.

    Science.gov (United States)

    Auerbach, Joshua D; Lonner, Baron S; Crerand, Canice E; Shah, Suken A; Flynn, John M; Bastrom, Tracey; Penn, Phedra; Ahn, Jennifer; Toombs, Courtney; Bharucha, Neil; Bowe, Whitney P; Newton, Peter O

    2014-04-16

    Appearance concerns in individuals with adolescent idiopathic scoliosis can result in impairment in daily functioning, or body image disturbance. The Body Image Disturbance Questionnaire (BIDQ) is a self-reported, seven-question instrument that measures body image disturbance in general populations; no studies have specifically examined body image disturbance in those with adolescent idiopathic scoliosis. This study aimed to validate a modified version of the BIDQ in a population with adolescent idiopathic scoliosis and to establish discriminant validity by comparing responses of operatively and nonoperatively treated patients with those of normal controls. In the first phase, a multicenter study of forty-nine patients (mean age, fourteen years; thirty-seven female) with adolescent idiopathic scoliosis was performed to validate the BIDQ-Scoliosis version (BIDQ-S). Participants completed the BIDQ-S, Scoliosis Research Society (SRS)-22, Children's Depression Index (CDI), and Body Esteem Scale for Adolescents and Adults (BESAA) questionnaires. Descriptive statistics and Pearson correlation coefficients were calculated. In the second phase, ninety-eight patients with adolescent idiopathic scoliosis (mean age, 15.7 years; seventy-five female) matched by age and sex with ninety-eight healthy adolescents were enrolled into a single-center study to evaluate the discriminant validity of the BIDQ-S. Subjects completed the BIDQ-S and a demographic form before treatment. Independent-sample t tests and Pearson correlation coefficients were calculated. The BIDQ-S was internally consistent (Cronbach alpha = 0.82), and corrected item-total correlations ranged from 0.47 to 0.67. The BIDQ-S was significantly correlated with each domain of the SRS-22 and with the total score (r = -0.50 to -0.72, p ≤ 0.001), with the CDI (r = 0.31, p = 0.03), and with the BESAA (r = 0.60, p < 0.001). Patients with adolescent idiopathic scoliosis demonstrated greater body image disturbance compared with healthy controls. To our knowledge, this user-friendly instrument is the first to measure body image disturbance in patients with adolescent idiopathic scoliosis.

  1. Validation of the process control system of an automated large scale manufacturing plant.

    Science.gov (United States)

    Neuhaus, H; Kremers, H; Karrer, T; Traut, R H

    1998-02-01

    The validation procedure for the process control system of a plant for the large-scale production of human albumin from plasma fractions is described. A validation master plan is developed, defining the system and the elements to be validated, the interfaces with other systems together with the validation limits, a general validation concept and the supporting documentation. Based on this master plan, the validation protocols are developed. For the validation, the system is subdivided into a field level, which is the equipment part, and an automation level. The automation level is further subdivided into sections according to the different software modules. Based on a risk categorization of the modules, the qualification activities are defined. The test scripts for the different qualification levels (installation, operational and performance qualification) are developed according to a previously performed risk analysis.

  2. Image processing by use of the digital cross-correlator

    International Nuclear Information System (INIS)

    Katou, Yoshinori

    1982-01-01

    We manufactured a trial instrument that performs image processing using digital correlators. A digital correlator performs 64-bit parallel correlation at 20 MHz. The output of a digital correlator is a 7-bit word representing the correlation result. An A-D converter is used to quantize the input signal to a precision of six bits; the resulting 6-bit word is fed to six correlators wired in parallel. Image processing is thus achieved with 12-bit precision, and the digital output is converted back to an analog signal by a D-A converter. This instrument is named the digital cross-correlator. The image processing method computes convolutions with the digital correlator, which makes it possible to realize various digital filters. In the experiments, video signals from a TV camera were used; the digital image processing time was approximately 5 μs, and contrast enhancement and smoothing were demonstrated. The digital cross-correlator supports 16 kinds of image processing operations and was produced inexpensively. (author)
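
    The principle the instrument exploits, that convolution is simply cross-correlation with a flipped kernel so a correlator primitive can realize arbitrary digital filters, can be sketched as follows (an asymmetric edge kernel is used so the flip actually matters):

```python
# Convolution realized with a correlation primitive: correlate against the
# 180-degree-rotated kernel and the results match exactly.
import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.default_rng(1).random((16, 16))
# Asymmetric edge-detection kernel: flipping it matters.
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

via_convolution = convolve2d(image, kernel, mode="same")
via_correlation = correlate2d(image, kernel[::-1, ::-1], mode="same")

print(np.allclose(via_convolution, via_correlation))   # True
```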

  3. Dielectric barrier discharge image processing by Photoshop

    Science.gov (United States)

    Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

    2001-09-01

    In this paper, the filamentary pattern of a dielectric barrier discharge has been processed using Photoshop, and the coordinates of each filament can be obtained. Two different Photoshop-based methods have been used to analyze the spatial order of pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within experimental error, the results from the two methods agree.

  4. Dynamic modeling and validation of a lignocellulosic enzymatic hydrolysis process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Sin, Gürkan

    2013-01-01

    The enzymatic hydrolysis process is one of the key steps in second generation biofuel production. After being thermally pretreated, the lignocellulosic material is liquefied by enzymes prior to fermentation. The scope of this paper is to evaluate a dynamic model of the hydrolysis process on a demonstration scale reactor. The following novel features are included: the application of the Convection–Diffusion–Reaction equation to a hydrolysis reactor to assess transport and mixing effects; the extension of a competitive kinetic model with enzymatic pH dependency and hemicellulose hydrolysis; a comprehensive pH model; and viscosity estimations during the course of reaction. The model is evaluated against real data extracted from a demonstration scale biorefinery throughout several days of operation. All measurements are within prediction uncertainty and, therefore, the model constitutes a valuable tool.
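
    As a minimal sketch of the convection-diffusion-reaction balance mentioned above, the snippet below integrates dc/dt = D d²c/dx² − u dc/dx − k c on a 1-D reactor axis with explicit finite differences; all parameter values are illustrative, not those of the paper's model.

```python
# Explicit finite-difference integration of a 1-D convection-diffusion-reaction
# equation along a reactor axis. Parameters are illustrative only.
import numpy as np

nx, L = 100, 10.0                  # grid points, reactor length (m)
dx = L / (nx - 1)
D, u, k = 1e-3, 0.01, 1e-3         # diffusivity, axial velocity, reaction rate
dt = 0.4 * dx**2 / D               # stable time step for the explicit scheme

c = np.zeros(nx)
c[0] = 1.0                         # substrate concentration held at the inlet

for _ in range(5000):
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    conv = -u * (c[1:-1] - c[:-2]) / dx        # first-order upwind convection
    reac = -k * c[1:-1]
    c[1:-1] += dt * (diff + conv + reac)
    c[-1] = c[-2]                              # zero-gradient outlet condition

print(f"outlet concentration: {c[-1]:.4f}")
```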

  5. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and shown to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness value was found: the threshold resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
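
    The binarization criterion described in point 2 can be sketched directly: sweep candidate thresholds and keep the one that maximizes the number of connected objects. The snippet below does this with scikit-image on a synthetic image (a stand-in, not a fractograph):

```python
# Objective threshold selection: maximize the number of connected components.
import numpy as np
from skimage import measure

rng = np.random.default_rng(2)
img = rng.random((128, 128))       # stand-in for a grayscale crack-surface image

best_t, best_count = None, -1
for t in np.linspace(0.05, 0.95, 19):
    n_objects = measure.label(img > t).max()   # count connected components
    if n_objects > best_count:
        best_t, best_count = t, n_objects

print(f"selected threshold: {best_t:.2f} ({best_count} objects)")
```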

  6. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the data matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second GBM image dataset, acquired on scanners different from the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
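
    Steps (2)-(3) of the pipeline can be sketched as follows: build a superpixel-by-parameter data matrix, compute the parameter correlation matrix, and cluster it hierarchically to find groups of co-varying parameters ("signatures"). The data, cluster count and linkage choice below are placeholders, not the paper's configuration.

```python
# Cluster the parameter correlation matrix to find co-varying parameter groups.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
n_superpixels, n_params = 500, 10
X = rng.random((n_superpixels, n_params))   # rows: superpixels, cols: parameters

C = np.corrcoef(X, rowvar=False)            # parameter-by-parameter correlations
dist = 1.0 - np.abs(C)                      # strong correlation -> small distance
condensed = dist[np.triu_indices(n_params, k=1)]
Z = linkage(condensed, method="average")
signatures = fcluster(Z, t=3, criterion="maxclust")

print("parameter -> signature:", signatures)
```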

  7. Acceptance Probability (Pa) Analysis for Process Validation Lifecycle Stages.

    Science.gov (United States)

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

    This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from the specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that the underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected acceptance value (AV) range may be determined using the s_lo and s_hi values along with worst-case estimates of the mean. This approach permits a risk-based assessment of the future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products; quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines indicate that processes with Cpk > 1.33 perform well within statistical control; a centered process with Cpk = 1.33 will statistically produce fewer than 63 defective units per million, which is equivalent to an acceptance probability of >99.99%.
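
    Under the normality assumption stated above, an acceptance probability for a simple two-sided specification reduces to a difference of normal CDFs. The sketch below also reproduces the quoted figure that a centered process at Cpk = 1.33 (specification limits 4 sigma from the mean) yields roughly 63 defects per million; the specification values are illustrative.

```python
# Acceptance probability of a normally distributed quality attribute.
from scipy.stats import norm

def acceptance_probability(mean, sd, lsl, usl):
    """Probability that a single result falls inside the acceptance limits."""
    return norm.cdf(usl, mean, sd) - norm.cdf(lsl, mean, sd)

# A centered process at Cpk = 1.33: spec limits 4 standard deviations away.
pa = acceptance_probability(mean=100.0, sd=1.0, lsl=96.0, usl=104.0)
print(f"Pa = {pa:.6f}  ->  {(1 - pa) * 1e6:.0f} defects per million")
```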

  8. Reflective THz and MR imaging of burn wounds: a potential clinical validation of THz contrast mechanisms

    Science.gov (United States)

    Bajwa, Neha; Nowroozi, Bryan; Sung, Shijun; Garritano, James; Maccabi, Ashkan; Tewari, Priyamvada; Culjat, Martin; Singh, Rahul; Alger, Jeffry; Grundfest, Warren; Taylor, Zachary

    2012-10-01

    Terahertz (THz) imaging is an expanding area of research in the field of medical imaging due to its high sensitivity to changes in tissue water content. Previously reported in vivo rat studies demonstrate that spatially resolved hydration mapping with THz illumination can be used to rapidly and accurately detect fluid shifts following induction of burns and can provide highly resolved spatial and temporal characterization of edematous tissue. THz imagery of partial and full thickness burn wounds acquired by our group correlates well with burn severity and suggests that hydration gradients are responsible for the observed contrast. This research aims to confirm the dominant contrast mechanism of THz burn imaging using a clinically accepted diagnostic method that relies on tissue water content for contrast generation, in order to support the translation of this technology to clinical application. The hydration contrast sensing capabilities of magnetic resonance imaging (MRI), specifically T2 relaxation times and proton density values N(H), are well established, making MRI a suitable method for validating the hydration states of skin burns. This paper presents correlational studies performed with MR imaging of ex vivo porcine skin that confirm tissue hydration as the principal sensing mechanism in THz burn imaging. Insights from this preliminary research will be used to lay the groundwork for future, parallel MRI and THz imaging of in vivo rat models to further substantiate the clinical efficacy of reflective THz imaging in burn wound care.

  9. Accelerating cross-validation with total variation and its application to super-resolution imaging.

    Directory of Open Access Journals (Sweden)

    Tomoyuki Obuchi

    Full Text Available We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
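
    For contrast, the baseline that the approximation formula replaces is literal K-fold cross-validation, sketched below for an ℓ1-penalized regression only (the total-variation term and the perturbative formula itself are not reproduced here); the synthetic data and penalty value are assumptions.

```python
# Literal K-fold cross-validation of an L1-penalized linear regression: the
# repeated refitting this loop performs is what the paper's formula avoids.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:5] = 1.0                                   # sparse ground truth
y = X @ beta + 0.1 * rng.normal(size=200)

fold_errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Lasso(alpha=0.05).fit(X[train], y[train])
    fold_errors.append(np.mean((y[test] - model.predict(X[test]))**2))

print(f"10-fold CVE: {np.mean(fold_errors):.4f}")
```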

  10. Defects quantization in industrial radiographs by image processing

    International Nuclear Information System (INIS)

    Briand, F.Y.; Brillault, B.; Philipp, S.

    1988-01-01

    This paper deals with the industrial application of image processing to non-destructive testing by radiography. The various problems involved in the design of a numerical tool are described. This tool is intended to help radiography experts quantify defects and follow up their evolution, using numerical techniques. The sequences of processing steps that achieve defect segmentation and quantization are detailed. They are based on a thorough knowledge of radiograph formation techniques. The process uses various methods of image analysis, including textural analysis and mathematical morphology. The interface between the final product and its users is expressed in an explicit language, using the terms of radiographic expertise without exposing any processing details. The problem is thoroughly described: image formation, digitization, processing steps fitted to flaw morphology, and finally the structure of the product in progress. 12 refs. [fr]

  11. Video image processing for nuclear safeguards

    International Nuclear Information System (INIS)

    Rodriguez, C.A.; Howell, J.A.; Menlove, H.O.; Brislawn, C.M.; Bradley, J.N.; Chare, P.; Gorten, J.

    1995-01-01

    The field of nuclear safeguards has received increasing public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure the safekeeping of these materials and to verify the inventory and use of nuclear materials as reported by states that have signed the nuclear Nonproliferation Treaty throughout the world. Nuclear safeguards are measures prescribed by domestic and international regulatory bodies such as the DOE, NRC, IAEA, and EURATOM and implemented by the nuclear facility or the regulatory body. These measures include destructive and non-destructive analysis of product materials and process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards

  12. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Full Text Available On the basis of the cluster validity function based on geometric probability in literature [1, 2], we propose a cluster analysis method based on geometric probability to process large amounts of data in a rectangular area. The basic idea is top-down stepwise refinement: first into categories, then into subcategories. At all clustering levels, the cluster validity function based on geometric probability is used first to determine the clusters and the gathering direction, and then the cluster centers and the cluster borders are determined. Through TM remote sensing image classification examples, the method is compared with the supervised and unsupervised classification in ERDAS and with the cluster analysis method based on geometric probability in a two-dimensional square proposed in literature [2]. Results show that the proposed method can significantly improve the classification accuracy.

  13. Image processing applied to automatic detection of defects during ultrasonic examination

    International Nuclear Information System (INIS)

    Moysan, J.

    1992-10-01

    This work is a study of image processing applied to ultrasonic B-scan images obtained in the non-destructive testing of welds. The goal is to define what image processing techniques can bring to improve the exploitation of the collected data and, more precisely, what image processing can do to extract the meaningful echoes that enable the characterization and sizing of defects. The report presents non-destructive testing by ultrasound in the nuclear field and indicates the specificities of the propagation of ultrasonic waves in austenitic welds. It gives a state of the art of data processing applied to ultrasonic images in non-destructive evaluation. A new image analysis is then developed, based on a powerful tool, the co-occurrence matrix. This matrix makes it possible to represent, in a single representation, the relations between the amplitudes of pairs of pixels. From the matrix analysis, a new complete and automatic method has been established to define a threshold that separates echoes from noise. An automatic interpretation of the ultrasonic echoes is then possible. A complete validation has been performed with standard test pieces
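
    A minimal sketch of the co-occurrence matrix itself, the tool on which the analysis is built, is shown below using scikit-image: it counts how often pairs of amplitudes occur at a fixed pixel offset, and texture statistics are derived from it. The stand-in B-scan and the offsets are illustrative; the thesis's threshold-selection rule is not reproduced.

```python
# Gray-level co-occurrence matrix (GLCM) and two derived texture statistics.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
bscan = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)   # stand-in, 8 gray levels

# Co-occurrence of gray-level pairs at offset 1, horizontally and vertically.
glcm = graycomatrix(bscan, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)

print("contrast:   ", graycoprops(glcm, "contrast").ravel())
print("homogeneity:", graycoprops(glcm, "homogeneity").ravel())
```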

  14. PyDBS: an automated image processing workflow for deep brain stimulation surgery.

    Science.gov (United States)

    D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre

    2015-02-01

    Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92 % of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.

  15. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been used for more than one and a half years by users with different applications. It proved to be an excellent tool for helping people adapt to the system, and for standardizing and exchanging software, while preserving the flexibility to allow for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  16. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S

    2013-01-01

    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc., from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transformation, image reconstruction, image rendering, image enhancement and restoration, and more. The book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners who are doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. The book acts as a "starter kit" for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make the content accessible, particularly with techniques such as the Radon transformation and image rendering; and includes discussion of the basic principles behind the various medical imaging tec...

  17. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.

  18. Empiric validation of a process for behavior change.

    Science.gov (United States)

    Elliot, Diane L; Goldberg, Linn; MacKinnon, David P; Ranby, Krista W; Kuehl, Kerry S; Moe, Esther L

    2016-09-01

    Most behavior change trials focus on outcomes rather than deconstructing how those outcomes relate to programmatic theoretical underpinnings and intervention components. In this report, the process of change is compared for three evidence-based programs that shared theories, intervention elements and potential mediating variables. Each investigation was a randomized trial that assessed pre- and post-intervention variables using survey constructs with established reliability. Each also used mediation analyses to define relationships. The findings were combined using a pattern matching approach. Surprisingly, knowledge was a significant mediator in each program (a and b path effects, p < 0.05) of behavior change.

  19. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques

    International Nuclear Information System (INIS)

    Karimian, A.; Yazdani, S.; Askari, M. A.

    2011-01-01

    Radiographic inspection is one of the most widely employed techniques in medical testing. Because of the poor contrast and high unsharpness of radiographic film images, converting radiographs to a digital format and applying further digital image processing is the best method of enhancing the image quality and assisting the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. To digitise the chest images and process them, two classes of algorithms were used: (i) spatial-domain and (ii) frequency-domain techniques. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions become detectable. Therefore, these suggested techniques may help medical specialists to diagnose defects at an early stage and help to avoid repeat X-ray examinations of paediatric patients. (authors)
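
    The two classes of techniques can be contrasted in a minimal sketch: spatial-domain unsharp masking versus a frequency-domain high-boost filter built with the FFT. All parameters and the stand-in image are assumptions, not the study's settings.

```python
# Spatial-domain unsharp masking vs. a frequency-domain high-boost filter.
import numpy as np
from numpy.fft import fft2, fftfreq, ifft2
from scipy import ndimage

img = np.random.default_rng(6).random((128, 128))   # stand-in for a radiograph

# (i) Spatial domain: unsharp masking with a Gaussian blur.
blurred = ndimage.gaussian_filter(img, sigma=2.0)
spatial = img + 1.0 * (img - blurred)

# (ii) Frequency domain: Gaussian high-boost filter (1 at DC, ~2 at high f).
fx = fftfreq(img.shape[0])[:, None]
fy = fftfreq(img.shape[1])[None, :]
H = 2.0 - np.exp(-(fx**2 + fy**2) / (2 * 0.05**2))
frequency = np.real(ifft2(fft2(img) * H))

# The two enhanced images are strongly correlated, as expected.
print(np.corrcoef(spatial.ravel(), frequency.ravel())[0, 1])
```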

  20. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  1. Process Modeling and Validation for Metal Big Area Additive Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Simunovic, Srdjan [ORNL]; Nycz, Andrzej [ORNL]; Noakes, Mark W. [ORNL]; Chin, Charlie [Dassault Systemes]; Oancea, Victor [Dassault Systemes]

    2017-05-01

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology based on metal arc welding. A continuously fed metal wire is melted by an electric arc that forms between the wire and the substrate, and is deposited in the form of a bead of molten metal along a predetermined path. Objects are manufactured one layer at a time, starting from the base plate. The final properties of the manufactured object depend on its geometry and the metal deposition path, in addition to the basic welding process parameters. Computational modeling can be used to accelerate the development of the mBAAM technology and as a design and optimization tool for the actual manufacturing process. We have developed a finite element simulation framework for mBAAM using new features of the ABAQUS software. The computational simulation of material deposition with heat transfer is performed first, followed by a structural analysis based on the temperature history for predicting the final deformation and stress state. In this formulation, we assume that the two physics phenomena are coupled in only one direction: the temperatures drive the deformation and internal stresses, while the feedback of deformation on the temperatures is negligible. The experimental instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.

  2. Quantifying bedside-derived imaging of microcirculatory abnormalities in septic patients: a prospective validation study

    NARCIS (Netherlands)

    Boerma, E. Christiaan; Mathura, Keshen R.; van der Voort, Peter H. J.; Spronk, Peter E.; Ince, Can

    2005-01-01

    INTRODUCTION: The introduction of orthogonal polarization spectral (OPS) imaging in clinical research has elucidated new perspectives on the role of microcirculatory flow abnormalities in the pathogenesis of sepsis. Essential to the process of understanding and reproducing these abnormalities is the

  3. Multi-institutional Validation Study of Commercially Available Deformable Image Registration Software for Thoracic Images

    International Nuclear Information System (INIS)

    Kadoya, Noriyuki; Nakajima, Yujiro; Saito, Masahide; Miyabe, Yuki; Kurooka, Masahiko; Kito, Satoshi; Fujita, Yukio; Sasaki, Motoharu; Arai, Kazuhiro; Tani, Kensuke; Yagi, Masashi; Wakita, Akihisa; Tohyama, Naoki; Jingu, Keiichi

    2016-01-01

    Purpose: To assess the accuracy of commercially available deformable image registration (DIR) software for thoracic images at multiple institutions. Methods and Materials: Thoracic 4-dimensional (4D) CT images of 10 patients with esophageal or lung cancer were used. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomic landmarks (300 bronchial bifurcations) that had been manually identified. Deformable image registration was performed between the peak-inhale and peak-exhale images. Deformable image registration error was determined by calculating the difference at each landmark point between the displacement calculated by the DIR software and that given by the manually identified landmark pair. Results: Eleven institutions participated in this study: 4 used RayStation (RaySearch Laboratories, Stockholm, Sweden), 5 used MIM Software (Cleveland, OH), and 3 used Velocity (Varian Medical Systems, Palo Alto, CA). The ranges of the average absolute registration errors over all cases were as follows: 0.48 to 1.51 mm (right-left), 0.53 to 2.86 mm (anterior-posterior), 0.85 to 4.46 mm (superior-inferior), and 1.26 to 6.20 mm (3-dimensional). For each DIR software package, the average 3-dimensional registration error (range) was as follows: RayStation, 3.28 mm (1.26-3.91 mm); MIM Software, 3.29 mm (2.17-3.61 mm); and Velocity, 5.01 mm (4.02-6.20 mm). These results demonstrate that there was moderate variation among institutions, even when the DIR software was the same. Conclusions: We evaluated commercially available DIR software using thoracic 4D-CT images from multiple centers. Our results demonstrated that DIR accuracy differed among institutions because it was dependent on both the DIR software and the procedure. Our results could be helpful for establishing prospective clinical trials and for the widespread use of DIR software. In addition, for clinical care, we should try to find the optimal DIR procedure using thoracic 4D-CT images.
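
    The error metric used here can be sketched compactly: for each landmark, compare the displacement reported by the DIR software against the manually identified landmark displacement, converted to millimetres. The coordinates, spacings and displacement vectors below are placeholders, not DIR-lab data.

```python
# Landmark-based DIR error: DIR-predicted displacement minus manual "truth".
import numpy as np

voxel_mm = np.array([0.97, 0.97, 2.5])     # in-plane spacing and slice thickness

landmarks_inhale = np.array([[120, 140, 30], [88, 152, 41]])     # voxel indices
landmarks_exhale = np.array([[121, 143, 34], [90, 155, 44]])
dvf_at_landmarks = np.array([[0.8, 2.7, 9.5], [1.5, 2.4, 6.9]])  # DIR output, mm

truth_mm = (landmarks_exhale - landmarks_inhale) * voxel_mm      # manual "truth"
error = dvf_at_landmarks - truth_mm
error_3d = np.linalg.norm(error, axis=1)

print("per-axis mean |error| (mm):", np.abs(error).mean(axis=0).round(2))
print("mean 3D error (mm):", error_3d.mean().round(2))
```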

  4. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A. [The University of Iowa, Department of Biomedical Engineering, Center for Computer Aided Design, Iowa City, IA (United States)]; Shivanna, Kiran H. [The University of Iowa, Center for Computer Aided Design, Iowa City, IA (United States)]; Magnotta, Vincent A. [The University of Iowa, Department of Biomedical Engineering, Department of Radiology, Center for Computer Aided Design, Iowa City, IA (United States)]; Grosland, Nicole M. [The University of Iowa, Department of Biomedical Engineering, Department of Orthopaedics and Rehabilitation, Center for Computer Aided Design, Iowa City, IA (United States)]

    2008-01-15

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)
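
    A minimal sketch of a "relative overlap" computation between two tracers' masks is given below, using the Dice coefficient on synthetic binary volumes (the paper's exact overlap definition may differ; the masks are placeholders):

```python
# Dice overlap between two binary segmentations of the same structure.
import numpy as np

rng = np.random.default_rng(7)
tracer_a = rng.random((64, 64, 64)) > 0.5
tracer_b = tracer_a.copy()
flip = rng.random(tracer_b.shape) < 0.05     # 5% voxel-wise disagreement
tracer_b[flip] = ~tracer_b[flip]

intersection = np.logical_and(tracer_a, tracer_b).sum()
dice = 2 * intersection / (tracer_a.sum() + tracer_b.sum())
print(f"relative overlap (Dice): {dice:.3f}")
```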

  5. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    International Nuclear Information System (INIS)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A.; Shivanna, Kiran H.; Magnotta, Vincent A.; Grosland, Nicole M.

    2008-01-01

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)

  6. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    Science.gov (United States)

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise ratio present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself, and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si46 clathrates developed for hydrogen storage.

  7. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker for the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
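
    The clustered-nuclei step rests on a distance-transform watershed, which can be sketched as follows with scikit-image: threshold, compute the distance transform, seed markers at its peaks, and flood. Two overlapping synthetic disks stand in for touching nuclei; the paper's refined two-step variant is not reproduced.

```python
# Split touching objects with a distance-transform watershed.
import numpy as np
from scipy import ndimage
from skimage import feature, filters, measure, segmentation

# Two overlapping disks stand in for a pair of clustered nuclei.
yy, xx = np.mgrid[:80, :80]
img = ((yy - 35)**2 + (xx - 35)**2 < 15**2).astype(float)
img += ((yy - 45)**2 + (xx - 50)**2 < 15**2)
img = np.clip(img, 0, 1)

binary = img > filters.threshold_otsu(img)
distance = ndimage.distance_transform_edt(binary)
# Seed one marker at each distance-transform peak, then flood downhill.
peaks = feature.peak_local_max(distance, labels=measure.label(binary), min_distance=8)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=binary)

print("objects found:", labels.max())   # expect 2, not 1
```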

  8. EXPERIMENTAL VALIDATION OF CUMULATIVE SURFACE LOCATION ERROR FOR TURNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Adam K. Kiss

    2016-02-01

    Full Text Available The aim of this study is to create a mechanical model which is suitable for investigating surface quality in turning processes, based on the Cumulative Surface Location Error (CSLE), which describes the series of consecutive Surface Location Errors (SLE) in roughing operations. In the established model, the investigated CSLE depends on the current and the previously produced SLE by means of the variation of the width of cut. The behaviour of the system can be described as an implicit discrete map. The stationary Surface Location Error and its bifurcations were analysed, and a flip-type bifurcation was observed for the CSLE. Experimental verification of the theoretical results was carried out.

  9. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    Science.gov (United States)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy saving and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) materials. To strengthen a UHSS material such as boron steel, it must undergo a hot stamping process, with heating at a certain temperature for a certain time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve the optimum strength of boron steel. The experiment is conducted using a flat square hot stamping tool with a tensile dog-bone specimen as the blank product. The tensile strength and hardness are then measured as responses. The results showed that lower thickness together with higher heating temperature and heating time gives higher strength and hardness in the final product. In conclusion, boron steel blanks are able to achieve tensile strengths of up to 1200 MPa and hardness of up to 650 HV.

  10. Defense Waste Processing Facility Canister Closure Weld Current Validation Testing

    Energy Technology Data Exchange (ETDEWEB)

    Korinko, P. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]; Maxwell, D. N. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]

    2018-01-29

    Two closure welds on filled Defense Waste Processing Facility (DWPF) canisters failed to be within the acceptance criteria in the DWPF operating procedure SW4-15.80-2.3 (1). In one case, the weld heat setting was inadvertently provided to the canister at the value used for test welds (i.e., 72%) and this oversight produced a weld at a current of nominally 210 kA compared to the operating procedure range (i.e., 82%) of 240 kA to 263 kA. The second weld appeared to experience an instrumentation and data acquisition upset. The current for this weld was reported as 191 kA. Review of the data from the Data Acquisition System (DAS) indicated that three of the four current legs were reading the expected values, approximately 62 kA each, and the fourth leg read zero current. Since there is no feasible way by further examination of the process data to ascertain if this weld was actually welded at either the target current or the lower current, a test plan was executed to provide assurance that these Nonconforming Welds (NCWs) meet the requirements for strength and leak tightness. Acceptance of the welds is based on evaluation of Test Nozzle Welds (TNW) made specifically for comparison. The TNW were nondestructively and destructively evaluated for plug height, heat tint, ultrasonic testing (UT) for bond length and ultrasonic volumetric examination for weld defects, burst pressure, fractography, and metallography. The testing was conducted in agreement with a Task Technical and Quality Assurance Plan (TTQAP) (2) and applicable procedures.

  11. Perceiving pain in others: validation of a dual processing model.

    Science.gov (United States)

    McCrystal, Kalie N; Craig, Kenneth D; Versloot, Judith; Fashler, Samantha R; Jones, Daniel N

    2011-05-01

    Accurate perception of another person's painful distress would appear to be accomplished through sensitivity to both automatic (unintentional, reflexive) and controlled (intentional, purposive) behavioural expression. We examined whether observers would construe diverse behavioural cues as falling within these domains, consistent with cognitive neuroscience findings describing activation of both automatic and controlled neuroregulatory processes. Using online survey methodology, 308 research participants rated behavioural cues as "goal directed vs. non-goal directed," "conscious vs. unconscious," "uncontrolled vs. controlled," "fast vs. slow," "intentional (deliberate) vs. unintentional," "stimulus driven (obligatory) vs. self driven," and "requiring contemplation vs. not requiring contemplation." The behavioural cues were the 39 items of the PROMIS pain behaviour bank, constructed to be representative of the diverse possibilities for pain expression. Inter-item correlations among the rating scales provided evidence of sufficient internal consistency to justify a single score on an automatic/controlled dimension (excluding the inconsistent fast vs. slow scale). An initial exploratory factor analysis on 151 participant data sets yielded factors consistent with "controlled" and "automatic" actions, as well as behaviours characterized as "ambiguous." A confirmatory factor analysis using the remaining 151 data sets replicated the EFA findings, supporting theoretical predictions that observers would distinguish immediate, reflexive and spontaneous reactions (primarily facial expression and paralinguistic features of speech) from purposeful and controlled expression (verbal behaviour, and instrumental behaviour requiring ongoing, integrated responses). There are implicit dispositions to organize cues signaling pain in others into the well-defined categories predicted by dual process theory. Copyright © 2011 International Association for the Study of Pain.
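
    A rough sketch of the split-half design described above: exploratory factor analysis on one half of the sample, then a replication check on the other. sklearn's FactorAnalysis stands in here for the study's actual EFA/CFA software, which the record does not name, and the ratings matrix is simulated placeholder data:

```python
# Placeholder data: 302 raters (151 + 151 of the study's 308) x 39 PROMIS items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(302, 39))

half1, half2 = ratings[:151], ratings[151:]

# Exploratory step on the first half: extract three factors
# ("automatic", "controlled", "ambiguous" in the study).
efa = FactorAnalysis(n_components=3, random_state=0).fit(half1)
loadings_1 = efa.components_.T            # items x factors

# Crude replication check on the second half, standing in for a formal CFA:
# refit and compare the absolute loading patterns across halves.
loadings_2 = FactorAnalysis(n_components=3, random_state=0).fit(half2).components_.T
congruence = np.corrcoef(np.abs(loadings_1).ravel(),
                         np.abs(loadings_2).ravel())[0, 1]
print(f"loading-pattern congruence across halves: {congruence:.2f}")
```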

  12. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images from CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of the treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), from 5 fMRI head image datasets. We then utilized a convolutional neural network to achieve automatic segmentation of the images with deep learning, and introduced parallel computing. These approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, and are of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in grey matter or white matter. We demonstrated the great potential of this combination of image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
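
    As a minimal sketch of the deep-learning stage, the snippet below defines a small fully convolutional network mapping a single-channel head image to per-pixel labels for the tissue classes named above (background plus skull, CSF, GM, WM). The architecture is an assumption for illustration; the paper does not specify its network:

```python
# A tiny fully convolutional network for per-pixel tissue labelling; purely
# illustrative, not the architecture used in the paper.
import torch
import torch.nn as nn

class TissueSegNet(nn.Module):
    def __init__(self, n_classes=5):       # background, skull, CSF, GM, WM
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),    # per-pixel class scores
        )

    def forward(self, x):
        return self.body(x)

net = TissueSegNet()
image = torch.randn(1, 1, 128, 128)         # placeholder single-channel slice
logits = net(image)                         # shape (1, 5, 128, 128)
tissue_map = logits.argmax(dim=1)           # per-pixel tissue labels
print(tissue_map.shape)
```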

  13. QA/QC Reflected in ISO 11137; The Role of Dosimetry in the Validation Process

    International Nuclear Information System (INIS)

    Kovacs, A.

    2007-01-01

    Standardized dosimetry (ISO/ASTM standards), as a QC tool, plays a key role in the validation of sterilization and food irradiation processes, as well as in the control of the radiation processing of polymer products. In radiation processing, validation and process control (e.g. sterilization, food irradiation) depend on the measurement of absorbed dose. These measurements shall be performed using a dosimetric system or systems with a known level of accuracy and precision (European standard EN 552:1994). In the presented lecture, different aspects of operational qualification during the radiation processing of polymer products are described.

  14. Bio-inspired approach to multistage image processing

    Science.gov (United States)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence: one occurs within each visual pathway, and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates, in a compact manner, structure on different hierarchical levels of the image. At each processing stage a single output result is computed, allowing a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
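
    The record gives the decomposition only in general terms, so the sketch below uses a simple Gaussian pyramid as a stand-in: each stage smooths and downsamples the image and emits one compact, quickly available result, mirroring the single-output-per-stage idea:

```python
# Gaussian-pyramid stand-in for the paper's multistage decomposition.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_stages(image, n_stages=4):
    stages, current = [], image.astype(float)
    for _ in range(n_stages):
        smoothed = gaussian_filter(current, sigma=1.0)
        stages.append(smoothed)
        current = smoothed[::2, ::2]   # halve the resolution for the next stage
    return stages

image = np.random.rand(256, 256)       # placeholder input image
for level, stage in enumerate(build_stages(image)):
    # One compact output per stage -- e.g. an activity pattern that can be
    # matched against previously computed patterns.
    print(f"stage {level}: shape={stage.shape}, mean activity={stage.mean():.3f}")
```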

  15. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator.

    Science.gov (United States)

    Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T

    2015-01-01

    Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance for optimising patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B-mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern (LBP) histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross-validation with randomised sampling. The process was repeated 15 times, and in each round 100 images were randomly selected. The SVM classified the original untreated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance improved significantly, to an average accuracy of 0.77 (95% CI: 0.75-0.79), when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19). In conclusion, an SVM can classify static 2D B-mode ultrasound images of ovarian masses into benign and malignant categories, and the accuracy improves if texture-related LBP features extracted from the images are considered.
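
    A minimal sketch of the pipeline as described: uniform LBP histograms extracted per block of a 2 × 2 grid, concatenated, and fed to an SVM under stratified cross-validation. The LBP radius, kernel choice and the placeholder images/labels are assumptions, not the study's settings:

```python
# LBP-histogram features per block, then an SVM with stratified CV.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def lbp_block_features(image, n_blocks=2, P=8, R=1.0):
    """Uniform-LBP histogram from each cell of an n_blocks x n_blocks grid."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                     # uniform patterns plus the "other" bin
    h, w = lbp.shape
    feats = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            cell = lbp[i * h // n_blocks:(i + 1) * h // n_blocks,
                       j * w // n_blocks:(j + 1) * w // n_blocks]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # placeholders
labels = rng.integers(0, 2, size=40)   # 0 = benign, 1 = malignant (placeholder)

X = np.array([lbp_block_features(img) for img in images])
scores = cross_val_score(SVC(kernel="rbf"), X, labels,
                         cv=StratifiedKFold(n_splits=5, shuffle=True,
                                            random_state=0))
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```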

  16. Innovative approach for in-vivo ablation validation on multimodal images

    Science.gov (United States)

    Shahin, O.; Karagkounis, G.; Carnegie, D.; Schlaefer, A.; Boctor, E.

    2014-03-01

    Radiofrequency ablation (RFA) is an important therapeutic procedure for small hepatic tumors. To ensure that the target tumor is effectively treated, RFA monitoring is essential. While several imaging modalities can observe the ablation procedure, it is not clear how ablated lesions on the images correspond to actual necroses. This uncertainty contributes to the high local recurrence rates (up to 55%) after radiofrequency ablative therapy. This study investigates a novel approach to correlate images of ablated lesions with actual necroses. We mapped both intraoperative images of the lesion and a slice through the actual necrosis into a common reference frame. An electromagnetic tracking system was used to accurately match lesion slices from different imaging modalities. To minimize the effect of liver deformation, the tracking reference frame was defined inside the tissue by anchoring an electromagnetic sensor adjacent to the lesion. A validation test was performed using a phantom and showed that the end-to-end accuracy of the approach was within 2 mm. In an in-vivo experiment, intraoperative magnetic resonance imaging (MRI) and ultrasound (US) ablation images were correlated to gross and histopathology. The results indicate that the proposed method can accurately correlate in-vivo ablations across different modalities. Ultimately, this will improve the interpretation of ablation monitoring and reduce the recurrence rates associated with RFA.
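
    The core geometric idea, mapping lesion points from each modality into the frame of the anchored reference sensor, can be sketched with homogeneous transforms. The poses below are placeholder values; the actual tracking and calibration chain is not detailed in the record:

```python
# Homogeneous-transform mapping of lesion points into the reference-sensor
# frame; all poses here are placeholder values.
import numpy as np

def to_reference_frame(points_xyz, T_tracker_modality, T_tracker_ref):
    """Map Nx3 points from a modality's frame into the reference frame."""
    T_ref_modality = np.linalg.inv(T_tracker_ref) @ T_tracker_modality
    homog = np.c_[points_xyz, np.ones(len(points_xyz))]
    return (T_ref_modality @ homog.T).T[:, :3]

T_ref = np.eye(4)                      # pose of the anchored reference sensor
T_us = np.eye(4)
T_us[:3, 3] = [1.5, 0.0, -0.8]         # placeholder US-frame offset in mm

lesion_pts_us = np.array([[10.0, 12.0, 5.0],   # lesion contour points (mm)
                          [11.0, 12.5, 5.2]])
print(to_reference_frame(lesion_pts_us, T_us, T_ref))
```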

  17. An overview of medical image processing methods | Maras | African ...

    African Journals Online (AJOL)

    Various standards have been formed regarding these instruments and end products, which are being used more frequently every day. Personal computers (PCs) have reached a significant level in image processing, carrying out analysis and visualization processes on doctors' desktops that once required expensive hardware.

  18. Image processing system performance prediction and product quality evaluation

    Science.gov (United States)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  19. TEM validation of immunohistochemical staining prior to assessment of tumour angiogenesis by computerised image analysis

    International Nuclear Information System (INIS)

    Killingsworth, M.C.

    2002-01-01

    Full text: Counts of microvessel density (MVD) within solid tumours have been shown to be an independent predictor of outcome, with higher counts generally associated with a worse prognosis. These assessments are commonly performed on immunoperoxidase-stained (IPX) sections, with antibodies to CD34, CD31 and Factor VIII-related antigen routinely used as vascular markers. Tumour vascular density is thought to reflect the demand the growing neoplasm places on its feeding blood supply. Vascular density also appears to be associated with the spread of invasive cells to distant sites. The present study of tumour angiogenesis in prostate cancer specimens aims to assess new vessel growth in addition to MVD counts, the hypothesis being that an assessment which takes into account vascular migration and proliferation, as well as the number of patent vessels present, may have improved predictive power over assessments based on MVD counts alone. We are employing anti-CD34-stained IPX sections, which are digitally photographed and assessed by a computerised image analysis system. Our aim is to develop parameters whereby tumour angiogenesis may be assessed at the light microscopic level and then correlated with existing histological methods of tumour assessment such as Gleason grading. In order to use IPX-stained sections for angiogenic assessment, validation and understanding of the anti-CD34 immunostaining pattern were necessary. This involved the following steps: i) morphological assessment of the angiogenic changes present in tumour blood vessels. Morphological changes in endothelial cells and pericytes indicative of angiogenic activation are generally below the level of resolution available with light microscopy. TEM examination revealed endothelial cell budding, pericyte retraction, basement membrane duplication and endothelial sprout formation in capillaries and venules surrounding tumour glands. This information assisted with the development of parameters by which IPX sections
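
    The kind of computerised vessel count this record builds toward can be sketched as thresholding the anti-CD34 signal and counting connected vessel profiles per field. The threshold, minimum object size and field area below are assumptions for illustration only, not the study's parameters:

```python
# Threshold-and-count sketch of a computerised microvessel density (MVD)
# estimate; the threshold, size filter and field area are assumed values.
import numpy as np
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
stain = rng.random((512, 512))         # placeholder anti-CD34 intensity map

vessel_mask = stain > 0.995            # assumed positivity threshold
profiles = [r for r in regionprops(label(vessel_mask)) if r.area >= 3]

field_area_mm2 = 0.25                  # assumed microscope field area
print(f"MVD estimate: {len(profiles) / field_area_mm2:.0f} profiles per mm^2")
```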

  20. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding": users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments, where interpreted signal processing packages are less efficient.