WorldWideScience

Sample records for program image processing

  1. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers, or the instructor can direct the students to images on any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image, which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest-common-denominator image file format, FITS.
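
    As an illustration (this is not SIP's code), the following minimal Python sketch shows the kinds of basic operations the abstract describes: image combination by subtraction, multiplication by a constant, and statistics within a user-chosen box. It assumes the astropy and numpy packages; the FITS file names are hypothetical.

```python
# Not SIP itself: a sketch of FITS image arithmetic and box statistics.
import numpy as np
from astropy.io import fits

image = fits.getdata("target.fits").astype(float)   # hypothetical files
dark = fits.getdata("dark.fits").astype(float)

calibrated = image - dark        # image combination by subtraction
scaled = calibrated * 2.0        # multiplication by a constant

# Statistics for pixels within a box (here rows 100-150, cols 200-260)
box = calibrated[100:150, 200:260]
print(f"mean={box.mean():.2f}  std={box.std():.2f}  max={box.max():.2f}")
```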

  2. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

    The present study was undertaken to compare geometrically corrected images produced with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with standardized images produced using an individually custom-fabricated alignment instrument. Two non-standardized periapical films, taken with an XCP film holder only, were obtained at the lower molar region of 19 volunteers. Two standardized periapical films, taken with a customized XCP film holder with impression material on the bite-block, were obtained for each person. Geometric correction was performed with Adobe Photoshop and NIH Image; specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of the grey values of the subtracted images were used to measure image similarity. The average standard deviation of grey values of subtracted images in the standardized group was slightly lower than that of the corrected group; however, the difference was found to be statistically insignificant (p>0.05). It is concluded that NIH Image and Adobe Photoshop can be used for the correction of non-standardized films taken with an XCP film holder at the lower molar region.
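
    A minimal sketch of the similarity measure used in the study: rotate one radiograph to align it with the other, subtract, and take the standard deviation of the grey values of the difference image (a lower SD indicates greater similarity). It assumes two equally sized greyscale arrays and scipy; the rotation angle is illustrative.

```python
# Sketch: SD of a subtraction image as an image-similarity measure.
import numpy as np
from scipy.ndimage import rotate

def subtraction_sd(reference, follow_up, angle_deg):
    # Rotate the follow-up film to compensate for projection-geometry error
    aligned = rotate(follow_up.astype(float), angle_deg, reshape=False)
    diff = reference.astype(float) - aligned
    return diff.std()   # lower = more similar
```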

  3. Managing complex processing of medical image sequences by program supervision techniques

    Science.gov (United States)

    Crubezy, Monica; Aubry, Florent; Moisan, Sabine; Chameroy, Virginie; Thonnat, Monique; Di Paola, Robert

    1997-05-01

    Our objective is to offer clinicians wider access to evolving medical image processing (MIP) techniques, which are crucial for improving the assessment and quantification of physiological processes but difficult for non-specialists in MIP to handle. Based on artificial intelligence techniques, our approach consists of developing a knowledge-based program supervision system that automates the management of MIP libraries. It comprises a library of programs, a knowledge base capturing expertise about programs and data, and a supervision engine. It selects, organizes and executes the appropriate MIP programs given a goal to achieve and a data set, with dynamic feedback based on the results obtained. It also advises users in the development of new procedures chaining MIP programs. We have experimented with the approach in an application of factor analysis of medical image sequences as a means of predicting the response of osteosarcoma to chemotherapy, with both MRI and NM dynamic image sequences. As a result, our program supervision system frees clinical end-users from performing tasks outside their competence, permitting them to concentrate on clinical issues. Our approach therefore enables better exploitation of the possibilities offered by MIP and higher quality results, in terms of both robustness and reliability.

  4. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
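
    A small working example in the spirit of the paper, using scikit-image's real API: load a built-in sample image, smooth it, and detect edges.

```python
# Minimal scikit-image usage: smoothing followed by edge detection.
from skimage import data, feature, filters

image = data.camera()                      # built-in sample image
smooth = filters.gaussian(image, sigma=2)  # Gaussian smoothing
edges = feature.canny(smooth)              # boolean edge map
print(edges.sum(), "edge pixels found")
```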

  5. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  6. An Analysis of OpenACC Programming Model: Image Processing Algorithms as a Case Study

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2014-06-01

    Full Text Available Graphics processing units and similar accelerators have been intensively used in general-purpose computations for several years. In the last decade, GPU architecture and organization changed dramatically to support an ever-increasing demand for computing power. Along with changes in hardware, novel programming models have been proposed, such as NVIDIA’s Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL) by the Khronos group. Although numerous commercial and scientific applications have been developed using these two models, they still pose a significant challenge for less experienced users. There are users from various scientific and engineering communities who would like to speed up their applications without needing a deep understanding of a low-level programming model and the underlying hardware. In 2011, the OpenACC programming model was launched. Much like OpenMP for multicore processors, OpenACC is a high-level, directive-based programming model for manycore processors such as GPUs. This paper presents an analysis of the OpenACC programming model and its applicability in typical domains such as image processing. Three simple image processing algorithms were implemented for execution on the GPU with OpenACC. The results were compared with those of their sequential counterparts and are briefly discussed.

  7. Algorithms and programming tools for image processing on the MPP:3

    Science.gov (United States)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
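
    For contrast with the parallel MPP algorithms described above, here is a simple sequential region-growing sketch in Python (the MPP versions merge different sized regions in parallel, which this sketch does not attempt). The tolerance value is illustrative.

```python
# Sequential region growing: absorb 4-connected neighbours whose grey
# level lies within `tol` of the seed pixel's value.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    grown = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or abs(float(image[y, x]) - seed_val) > tol:
            continue
        grown[y, x] = True
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not grown[ny, nx]:
                queue.append((ny, nx))
    return grown
```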

  8. Algorithms and programming tools for image processing on the MPP, part 2

    Science.gov (United States)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
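
    The MPP implementations themselves are not reproduced here; as a stand-in, this short sketch shows pyramid image filtering with scikit-image's Gaussian pyramid, one of the operations the report describes.

```python
# Gaussian image pyramid: successively smoothed and downscaled layers.
from skimage import data
from skimage.transform import pyramid_gaussian

image = data.camera()
for level, layer in enumerate(pyramid_gaussian(image, max_layer=3, downscale=2)):
    print(f"level {level}: {layer.shape}")   # each level is half the size
```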

  9. Graphical user interface for image acquisition and processing

    Science.gov (United States)

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, and a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real-time. program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  10. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.
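
    A toy sketch of the planning idea behind MVP (this is not JPL's code): each program is modelled by what it requires and what it produces, and a processing chain is derived automatically from the requested goal. The program names and state labels are hypothetical.

```python
# Toy planner: chain programs until the image state satisfies the goal.
PROGRAMS = {
    "radiometric_correct": {"needs": "raw", "gives": "radiometric"},
    "geometric_correct":   {"needs": "radiometric", "gives": "geometric"},
    "map_project":         {"needs": "geometric", "gives": "projected"},
}

def plan(state, goal):
    steps = []
    while state != goal:
        # Pick a program whose precondition matches the current state
        step = next(name for name, p in PROGRAMS.items() if p["needs"] == state)
        steps.append(step)
        state = PROGRAMS[step]["gives"]
    return steps

print(plan("raw", "projected"))
# ['radiometric_correct', 'geometric_correct', 'map_project']
```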

  11. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    Image processing application requirements can often be best met in a distributed environment. This report describes a system that draws on existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing so as to be truly system independent. Although the environment has been tested using image processing applications, its design and architecture are general and modular, so it can also be used for other applications that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the tasks among the workers, which carry out the required operations and send back the results. The application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language; RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their use for resource-hungry jobs. The image processing application was developed under this environment.
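
    The report's system uses Java RMI; purely as an analogy, this Python sketch shows the same server/worker division on one machine, where a pool of workers applies a requested operation to images handed out by the main process. The operation name is hypothetical.

```python
# Analogous master/worker pattern using a local process pool (not RMI).
import numpy as np
from multiprocessing import Pool

def worker(task):
    image, operation = task
    if operation == "invert":
        return 255 - image          # the worker performs the requested operation
    raise ValueError(f"unknown operation: {operation}")

if __name__ == "__main__":
    images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(8)]
    with Pool(4) as pool:           # the "server" distributes tasks to workers
        results = pool.map(worker, [(img, "invert") for img in images])
    print(len(results), "images processed")
```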

  12. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". It is used in robotics, image enhancement, image analysis, and digital television.
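
    The brief does not give the exact algorithm; one common way to suppress hotspots is sketched below: flag pixels that sit far above their local median and replace them with that median. The threshold is illustrative.

```python
# Sketch: replace outlier ("hot") pixels with the local median value.
import numpy as np
from scipy.ndimage import median_filter

def remove_hotspots(image, threshold=50):
    med = median_filter(image.astype(float), size=3)
    hot = image.astype(float) - med > threshold   # far above neighbours
    cleaned = image.copy()
    cleaned[hot] = med[hot]
    return cleaned
```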

  13. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were implemented, such as displaying the image on the monitor, calculating CT values, and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images, and that the 8-inch flexible diskette remains a useful medium for transferring image data. (author)
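
    A sketch of two of the operations mentioned, in Python rather than the original C: converting raw pixels to CT values via a linear rescale, and extracting a profile curve along an image row. The slope and intercept values are illustrative.

```python
# Sketch: CT-value rescale and a grey-level profile curve.
import numpy as np

def ct_values(raw, slope=1.0, intercept=-1024.0):
    # Linear rescale of raw pixel values to CT (Hounsfield-like) units
    return raw.astype(float) * slope + intercept

def profile_curve(image, row):
    # Grey-level variation as a function of position along one row
    return image[row, :]
```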

  14. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    ... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial ... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner ... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so ...

  15. Image processing. Volumetric analysis with a digital image processing system [GAMMA]

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections describing various applications of volumetric analysis using image processing systems and various methods for the diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation, and echocardiography. A dynamic ventricular phantom developed for checking and calibration, to ensure safe examination of patients, is explained; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, a standardized nuclear medicine diagnostic system, and a computer system especially suited to image processing.

  16. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University ...

  17. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally, image analysis has to be performed visually and measurements made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for the analysis of image texture and periodic structure through the application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper presents the application of digital image processing to the characterization and analysis of microscopic images.
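
    A minimal numpy sketch of the Fourier-based periodicity analysis described above: the dominant peak in the power spectrum, excluding the DC term, gives the spatial frequency and orientation of a periodic texture.

```python
# Sketch: locate the dominant spatial frequency of a periodic structure.
import numpy as np

def dominant_frequency(image):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(float)))) ** 2
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy, cx] = 0                       # suppress the DC component
    py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    return py - cy, px - cx                    # peak offset from centre = frequency
```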

  18. MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges

    International Nuclear Information System (INIS)

    Krishnamurthy, R.

    2015-01-01

    This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion of the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy; to become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging; to learn solutions for consistent post-processing quality in pediatric digital radiography; to understand the key components of an effective MRI safety and quality program for the pediatric practice

  19. MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges

    Energy Technology Data Exchange (ETDEWEB)

    Krishnamurthy, R. [Texas Children’s Hospital: Pediatric MRI Quality, Artifacts, and Safety (United States)

    2015-06-15

    This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion of the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy; to become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging; to learn solutions for consistent post-processing quality in pediatric digital radiography; to understand the key components of an effective MRI safety and quality program for the pediatric practice.

  20. An Applied Image Processing for Radiographic Testing

    International Nuclear Information System (INIS)

    Ratchason, Surasak; Tuammee, Sopida; Srisroal Anusara

    2005-10-01

    Applied image processing for radiographic testing (RT) is desirable because it reduces inspection time, decreases the cost of an inspection process that otherwise requires experienced workers, and improves inspection quality. This paper presents a preliminary study of image processing for RT films, specifically weld films, and proposes an approach to determine the defects in weld images. The BMP image files are opened and processed by a computer program written in Borland C++. The software has five main methods: histogram, contrast enhancement, edge detection, image segmentation and image restoration. Each main method has several sub-methods available as selectable options. The results showed that the software can effectively detect defects and that different methods suit different radiographic images; furthermore, images are improved more when two methods are combined.
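
    A sketch of the kind of pipeline the paper describes (histogram-based enhancement, edge detection, segmentation), here using scikit-image rather than the original Borland C++ program; the weld film file name is hypothetical.

```python
# Sketch: enhance a weld radiograph, then segment strong edges as
# candidate defect outlines.
from skimage import exposure, filters, io

weld = io.imread("weld.bmp", as_gray=True)        # hypothetical RT film image
enhanced = exposure.equalize_hist(weld)           # contrast enhancement
edges = filters.sobel(enhanced)                   # edge detection
defects = edges > filters.threshold_otsu(edges)   # segmentation of strong edges
```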

  1. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter further comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, dynamic-range compression, neon, diffuse, emboss, etc., have been studied. Segmentation techniques, including point detection, line detection and edge detection, have been studied, and some smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
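
    Two of the enhancement techniques studied, negative imaging and contrast stretching, reduce to a few lines of numpy; a sketch for 8-bit greyscale input follows (this is not the Visual Basic program itself).

```python
# Sketch: negative imaging and contrast stretching for 8-bit images.
import numpy as np

def negative(image):
    return 255 - image

def contrast_stretch(image):
    # Map [min, max] to the full [0, 255] range; assumes a non-constant image
    lo, hi = image.min(), image.max()
    return ((image - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```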

  2. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and personal computers, then covers the classification of personal computer image processing systems, digital image processing, the development of personal computers and image processing, image processing systems, basic methods of image processing such as color image processing and video processing, software and interfaces, computer graphics, video images and video processing, and application cases of image processing such as satellite image processing, high-speed color transformation, and portrait work systems.

  3. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities for automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (capturing large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system, or, using initial exterior orientation and measured control points, can provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  4. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  5. STARL -- a Program to Correct CCD Image Defects

    Science.gov (United States)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images we demonstrate that the program can be implemented in wide-field survey data processing pipelines for the production of high quality color mosaics. The source code and examples are available at the STARL website.

  6. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  7. SPARX, a new environment for Cryo-EM image processing.

    Science.gov (United States)

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  8. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

    Digital images are used as an information carrier in different sciences and technologies, and there is a persistent drive to increase the number of bits per image pixel in order to obtain more information. In this paper, some methods of compression and contour detection based on two-dimensional Markov chains are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues, but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so that processing speed does not degrade as the number of bits per pixel increases. A further advantage of the methods is low consumption of energy resources: only logical procedures are used and there are no computing operations. The methods can be useful for processing images of any class and purpose in processing systems with limited time and energy resources.
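
    A numpy sketch of the decomposition the paper relies on: an n-bit image is separated into binary bit-plane images that can be processed independently (and, on suitable hardware, in parallel), then reassembled.

```python
# Sketch: bit-plane decomposition and reassembly of an integer image.
import numpy as np

def bit_planes(image, bits=8):
    # One binary image per bit, least significant first
    return [(image >> b) & 1 for b in range(bits)]

def reassemble(planes):
    return sum(plane.astype(np.uint16) << b for b, plane in enumerate(planes))
```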

  9. Image Processing Algorithms in the Secondary School Programming Education

    Science.gov (United States)

    Gerják, István

    2017-01-01

    Learning computer programming is difficult for students aged 14-18 and requires endurance and engagement. Becoming familiar with the syntax of a computer language and writing programs in it are challenges for youngsters, and understanding algorithms is a further challenge. To help students in the learning process, teachers…

  10. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    Science.gov (United States)

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone, semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% in the images requiring further human characterization while retaining 72.6% of the known fence crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
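
    This is not the authors' program, but a minimal sketch of the background-subtraction idea: the median of an image sequence approximates the empty scene, and frames are flagged by how much they differ from it. The grey-level and area thresholds are illustrative.

```python
# Sketch: flag camera-trap frames that differ strongly from a median background.
import numpy as np

def flag_candidates(frames, frac=0.01):
    """frames: list of greyscale arrays from one camera; returns flagged indices."""
    background = np.median(np.stack(frames), axis=0)
    flagged = []
    for i, frame in enumerate(frames):
        changed = np.abs(frame.astype(float) - background) > 25  # grey levels
        if changed.mean() > frac:   # enough pixels changed -> possible animal
            flagged.append(i)
    return flagged
```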

  11. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system running the Solaris operating environment was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance real-time image processing platform, is systematized. Image flow example programs for running the MaxSparc system are studied and analyzed, and the state of the art of Datacube system utilization is reviewed. In the next phase, an advanced real-time image processing platform for robot vision applications is to be developed. (author). 19 refs., 71 figs., 11 tabs.

  12. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  13. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment.

  14. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, for detecting and filling cracks, and their applications in areas such as nanotechnology and biotechnology. In the third subject, we attempt to produce 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++ programming. The fourth subject covers the use of image warping methods for finding the depth of 2-D images using affine transformation, bilinear transformation, projective mapping, mosaic warping and similarity transformation; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including methods for creating Bezier curves and surfaces with unknown distribution using only control points. At the end of our discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid; this depends on finding the Bezier curve, creating a family of curves (a surface), and then filling in between to obtain the solid form. Another part of this subject is concerned with building 3-D geometric models from physical objects using image-based techniques; the advantage of image-based techniques is that they require no expensive equipment. We use NURBS, subdivision surfaces and meshes for finding the depth of any image from one still view or 2-D image. The quality of filtering depends on the way the data is incorporated into the model; the data should be treated with ...
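
    The first subject, Wiener filtering, is available directly in SciPy; a minimal sketch follows (the thesis' own implementation is not reproduced here, and the noise parameters are illustrative).

```python
# Sketch: Wiener filtering of a noisy image with SciPy.
import numpy as np
from scipy.signal import wiener

noisy = np.random.default_rng(0).normal(128, 20, (128, 128))
denoised = wiener(noisy, mysize=5)   # local-statistics Wiener filter
```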

  15. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS).

  16. The data recording and processing programs of the MB-9101 data processing and image representing system for special cardiologic examinations

    International Nuclear Information System (INIS)

    Billing, A.; Csernay, L.

    1982-01-01

    At present, one of the most important areas of isotopic medical diagnostics is the heart. Using a scintillation gamma camera with computer-aided data processing, a representative heart cycle can be modelled from several hundred cycles through ECG gating. The series of images produced in this way shows the movement of the heart wall like a film, so the location and extent of an infarction can be determined. The equipment was developed by Gamma Works; the program system was prepared in the Isotope Laboratory of the Szeged Medical University. (author)

  17. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object-oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than one and a half years by users with different applications. It proved to be an excellent tool for helping people become familiar with the system, and for standardizing and exchanging software, while preserving the flexibility that allows for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  18. The MicroAnalysis Toolkit: X-ray Fluorescence Image Processing Software

    International Nuclear Information System (INIS)

    Webb, S. M.

    2011-01-01

    The MicroAnalysis Toolkit is an analysis suite designed for the processing of x-ray fluorescence microprobe data. The program contains a wide variety of analysis tools, including image maps, correlation plots, simple image math, image filtering, multiple energy image fitting, semi-quantitative elemental analysis, x-ray fluorescence spectrum analysis, principal component analysis, and tomographic reconstructions. To be as widely useful as possible, data formats from many synchrotron sources can be read by the program, with more formats available by request. An overview of the most common features will be presented.

  19. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S

    2013-01-01

    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc., from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transform, image reconstruction, image rendering, image enhancement and restoration, and more. The book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners doing research in DSP for medical imaging; Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed, particularly with techniques such as the Radon transform and image rendering, so the book acts as a "starter kit" that includes discussion of the basic principles behind the various medical imaging techniques.

  20. Data structures and languages in support of parallel image processing for astronomy

    International Nuclear Information System (INIS)

    Tanimoto, S.L.

    1985-01-01

    This paper discusses data structures and aspects of programming languages and systems that are relevant to the image processing of astronomy data. Emphasis is on image processing computations, because this kind of data processing is ripe for parallelism and is important in astronomy; however, some discussion of general possibilities is also presented. The role of algorithms is examined, since they are not dependent on a particular language. As an implementation of an algorithm, a program is equally tied to data structure, operations, architecture and language, and therefore the issue of programming resides in the center of the tetrahedron.

  1. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are other routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  2. Software architecture for intelligent image processing using Prolog

    Science.gov (United States)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  3. A language for image processing HILLS and its supporting system SDIP

    International Nuclear Information System (INIS)

    Suzuki, H.; Toriwaki, J.

    1984-01-01

    This paper presents a language, HILLS, and its supporting system, SDIP, for image processing. HILLS is a keyword-type language for describing image processing procedures using the subroutine packages SLIP and SPIDER. SDIP, written in FORTRAN for portability, supports programming in HILLS in interactive mode, including functions such as editing, translating HILLS into FORTRAN, error detection, and providing manual information. Results of preliminary experiments suggest that HILLS and SDIP are very useful tools for beginners and researchers in image processing application fields to develop the image analysis procedures they need.

  4. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  5. Polarization information processing and software system design for simultaneously imaging polarimetry

    Science.gov (United States)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is put on the polarization information processing methods and software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation, by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel, based on spatial and frequency domain cross-correlation; and instrument matrix calibration adopted a four-point calibration method. The software system was implemented under Windows, based on the C++ programming language, and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and software system effectively realize real-time polarization measurement of the four Stokes parameters of a scene, and that the polarization information processing methods effectively improve the polarization detection accuracy.
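
    A sketch of the polarization information extraction step, using the standard relations for the linear Stokes parameters from four registered intensity images at 0, 45, 90 and 135 degrees (the original system is C++; the array names here are hypothetical).

```python
# Sketch: linear Stokes parameters from four polarization channels.
import numpy as np

def stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                        # angle of polarization
    return s0, s1, s2, dolp, aop
```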

  6. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication; in addition, theory indicates that the technology has high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We built the core image processing hardware platform around TI's high-performance DSP chip, the TMS320DM642, designed the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor, and used Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We designed the video capture driver program based on TI's class mini-driver and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system has the advantages of high capture resolution and fast processing speed. The speed of network transmission is up to 100 Mbps.

  7. Open source software in a practical approach for post processing of radiologic images.

    Science.gov (United States)

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The selected programs all meet basic requirements such as free availability, stand-alone operation, a graphical user interface, ease of installation, and advanced features beyond simple display. The data import, data export, metadata, 2D viewer, 3D viewer, supported platform and usability capabilities of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score of eight or higher. Among them, five obtained a score of 9 (3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render), while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.
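
    For readers who want to process DICOM files programmatically rather than through the GUI programs reviewed, a small example with pydicom (a free, open-source Python library not covered by the review; the file name is hypothetical) follows.

```python
# Sketch: read a DICOM image and apply the modality rescale, if present.
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")           # hypothetical file
pixels = ds.pixel_array.astype(float)
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
calibrated = pixels * slope + intercept        # e.g. Hounsfield units for CT
```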

  8. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practice, but avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
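
    Flatfield correction, mentioned twice above, amounts to dividing out the recorded illumination pattern; a numpy sketch follows (the dark frame is optional, and the epsilon guard is an implementation detail, not from the chapter).

```python
# Sketch: flatfield correction of uneven illumination.
import numpy as np

def flatfield_correct(raw, flat, dark=None):
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:            # optional dark-frame subtraction
        raw = raw - dark
        flat = flat - dark
    gain = flat / flat.mean()       # normalized illumination pattern
    return raw / np.maximum(gain, 1e-6)
```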

  9. ORIS: the Oak Ridge Imaging System program listings

    International Nuclear Information System (INIS)

    Bell, P.R.; Dougherty, J.M.

    1978-04-01

    The Oak Ridge Imaging System (ORIS) is a general purpose access, storage, processing and display system for nuclear medicine imaging with rectilinear scanner and gamma camera. This volume contains listings of the PDP-8/E version of ORIS Version 2. The system is designed to run under the Digital Equipment Corporation's OS/8 monitor in 16K or more words of core. System and image file mass storage is on RK8E disk; longer-time image file storage is provided on DECtape. Another version of this program exists for use with the RF08 disk, and a more limited version is for DECtape only. This latter version is intended for non-medical imaging.

  10. Iplt--image processing library and toolkit for the electron microscopy community.

    Science.gov (United States)

    Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas

    2003-01-01

    We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object oriented paradigms and implemented using the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper intends to motivate developers within the community to participate in this on-going project. The iplt homepage can be found at http://www.iplt.org.

  11. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of several predefined classes, and is also a large research area. This paper overviews its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration to tackle such a difficult target.
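
    As an illustration of the gray-level transformation task mentioned in the review, a linear contrast stretch can be written in a few lines of NumPy; this sketch is generic and not taken from the paper:

        # Simple gray-level transformation: linear contrast stretch
        # mapping the occupied range [min, max] onto the full [0, 255].
        import numpy as np

        def contrast_stretch(img: np.ndarray) -> np.ndarray:
            lo, hi = img.min(), img.max()
            if hi == lo:                  # flat image: nothing to stretch
                return img.copy()
            stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
            return stretched.astype(np.uint8)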

  12. Performance enhancement of various real-time image processing techniques via speculative execution

    Science.gov (United States)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
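
    The idea of speculative execution with rollback can be sketched in ordinary Python threads: start work on a predicted control value and recompute only if the prediction turns out wrong. This is an illustrative toy, not the authors' real-time implementation; the predicted threshold and the "exact" analysis below are stand-ins:

        # Toy sketch of speculative execution: threshold with a predicted
        # value while the exact value is computed; roll back on mismatch.
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        PREDICTED_T = 128                     # assumption on the control value

        def threshold(img, t):
            return img > t

        def exact_threshold(img):
            return int(img.mean())            # stand-in for expensive analysis

        def speculative_threshold(img):
            with ThreadPoolExecutor() as pool:
                spec = pool.submit(threshold, img, PREDICTED_T)  # optimistic work
                t = exact_threshold(img)                         # runs concurrently
                if t == PREDICTED_T:
                    return spec.result()      # assumption held: reuse the result
                return threshold(img, t)      # rollback: redo with the true value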

  13. Parallel Processing of Images in Mobile Devices using BOINC

    Science.gov (United States)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.

  14. Parallel Processing of Images in Mobile Devices using BOINC

    Directory of Open Access Journals (Sweden)

    Curiel Mariela

    2018-04-01

    Full Text Available Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.

  15. The vectorization of a ray tracing program for image generation

    Science.gov (United States)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
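
    The intersection step that was vectorized on the CYBER 205 maps naturally onto array operations; a hedged NumPy stand-in (not the original CYBER code) that tests a whole batch of rays against one sphere in a single series of vector operations looks like this:

        # Vectorized ray-sphere intersection: one batch of rays tested
        # against a sphere with array operations instead of a per-ray loop.
        import numpy as np

        def intersect_sphere(origins, dirs, center, radius):
            """Return hit distances (np.inf where a ray misses)."""
            oc = origins - center                        # (N, 3)
            b = 2.0 * np.einsum("ij,ij->i", dirs, oc)    # per-ray dot products
            c = np.einsum("ij,ij->i", oc, oc) - radius**2
            disc = b * b - 4.0 * c                       # dirs assumed unit length
            t = np.where(disc >= 0,
                         (-b - np.sqrt(np.abs(disc))) / 2.0,
                         np.inf)
            return np.where(t > 0, t, np.inf)

        # One ray per pixel of a 512x512 image plane can be tested in one call.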

  16. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  17. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
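
    Several of the CCD reduction steps listed in these contents (bias subtraction, dark subtraction, flat fielding) chain into a few array operations; a generic NumPy sketch, not taken from the chapter, is:

        # Classic CCD calibration chain; all inputs are 2-D frames
        # of equal shape, exposure times in seconds.
        import numpy as np

        def reduce_ccd(raw, bias, dark, flat, exptime, dark_exptime):
            debias = raw - bias
            # Scale the dark frame to the science exposure time.
            dedark = debias - (dark - bias) * (exptime / dark_exptime)
            flatn = flat - bias
            flatn = flatn / flatn.mean()      # normalized flat field
            return dedark / flatn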

  18. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    Science.gov (United States)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the
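
    The (line, sample, DN) model described here corresponds directly to 2-D array indexing; a small illustrative sketch (NumPy is an assumption of the example, not part of IMDISP):

        # A digital image as a 2-D array indexed by line (row)
        # and sample (column), holding DN values.
        import numpy as np

        image = np.zeros((480, 640), dtype=np.uint8)  # 480 lines x 640 samples
        image[100, 200] = 255          # set the DN at line 100, sample 200
        dn = image[100, 200]           # read a DN value back
        print(f"line=100 sample=200 DN={dn}")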

  19. Procedure for physicist's scanning in the image processing system of bubble chambers

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1984-01-01

    The algorithm of the physicist's scanning program for processing data from photographic images in experiments using bubble chambers is described. The program allows one to perform sorting or selection of specific events for subsequent processing, and identification of separate particles by bubble density along the track or by the character of the decay. The fraction of protons separated automatically was 97%. The program has been used for processing 50 thousand events at the BEBC chamber.

  20. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  1. Artificial Intelligence In Processing A Sequence Of Time-Varying Images

    Science.gov (United States)

    Siler, W.; Tucker, D.; Buckley, J.; Hess, R. G.; Powell, V. G.

    1985-04-01

    A computer system is described for unsupervised analysis of five sets of ultrasound images of the heart. Each set consists of 24 frames taken at 33 millisecond intervals. The images are acquired in real time with computer control of the ultrasound apparatus. After acquisition the images are segmented by a sequence of image-processing programs; features are extracted and stored in a version of the Carnegie- Mellon Blackboard. Region classification is accomplished by a fuzzy logic expert system FLOPS based on OPS5. Preliminary results are given.

  2. Microcomputer-based image processing system for CT/MRI scans II

    International Nuclear Information System (INIS)

    Kwok, J.C.K.; Yu, P.K.N.; Cheng, A.Y.S.; Ho, W.C.

    1991-01-01

    This paper reports that a microcomputer-based image processing system is used to digitize and process serial sections of CT/MRI scans and reconstruct three-dimensional images of brain structures and brain lesions. The grabbed images also serve as templates, and vital regions with different risk values are traced out for 3D reconstruction. A knowledge-based system employing rule-based programming has been built to help identify brain lesions and to help plan trajectories for operations. The volumes of the lesions are also automatically determined. Such a system is very useful for archiving medical skills, monitoring tumor size, forecasting survival and outcome, and consistent neurosurgical planning.

  3. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  4. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  5. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  6. Image Processing of Welding Procedure Specification and Pre-process program development for Finite Element Modelling

    International Nuclear Information System (INIS)

    Kim, K. S.; Lee, H. J.

    2009-11-01

    The PRE-WELD program, which automatically generates the input file for finite element analysis of 2D butt welding at a dissimilar metal weld part, was developed. This program is a pre-processor of the FEM code for analyzing the residual stress at welding parts. Even if users do not have detailed knowledge of FEM modelling, they can easily create the ABAQUS input by entering the shape data of the welding part and welding parameters such as weld current and voltage. By using the PRE-WELD program, we can greatly save the time and effort of preparing the ABAQUS input for residual stress analysis at welding parts, and generate an exact input without human error.

  7. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstroctions for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment, using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU provides computation about 194 times faster than a single CPU, and the CPU-GPU cluster provides computation about 46 times faster than a CPU-only cluster. These meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)
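
    The message-passing layer of such a cluster can be sketched with mpi4py: rank 0 splits the data, every rank processes its block, and the results are gathered. This is a schematic stand-in for the reported MPI/CUDA system; the placeholder filter takes the place of the actual reconstruction kernel:

        # Schematic MPI work distribution; run with e.g.
        #   mpiexec -n 4 python this_script.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        blocks = None
        if rank == 0:
            data = np.random.rand(size * 64, 256)        # stand-in projection data
            blocks = np.array_split(data, size, axis=0)  # one block per rank

        block = comm.scatter(blocks, root=0)     # distribute the work
        processed = np.sqrt(block)               # placeholder for the GPU kernel
        result = comm.gather(processed, root=0)  # collect partial results

        if rank == 0:
            volume = np.vstack(result)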

  8. using fuzzy logic in image processing

    International Nuclear Information System (INIS)

    Ashabrawy, M.A.F.

    2002-01-01

    Due to the unavoidable merger of computers and mathematics, signal processing in general and image processing in particular have greatly improved and advanced. Signal processing deals with the processing of any signal data for use by a computer, while image processing deals with all kinds of images (just images). Image processing involves the manipulation of image data for better appearance and viewing by people; consequently, it is a rapidly growing and exciting field to be involved in today. This work takes an applications-oriented approach to image processing. The applications are the maps and documents of the first Egyptian research reactor (ETRR-1), X-ray medical images, and fingerprint images. Since filters generally work on continuous ranges rather than discrete values, fuzzy logic techniques are more convenient. These techniques are powerful in image processing and can deal with one-dimensional (1-D) signals as well as two-dimensional (2-D) images.

  9. Imaging of jaw with dental CT software program: Normal Anatomy

    International Nuclear Information System (INIS)

    Kim, Myong Gon; Seo, Kwang Hee; Jung, Hak Young; Sung, Nak Kwan; Chung, Duk Soo; Kim, Ok Dong; Lee, Young Hwan

    1994-01-01

    Dental CT software programs can provide reformatted cross-sectional and panoramic images that cannot be obtained with conventional axial and direct coronal CT scans. The purpose of this study is to describe the method of the technique and to identify the precise anatomy of the jaw. We evaluated 13 mandibles and 7 maxillae of 15 subjects without bony disease who were being considered for endosseous dental implants. Reformatted images obtained by the use of a bone algorithm on a GE HiSpeed Advantage CT scanner were retrospectively reviewed for detailed anatomy of the jaw. Anatomy related to the neurovascular bundle (mandibular foramen, inferior alveolar canal, mental foramen, canal for incisive artery, nutrient canal, lingual foramen and mylohyoid groove), muscular insertion (mylohyoid line, superior and inferior genial tubercle and digastric fossa) and other anatomy (submandibular fossa, sublingual fossa, contour of alveolar process, oblique line, retromolar fossa, temporal crest and retromolar triangle) were well delineated in the mandible. In the maxilla, anatomy related to the neurovascular bundle (greater palatine foramen and groove, nasopalatine canal and incisive foramen) and other anatomy (alveolar process, maxillary sinus and nasal fossa) were also well delineated. Reformatted images using the dental CT software program provided excellent delineation of the jaw anatomy. Therefore, the dental CT software program can play an important role in the preoperative assessment of the mandible and maxilla for dental implants and other surgical conditions.

  10. ImgLib2--generic image processing in Java.

    Science.gov (United States)

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji-Plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics Online. saalfeld@mpi-cbg.de

  11. Using modern imaging techniques to old HST data: a summary of the ALICE program.

    Science.gov (United States)

    Choquet, Elodie; Soummer, Remi; Perrin, Marshall; Pueyo, Laurent; Hagan, James Brendan; Zimmerman, Neil; Debes, John Henry; Schneider, Glenn; Ren, Bin; Milli, Julien; Wolff, Schuyler; Stark, Chris; Mawet, Dimitri; Golimowski, David A.; Hines, Dean C.; Roberge, Aki; Serabyn, Eugene

    2018-01-01

    Direct imaging of extrasolar systems is a powerful technique to study the physical properties of exoplanetary systems and understand their formation and evolution mechanisms. The detection and characterization of these objects are challenged by their high contrast with their host star. Several observing strategies and post-processing algorithms have been developed for ground-based high-contrast imaging instruments, enabling the discovery of directly-imaged and spectrally-characterized exoplanets. The Hubble Space Telescope (HST), a pioneer in directly imaging extrasolar systems, has nonetheless often been limited to the detection of bright debris disk systems, with sensitivity limited by the difficulty of implementing an optimal PSF subtraction strategy, which is readily offered on ground-based telescopes in pupil tracking mode. The Archival Legacy Investigations of Circumstellar Environments (ALICE) program is a consistent re-analysis of the 10-year-old coronagraphic archive of HST's NICMOS infrared imager. Using post-processing methods developed for ground-based observations, we used the whole archive to calibrate PSF temporal variations and improve NICMOS's detection limits. We have now delivered ALICE-reprocessed science products for the whole NICMOS archive back to the community. These science products, as well as the ALICE pipeline, were used to prototype the JWST coronagraphic data reduction pipeline. The ALICE program has enabled the detection of 10 faint debris disk systems never imaged before in the near-infrared and several substellar companion candidates, all of which we are in the process of characterizing through follow-up observations with both ground-based facilities and HST-STIS coronagraphy. In this publication, we provide a summary of the results of the ALICE program, advertise its science products and discuss the prospects of the program.

  12. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    Increasing use of computerized methods for diagnostic imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage, documentation, and communication as the main points of interest for the application of digital image processing. As to the purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  13. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.

  14. Neutron beam applications - A development of real-time imaging processing for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whoi Yul; Lee, Sang Yup; Choi, Min Seok; Hwang, Sun Kyu; Han, Il Ho; Jang, Jae Young [Hanyang University, Seoul (Korea)

    1999-08-01

    This research is sponsored and supported by KAERI as a part of "Application of Neutron Radiography Beam". The main theme of the research is to develop a non-destructive inspection system for studying the real-time behaviour of dynamic motion using a neutron beam, with the aid of a special-purpose real-time image processing system that allows capturing an image of the internal structure of a specimen. Currently, most off-the-shelf image processing programs designed for visible light or X-rays are not adequate for applications that require the neutron beam generated by an experimental nuclear reactor. In addition, the study of the dynamic motion of a specimen is severely constrained by such image processing systems. In this research, a special image processing system suited for such applications was developed, which not only supplements commercial image processing systems but also allows neutron beam images to be used directly in the system. 18 refs., 21 figs., 1 tab. (Author)

  15. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Yutao Wu; Daoliang Li; Zhenbo Li; Wenzhu Yang

    2014-01-01

    In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image blocking, image pre-decision, image background extra...

  16. IMAGE Programming Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Stehfest, E; De Waal, L.

    2010-09-15

    This document describes the requirements and guidelines for the software of the IMAGE system. The motivation for this report was a substantial restructuring of the source code for IMAGE version 2.5. The requirements and guidelines relate to design considerations as well as to aspects of maintainability and portability. The design considerations determine guidelines about subjects, such as program structure, model hierarchy, the use of data modules, and the error message system. Maintainability and portability aspects determine the guidelines on, for example, the Fortran 90 standard, naming conventions, code lay-out, and internal documentation.

  17. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have gone over to direct digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academia. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  18. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be capable, in an automated system, of 'reading' X-ray pictures. (orig.) [de

  19. Image perception and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be capable, in an automated system, of 'reading' X-ray pictures.

  20. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together for dealing with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the center line is extracted based on the image skeleton and branches are removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can be unwrapped. Finally, the image processing method was used to automatically measure bubbles in rubber under negative pressure, which could be used in tire inspection.

  1. Differential Spatio-temporal Multiband Satellite Image Clustering using K-means Optimization With Reinforcement Programming

    Directory of Open Access Journals (Sweden)

    Irene Erlyn Wina Rachmawan

    2015-06-01

    Full Text Available Deforestation is one of the crucial issues in Indonesia because Indonesia now has the world's highest deforestation rate. On the other hand, multispectral imagery provides a rich source of data for studying spatial and temporal variability of the environment, such as deforested areas. This research presents differential image processing methods for detecting the natural change of deforestation. Our differential image processing algorithms extract and indicate areas automatically. The proposed method produces information extracted from multiband satellite images and calculates the deforested area by year using a temporal dataset. Yet multiband satellite images consist of data sizes too big to be handled easily for segmentation. Commonly, K-means clustering is considered a powerful clustering algorithm because of its ability to cluster big data. However, K-means is sensitive to its initially generated centroids, which can lead to bad performance. In this paper we propose a new approach to optimize K-means clustering using Reinforcement Programming in order to cluster multispectral images. We build a new mechanism for generating initial centroids by implementing exploration and exploitation knowledge from Reinforcement Programming. This optimization leads to a better result for the K-means data clusters. We selected multispectral images from Landsat 7 over the past ten years in Medawai, Borneo, Indonesia, and segmented two classes of area, deforested land and forest. We performed a series of experiments and compared the results of K-means using Reinforcement Programming for initial centroid optimization against standard K-means without the optimization process. Keywords: Deforestation, Multispectral images, Landsat, automatic clustering, K-means.
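
    The clustering step itself, standard K-means over the band values of each pixel, can be sketched with scikit-learn; here explicitly supplied initial centroids stand in for the Reinforcement-Programming-optimized ones proposed in the paper, which is not reproduced:

        # Standard K-means over per-pixel band vectors; the init array is
        # an assumed stand-in for the paper's optimized initial centroids.
        import numpy as np
        from sklearn.cluster import KMeans

        bands = np.random.rand(128, 128, 6)     # stand-in for a Landsat 7 tile
        pixels = bands.reshape(-1, 6)           # one 6-band sample per pixel

        init = np.array([pixels.min(axis=0), pixels.max(axis=0)])
        km = KMeans(n_clusters=2, init=init, n_init=1).fit(pixels)

        labels = km.labels_.reshape(128, 128)   # 0/1 map, e.g. forest vs. deforested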

  2. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization; Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization; Psychophysical Vision Properties; Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model; Photometry and Colorimetry; Photometry; Color Matching; Colorimetry Concepts; Color Spaces. DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction; Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems; Image Quantization; Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization. DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization; Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...
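
    As a worked example of one topic from these contents, the two-dimensional Fourier transform of a synthetic image takes two NumPy calls (this sketch is not from the book):

        # 2-D DFT of a synthetic image, with the DC term shifted to
        # the center and a log scale for display.
        import numpy as np

        img = np.zeros((256, 256))
        img[96:160, 96:160] = 1.0                      # a centered square

        spectrum = np.fft.fftshift(np.fft.fft2(img))   # 2-D DFT
        magnitude = np.log1p(np.abs(spectrum))         # log-scaled magnitude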

  3. Mirion--a software package for automatic processing of mass spectrometric images.

    Science.gov (United States)

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. Subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and more easily than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
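
    The core transformation the text describes, turning per-pixel mass spectra into a spatial distribution image for one m/z value, can be sketched in NumPy; the data layout below is an assumption for illustration, not Mirion's internal format:

        # Build an ion image: sum each pixel's intensity within a
        # narrow m/z window and place it at the pixel's (x, y) position.
        import numpy as np

        def ion_image(spectra, shape, mz_target, tol=0.25):
            """spectra: iterable of (x, y, mz_array, intensity_array)."""
            img = np.zeros(shape)
            for x, y, mz, inten in spectra:
                window = (mz >= mz_target - tol) & (mz <= mz_target + tol)
                img[y, x] = inten[window].sum()
            return img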

  4. Processing Chains for Desis and Enmap Imaging Spectroscopy Data: Similarities and Differences

    Science.gov (United States)

    Storch, T.; Müller, R.

    2017-10-01

    The Earth Observation Center (EOC) of the German Aerospace Center (DLR) realizes operational processors for the DESIS (DLR Earth Sensing Imaging Spectrometer) and EnMAP (Environmental Mapping and Analysis Program) high-resolution imaging spectroscopy remote sensing satellite missions. DESIS is planned to be launched in 2018 and EnMAP in 2020. The developmental (namely schedule, deployment, and team) and functional (namely processing levels, algorithms in processors, and archiving approaches) similarities and differences of the fully automatic processors are analyzed. The processing chains generate high-quality standardized image products for users at different levels, taking characterization and calibration data into account. EOC has long-lasting experience with the airborne and spaceborne acquisition, processing, and analysis of hyperspectral image data. It turns out that both activities strongly benefit from each other.

  5. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  6. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Now image processing boards for personal computers as well as image processing systems for workstations have been developed and are widely applied to medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides a computer system with vision, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to various fields such as remote measurement, object recognition and decision-making in adverse environments, and analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are also analysed with a view to proposing ideas for future applications. (Author)

  7. ORIS: the Oak Ridge Imaging System program listings. [Nuclear medicine imaging with rectilinear scanner and gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Bell, P. R.; Dougherty, J. M.

    1978-04-01

    The Oak Ridge Imaging System (ORIS) is a general purpose access, storage, processing and display system for nuclear medicine imaging with rectilinear scanner and gamma camera. This volume contains listings of the PDP-8/E version of ORIS Version 2. The system is designed to run under the Digital Equipment Corporation's OS/8 monitor in 16K or more words of core. System and image file mass storage is on RK8E disk; longer-time image file storage is provided on DECtape. Another version of this program exists for use with the RF08 disk, and a more limited version is for DECtape only. This latter version is intended for non-medical imaging.

  8. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  9. Gamma camera image processing and graphical analysis mutual software system

    International Nuclear Information System (INIS)

    Wang Zhiqian; Chen Yongming; Ding Ailian; Ling Zhiye; Jin Yongjie

    1992-01-01

    The GCCS gamma camera image processing and graphical analysis system is a special interactive software system. It is mainly used to analyse various patient data acquired from a gamma camera. This system runs on the IBM PC, PC/XT or PC/AT. It consists of several parts: system management, data management, device management, the program package, and user programs. The system provides two kinds of user interface: command menus and command characters. It is easy to modify and extend this system because it is highly modularized. The user programs include almost all the clinical protocols in current use.

  10. Revealing the programming process

    DEFF Research Database (Denmark)

    Bennedsen, Jens; Caspersen, Michael Edelgaard

    2005-01-01

    One of the most important goals of an introductory programming course is that the students learn a systematic approach to the development of computer programs. Revealing the programming process is an important part of this; however, textbooks do not address the issue -- probably because the textbook medium is static and therefore ill-suited to expose the process of programming. We have found that process recordings in the form of captured narrated programming sessions are a simple, cheap, and efficient way of providing the revelation. We identify seven different elements of the programming

  11. Processing a programming handbook for visual designers and artists

    CERN Document Server

    Reas, Casey

    2014-01-01

    The visual arts are rapidly changing as media moves into the web, mobile devices, and architecture. When designers and artists learn the basics of writing software, they develop a new form of literacy that enables them to create new media for the present, and to imagine future media that are beyond the capacities of current software tools. This book introduces this new literacy by teaching computer programming within the context of the visual arts. It offers a comprehensive reference and text for Processing (www.processing.org), an open-source programming language that can be used by students, artists, designers, architects, researchers, and anyone who wants to program images, animation, and interactivity. Written by Processing's cofounders, the book offers a definitive reference for students and professionals. Tutorial chapters make up the bulk of the book; advanced professional projects from such domains as animation, performance, and installation are discussed in interviews with their creators. This secon...

  12. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. At first, the captured color images were transformed into gray-scale images, followed by inversion of the gray scale of the transformed images; then the whole image was divided into several blocks. The next step is to judge, through image pre-decision, which image blocks contain the target foreign fiber image. The blocks that possibly contain target images were then segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented image blocks were connected to get an intact and clear foreign fiber target image. The experimental results show that this method of segmentation has the advantage of accuracy and speed over other segmentation methods. On the other hand, this method also connects target images that were fractured, therefore yielding an intact and clear foreign fiber target image.
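
    The blocking-plus-Otsu pipeline described above can be sketched with scikit-image; this is an illustrative reconstruction under assumed parameters (block size, contrast pre-decision threshold), not the authors' code:

        # Block-wise segmentation: cheap statistics decide whether a block
        # may contain a fiber; only those blocks get an Otsu threshold.
        import numpy as np
        from skimage.filters import threshold_otsu

        def segment_blocks(gray, block=64, contrast_min=10):
            mask = np.zeros_like(gray, dtype=bool)
            h, w = gray.shape
            for r in range(0, h, block):
                for c in range(0, w, block):
                    b = gray[r:r + block, c:c + block]
                    if b.max() - b.min() < contrast_min:
                        continue                      # pre-decision: background
                    t = threshold_otsu(b)             # per-block Otsu threshold
                    mask[r:r + block, c:c + block] = b > t
            return mask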

  13. Developing the Image and Public Reputation of Universities: The Managerial Process.

    Science.gov (United States)

    Davies, John L.; Melchiori, Gerlinda S.

    1982-01-01

    Managerial processes used in developing programs to improve an institution's public image are outlined, drawing on both theory and experience in college administration and public relations. Eight case studies provide illustrations. A five-stage managerial plan is presented. (MSE)

  14. Microscopic image processing system for measuring nonuniform film thickness profiles: Image scanning ellipsometry

    International Nuclear Information System (INIS)

    Liu, A.H.; Plawsky, J.L.; Wayner, P.C. Jr.

    1993-01-01

    The long-term objective of this research program is to determine the stability and heat transfer characteristics of evaporating thin films. The current objective is to develop and use a microscopic image-processing system (IPS) which has two parts: an image analyzing interferometer (IAI) and an image scanning ellipsometer (ISE). The primary purpose of this paper is to present the basic concept of the ISE, which is a novel technique to measure the two-dimensional thickness profile of a non-uniform thin film, from several nm up to several μm, in a steady state as well as in a transient state. It is a full-field imaging technique which can study every point on the surface simultaneously with high spatial resolution and thickness sensitivity, i.e., it can measure and map the 2-D film thickness profile. The ISE was tested by measuring the thickness profile and the refractive index of a non-uniform solid film.

  15. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (sc1) to obtain down-scaled images, the down-scaled images are subjected to motion-compensated processing (ME UPC) to obtain motion-compensated images, and the motion-compensated images are up-scaled (sc2) to obtain up-scaled images.

  16. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained, they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  17. Survey of using GPU CUDA programming model in medical image analysis

    Directory of Open Access Journals (Sweden)

    T. Kalaiselvi

    2017-01-01

    Full Text Available With technological development in the medical industry, the data to be processed is expanding rapidly and computation time also increases due to many factors, such as 3D/4D treatment planning, the increasing sophistication of MRI pulse sequences, and the growing complexity of algorithms. The graphics processing unit (GPU) addresses these problems and provides solutions through its features, such as high computation throughput, high memory bandwidth, support for floating-point arithmetic, and low cost. Compute Unified Device Architecture (CUDA) is a popular GPU programming model introduced by NVIDIA for parallel computing. This review paper briefly discusses the need for GPU CUDA computing in medical image analysis. The GPU performance of existing algorithms is analyzed and the computational gain is discussed. A few open issues, hardware configurations and optimization principles of existing methods are discussed. The survey concludes with a few optimization techniques applied to medical imaging algorithms on the GPU. Finally, the limitations and future scope of GPU programming are discussed.
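
    A minimal instance of the CUDA programming model discussed in the survey can be written with Numba's CUDA bindings (an assumption of the example; it requires an NVIDIA GPU): each GPU thread brightens one pixel in parallel.

        # One thread per pixel: a toy brightening kernel in Numba CUDA.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def brighten(img, gain):
            x, y = cuda.grid(2)                  # global thread coordinates
            if x < img.shape[0] and y < img.shape[1]:
                img[x, y] = min(img[x, y] * gain, 255.0)

        image = np.random.rand(512, 512).astype(np.float32) * 255
        d_img = cuda.to_device(image)            # host -> device transfer
        threads = (16, 16)
        blocks = (32, 32)                        # 512 / 16 in each dimension
        brighten[blocks, threads](d_img, 1.2)
        result = d_img.copy_to_host()            # device -> host transfer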

  18. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Image quality dependence on image processing software in computed radiography. ... Agfa CR readers use MUSICA software, and an upgrade with significantly different image ...

  19. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    Science.gov (United States)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  20. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database was analyzed using a feature extraction algorithm. This paper presents the results of ...

  1. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  2. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial environment. Then a dissemination of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  3. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical image processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques including volume rendering and virtual endoscopy are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state-of-the-art of medical image processing methods, practical implications for the radiologist's daily work and future aspects. (orig.) [de

  4. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users can, by investing much less money, make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advancements in the field of computer graphics and consequently, 'Image Processing' has emerged as a separate independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudo-colour images from computer aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo colours in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays to search for imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional dark room. (author)

  5. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  6. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    This book presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  7. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  8. Signal and Image Processing Research at the Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R S; Poyneer, L A; Kegelmeyer, L M; Carrano, C J; Chambers, D H; Candy, J V

    2009-06-29

    Lawrence Livermore National Laboratory is a large, multidisciplinary institution that conducts fundamental and applied research in the physical sciences. Research programs at the Laboratory run the gamut from theoretical investigations, to modeling and simulation, to validation through experiment. Over the years, the Laboratory has developed a substantial research component in the areas of signal and image processing to support these activities. This paper surveys some of the current research in signal and image processing at the Laboratory. Of necessity, the paper does not delve deeply into any one research area, but an extensive citation list is provided for further study of the topics presented.

  9. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    Science.gov (United States)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp will decrease once its service time has exceeded its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light levels which affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effects on image characteristics. Research results showed that light intensity affects two variables which are important for classification, namely the area and color of the captured image. The image processing program was able to determine correctly the weight and classification of tomatoes when the light level was 30 lx to 140 lx.
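
    The two features the study identifies (area and color of the captured image) can be extracted with a few lines of array code. The following is a minimal sketch, assuming an RGB photograph in which the tomato is redder than the background; the file name and threshold are hypothetical and not taken from the paper.

      import numpy as np
      from PIL import Image

      rgb = np.asarray(Image.open("tomato.png").convert("RGB"), dtype=float)  # hypothetical file

      # Segment the fruit: red channel sufficiently above green (threshold is an assumption).
      mask = (rgb[..., 0] - rgb[..., 1]) > 30

      area_px = int(mask.sum())           # area feature, in pixels
      mean_rgb = rgb[mask].mean(axis=0)   # color feature: mean R, G, B over the fruit region
      print(area_px, mean_rgb)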

  10. TECHNOLOGIES OF BRAIN IMAGES PROCESSING

    Directory of Open Access Journals (Sweden)

    O.M. Klyuchko

    2017-12-01

    The purpose of the present research was to analyze modern methods of processing biological images implemented before storage in databases for biotechnological purposes. The databases were further incorporated into web-based digital systems. Examples of such information systems are described in the work for two levels of biological material organization: databases for storing data of histological analysis and of the whole brain. Methods of neuroimage processing for an electronic brain atlas were considered. It was shown that certain pathological features can be revealed by histological image processing. Several medical diagnostic techniques (for certain brain pathologies, etc.) as well as a few biotechnological methods are based on such effects. Algorithms of image processing were suggested. The electronic brain atlas is described in detail in a form convenient for professionals in different fields. Approaches to brain atlas elaboration, a “composite” scheme for large deformations, and several methods of mathematical image processing are described as well.

  11. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: First to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D project co-sponsored by BK Medical ApS., with the commercial goal to improve the image quality of BK Medicals scanners. Currently BK Medical employ a simple conventional delay-and-sum beamformer to generate...

  12. The EnMAP-Box—A Toolbox and Application Programming Interface for EnMAP Data Processing

    Directory of Open Access Journals (Sweden)

    Sebastian van der Linden

    2015-09-01

    The EnMAP-Box is a toolbox developed for the processing and analysis of data acquired by the German spaceborne imaging spectrometer EnMAP (Environmental Mapping and Analysis Program). It is developed with two aims in mind in order to guarantee full usage of future EnMAP data, i.e., (1) extending the EnMAP user community and (2) providing access to recent approaches for imaging spectroscopy data processing. The software is freely available and offers a range of tools and applications for the processing of spectral imagery, including classical processing tools for imaging spectroscopy data as well as powerful machine learning approaches and interfaces for the integration of methods available in scripting languages. A special developer version includes the full open source code, an application programming interface and an application wizard for easy integration and documentation of new developments. This paper gives an overview of the EnMAP-Box for users and developers, explains typical workflows along with an application example and exemplifies the concept for making it a frequently used and constantly extended platform for imaging spectroscopy applications.

  13. Scientific Programs and Funding Opportunities at the National Institute of Biomedical Imaging and Bioengineering

    Science.gov (United States)

    Baird, Richard

    2006-03-01

    The mission of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) is to improve human health by promoting the development and translation of emerging technologies in biomedical imaging and bioengineering. To this end, NIBIB supports a coordinated agenda of research programs in advanced imaging technologies and engineering methods that enable fundamental biomedical discoveries across a broad spectrum of biological processes, disorders, and diseases and have significant potential for direct medical application. These research programs dramatically advance the Nation's healthcare by improving the detection, management and, ultimately, the prevention of disease. The research promoted and supported by NIBIB also is strongly synergistic with other NIH Institutes and Centers as well as across government agencies. This presentation will provide an overview of the scientific programs and funding opportunities supported by NIBIB, highlighting those that are of particular importance to the field of medical physics.

  14. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as the Wiener, Lucy-Richardson and blind techniques, this approach is brand new. In this method, the coded aperture processing method was for the first time used independently of the point spread function of the image diagnostic system. In this way, the technical obstacles caused in traditional coded pinhole image processing by the uncertainty of the point spread function of the image diagnostic system were overcome. Then, based on the theoretical study, simulation of penumbral imaging and image reconstruction was carried out, providing fairly good results. In the visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.
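
    For contrast with the CT-style approach, here is a minimal sketch of one of the traditional baselines the abstract names, Lucy-Richardson deconvolution, written directly from its multiplicative update rule; the Gaussian point spread function and iteration count are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import convolve, gaussian_filter

      def lucy_richardson(blurred, psf, n_iter=30):
          # Multiplicative update: estimate <- estimate * [ (data / (estimate * psf)) correlated with psf ]
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full_like(blurred, blurred.mean())
          for _ in range(n_iter):
              denom = convolve(estimate, psf, mode="nearest")
              estimate *= convolve(blurred / np.maximum(denom, 1e-12), psf_mirror, mode="nearest")
          return estimate

      # Hypothetical test: blur a synthetic object with a small Gaussian PSF, then restore.
      truth = np.zeros((64, 64)); truth[28:36, 28:36] = 1.0
      psf = np.zeros((9, 9)); psf[4, 4] = 1.0
      psf = gaussian_filter(psf, sigma=1.5); psf /= psf.sum()
      restored = lucy_richardson(convolve(truth, psf, mode="nearest"), psf)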

  15. Practical Approach for Hyperspectral Image Processing in Python

    Science.gov (United States)

    Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.

    2018-04-01

    Python is a very popular programming language among data scientists around the world. Python can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline which is easy to use and agile enough for different solutions. We propose a Python pipeline which is built on the packages xarray, Holoviews and scikit-learn. We have also developed some tools of our own: MaskAccessor, VisualisorAccessor and a spectral index library. They likewise fulfill our goal of easy and agile data processing. In this paper we present our processing pipeline and demonstrate it in practice.
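
    The abstract names the building blocks (xarray for the data cube, scikit-learn for analysis); the sketch below shows how such a pipeline might fit together, with a hypothetical cube, a normalized-difference index and a PCA step standing in for the authors' actual tools.

      import numpy as np
      import xarray as xr
      from sklearn.decomposition import PCA

      # Hypothetical hyperspectral cube: 40 bands over a 50 x 60 pixel scene.
      cube = xr.DataArray(np.random.rand(40, 50, 60), dims=("band", "y", "x"))

      # A simple spectral index: normalized difference of two hypothetical bands.
      ndi = (cube.isel(band=30) - cube.isel(band=10)) / (cube.isel(band=30) + cube.isel(band=10))

      # Flatten pixels to samples and reduce the spectral dimension with PCA.
      pixels = cube.stack(pixel=("y", "x")).transpose("pixel", "band").values
      scores = PCA(n_components=3).fit_transform(pixels)            # (n_pixels, 3)
      pc_maps = scores.T.reshape(3, cube.sizes["y"], cube.sizes["x"])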

  16. Developing Quality Assurance Processes for Image-Guided Adaptive Radiation Therapy

    International Nuclear Information System (INIS)

    Yan Di

    2008-01-01

    Quality assurance has long been implemented in radiation treatment as systematic actions necessary to provide adequate confidence that the radiation oncology service will satisfy the given requirements for quality care. The existing reports from the American Association of Physicists in Medicine Task Groups 40 and 53 have provided highly detailed QA guidelines for conventional radiotherapy and treatment planning. However, advanced treatment processes recently developed with emerging high technology have introduced new QA requirements that have not been addressed previously in the conventional QA program. Therefore, it is necessary to expand the existing QA guidelines to also include new considerations. Image-guided adaptive radiation therapy (IGART) is a closed-loop treatment process that is designed to include the individual treatment information, such as patient-specific anatomic variation and delivered dose assessed during the therapy course in treatment evaluation and planning optimization. Clinical implementation of IGART requires high levels of automation in image acquisition, registration, segmentation, treatment dose construction, and adaptive planning optimization, which brings new challenges to the conventional QA program. In this article, clinical QA procedures for IGART are outlined. The discussion focuses on the dynamic or four-dimensional aspects of the IGART process, avoiding overlap with conventional QA guidelines

  17. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  18. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software for image processing was written in Macro-Assembler and ran under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskette. Image display on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection and the color histogram were obtained in 1.6 sec to 67 sec, indicating that a commercialized personal computer has sufficient ability for routine clinical MRI processing. (author)
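
    Of the operations listed, unsharp-mask contrast enhancement is easy to state compactly. Here is a minimal sketch in Python with SciPy, standing in for whatever the original Macro-Assembler routine did; the blur width and gain are hypothetical.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(img, sigma=2.0, amount=1.5):
          # Sharpened = original + amount * (original - blurred)
          blurred = gaussian_filter(img.astype(float), sigma)
          return np.clip(img + amount * (img - blurred), 0, 255)

      mr_slice = np.random.randint(0, 256, (256, 256)).astype(float)  # hypothetical 256 x 256 image
      enhanced = unsharp_mask(mr_slice)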

  19. An experimental study of fluidization behavior using flow visualization and image processing

    International Nuclear Information System (INIS)

    Laan, Flavio T. van der; Sefidvash, Farhang; Cornelius, Vanderli

    2000-01-01

    A program of experimental study of the fluidization of heavy spherical pellets with water, using an image processing technique, has been started in the Nuclear Engineering Department of the Federal University of Rio Grande do Sul. Fluidization for application in nuclear reactors requires very detailed knowledge of bed behavior, as the reactivity depends closely on the porosity of the fluidized bed. A small modular nuclear reactor concept with a suspended core is under study. A modified version of the reactor makes the reactor core conical in shape to produce a non-fluctuating bed and consequently guarantee the dynamic stability of the reactor. Steel balls of 5 mm diameter are fluidized with water in a conical Plexiglass tube. A pump circulates room-temperature water in a loop from the tank into the fluidization system and back to the tank. A controllable valve controls the flow velocity. A high-speed digital CCD camera captures images of the pellets moving in the fluidization tube. At different flow velocities, individual pellets can be tracked by processing the sequential frames. A DVT digital tape recorder stores the images, which are acquired through an interface board into a microcomputer. A special program processes the data later on. Different image-treatment algorithms determine the velocity fields of the pellets. The behavior of the pellets under different flow velocities and porosities is carefully studied. (author)
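
    The tracking step described (following individual pellets across sequential frames) can be sketched as labelling bright regions and matching centroids to the nearest centroid in the next frame. This is an illustrative reconstruction, not the authors' algorithm; the threshold and matching distance are hypothetical.

      import numpy as np
      from scipy.ndimage import label, center_of_mass

      def centroids(frame, thresh=0.5):
          # Label connected bright regions (pellets) and return their centroids.
          lab, n = label(frame > thresh)
          return np.array(center_of_mass(frame, lab, range(1, n + 1)))

      def displacements(prev_pts, next_pts, max_dist=10.0):
          # Greedy nearest-neighbour matching between frames; one vector per matched pellet.
          moves = []
          for p in prev_pts:
              d = np.linalg.norm(next_pts - p, axis=1)
              j = d.argmin()
              if d[j] <= max_dist:
                  moves.append(next_pts[j] - p)
          return np.array(moves)

      f0 = np.zeros((40, 40)); f0[10, 10] = f0[20, 30] = 1.0    # hypothetical frame t
      f1 = np.zeros((40, 40)); f1[12, 10] = f1[22, 31] = 1.0    # hypothetical frame t+1
      velocities = displacements(centroids(f0), centroids(f1))  # in pixels per frame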

  20. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  1. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  2. Image processing of airborne geophysical data: a potential exploration tool for atomic minerals

    International Nuclear Information System (INIS)

    Shanti Kumar, C.; Bhairam, C.L.; Kak, S.N.; Achar, K.K.

    1993-01-01

    Data sets obtained from airborne gamma-ray spectrometric (AGRS) and aeromagnetic (AM) surveys, after necessary corrections, are usually presented as profiles or as contour maps for interpretation in mineral exploration and geological analysis. Currently, imaging of geophysical data sets is extensively used, as it has many advantages over conventional techniques. For the application of image processing techniques to the AGRS and AM data, software programs were customized to convert the digital data into a form compatible with the satellite image processing system (SIPS). The geophysical data have been imaged and rectified to a polyconic projection using the cubic convolution resampling technique. While imaging, the radioelemental concentration values are rescaled to 256 grey levels. Software for statistical analysis of the radioelements and for printing coloured paper images has also been developed. Some of the image processing techniques used include generation of colour composite images for preparing radioelemental (eU, eTh and K) images and radioelemental colour composite images (K, eTh, eU), enabling display of a combined radioelemental distribution. Aeromagnetic data, on the other hand, are displayed as grey-tone, pseudo-colour and shaded-relief images. Many other image enhancement techniques used to improve the display for further interpretation comprise band ratioing, band combinations, filtering, look-up table manipulation, and other similar functions. Advanced image processing techniques, such as principal component analysis (PCA) for understanding geochemical and geological phenomena, and the hue, saturation and intensity (HSI) transformation for integrating radioelemental data with the corresponding satellite images, facilitated display of radioelemental images draped over the satellite image. Statistics of the radioelements and inter-elemental relationships have been obtained. The paper deals with the methodology adopted in the

  3. Volumetric image processing: A new technique for three-dimensional imaging

    International Nuclear Information System (INIS)

    Fishman, E.K.; Drebin, B.; Magid, D.; St Ville, J.A.; Zerhouni, E.A.; Siegelman, S.S.; Ney, D.R.

    1986-01-01

    Volumetric three-dimensional (3D) image processing was performed on CT scans of 25 normal hips, and image quality and potential diagnostic applications were assessed. In contrast to surface detection 3D techniques, volumetric processing preserves every pixel of transaxial CT data, replacing the gray scale with transparent "gels" and shading. Anatomically accurate 3D images can be rotated and manipulated in real time, including simulated tissue layer "peeling" and mock surgery or disarticulation. This pilot study suggests that volumetric rendering is a major advance in signal processing of medical image data, producing a high quality, uniquely maneuverable image that is useful for fracture interpretation, soft-tissue analysis, surgical planning, and surgical rehearsal

  4. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different cores. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental data sets can be validated against each other. Main conclusions include: image post-processing can improve image quality; image post-processing can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF is within a small range.

  5. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, an intermediate-level step in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. From the experimental results, we found our approach to be effective. The results show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
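
    A minimal sketch of the marker-controlled watershed step, using scikit-image; the synthetic slice, Otsu threshold and distance-peak markers are illustrative assumptions rather than the authors' exact pipeline.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_otsu
      from skimage.segmentation import watershed

      img = np.zeros((80, 80)); img[20:40, 20:40] = 1.0; img[35:60, 45:70] = 1.0  # hypothetical slice

      binary = img > threshold_otsu(img)              # foreground mask
      distance = ndi.distance_transform_edt(binary)   # distance to background

      # Markers: one seed per object, taken from strong distance-transform peaks.
      markers, _ = ndi.label(distance > 0.6 * distance.max())

      # Flood from the markers over the inverted distance map, restricted to the mask.
      labels = watershed(-distance, markers, mask=binary)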

  8. Scilab and SIP for Image Processing

    OpenAIRE

    Fabbri, Ricardo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of Image Processing and Analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic ...

  9. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

    Conventional radiation therapy portal images give low contrast. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used: optical density scan, histogram equalized scan, linear histogram based scan, linear histogram independent scan, linear optical density scan, logarithmic scan, and power square root scan. The histogram distributions of the scanned images were plotted and the ranges of the gray scale were compared among the scan types. The scanned images were then transformed to the gray window by the palette fitting method and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and processed by the Gray Scale Expansion (GSE) method. The patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distribution showed that minimum and maximum gray scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic and square root methods, respectively. Out of the 256 gray-scale steps, only 7 to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images improved. Experiments performed with patient images showed improved identification of organs by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improved the image contrast of a linacgram. This indicates that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale. As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
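
    Gray-scale expansion as described (stretching the small occupied portion of the gray scale to the full range) reduces to a linear contrast stretch. Below is a minimal sketch; the percentile clipping is an added assumption to make the stretch robust to outliers.

      import numpy as np

      def grayscale_expansion(img, lo_pct=1, hi_pct=99):
          # Map the occupied intensity range [lo, hi] linearly onto the full 0..255 range.
          lo, hi = np.percentile(img, [lo_pct, hi_pct])
          stretched = (img.astype(float) - lo) / max(hi - lo, 1e-12)
          return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

      portal = np.random.normal(120, 8, (128, 128))   # hypothetical low-contrast portal image
      enhanced = grayscale_expansion(portal)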

  10. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  11. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of all the experimental results and analysis carried out on medical images of diabetic-related causes. The experimental investigations have been carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create an awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. It is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  12. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  13. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in the technology for the processing of medical images, to the development of better and cheaper computers and, additionally, to advances in the systems for the communication of medical images, the acquisition, storage and handling of digital images has acquired great importance in all branches of medicine. This article seeks to introduce some fundamental ideas of the processing of digital images, including such aspects as their representation, storage, improvement, visualization and understanding

  14. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
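
    The final step described, identifying peaks in the energy levels of the post-processed frame, is a standard local-maximum search. Here is a minimal sketch with SciPy; the neighbourhood size and significance threshold are hypothetical, not taken from the patent.

      import numpy as np
      from scipy.ndimage import maximum_filter

      def detect_peaks(energy, size=5, n_sigma=5.0):
          # A pixel is a peak if it equals the local maximum over a size x size
          # neighbourhood and exceeds a global significance threshold.
          thresh = energy.mean() + n_sigma * energy.std()
          local_max = maximum_filter(energy, size=size) == energy
          return np.argwhere(local_max & (energy > thresh))

      frame = np.random.default_rng(0).normal(0, 1, (100, 100))  # hypothetical post-processed frame
      frame[40, 60] += 10.0                                      # planted subsurface response
      print(detect_peaks(frame))                                 # recovers [[40 60]]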

  15. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is being widely used in the medical field and has already become very important, especially when used for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are explained first through the use of several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
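
    The simulated annealing loop itself is compact. The sketch below applies it to a toy binary-image denoising energy (data fidelity plus neighbour smoothness), which is far simpler than the 3D vessel reconstruction described; the energy model and cooling schedule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      observed = (rng.random((32, 32)) < 0.5).astype(int)   # hypothetical noisy binary image

      def energy(x, obs, beta=2.0):
          # Data term: disagreement with the observation; smoothness term: neighbour disagreement.
          data = np.sum(x != obs)
          smooth = np.sum(x[:, 1:] != x[:, :-1]) + np.sum(x[1:, :] != x[:-1, :])
          return data + beta * smooth

      x, T = observed.copy(), 2.0
      for _ in range(20000):
          i, j = rng.integers(32, size=2)
          before = energy(x, observed)
          x[i, j] ^= 1                            # propose flipping one pixel
          delta = energy(x, observed) - before
          if delta > 0 and rng.random() >= np.exp(-delta / T):
              x[i, j] ^= 1                        # reject: undo the flip
          T *= 0.9997                             # geometric cooling schedule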

  16. An instrument control and data analysis program for imaging and in vivo spectroscopy

    International Nuclear Information System (INIS)

    Roos, M.S.; Mushlin, R.A.; Veklerov, E.; Port, J.D.; Ladd, C.; Harrison, C.G.

    1987-01-01

    The spectrometer control and data processing environment and the libraries of macros designed are used to support imaging and in vivo spectroscopy at the Lawrence Berkeley Laboratory, the Massachusetts Institute of Technology, and the Brigham and Women's Hospital, in systems ranging from a 2-T animal spectrometer to a 0.5-T whole-body imager. Experiments that have been successfully implemented include multi-slice multi-echo imaging, fast steady-state free precession imaging, and 31P spectroscopy. The flexibility of the macro programming structure allowed very rapid development of these macro libraries. We have demonstrated that instrumentation developed around standard hosts, buses, and operating systems can yield research tools with performance comparable to highly specialized systems. The combination of low macro instruction overhead, provision of user access to system internals, and a rich command set controlling basic acquisition and processing functions provides a foundation on which libraries of macros may be built to serve a broad range of users, perhaps more easily than a system with larger sets of less primitive commands and a more limited batch processor. Well-defined program interfaces for macros and for installing commands, as well as the ability to modify instrument control code, significantly broaden the range of experiments accessible to the researcher

  17. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    Science.gov (United States)

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
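
    The conversion the article describes, from DICOM to a presentation-friendly format such as JPEG, can today be scripted in a platform-independent way. A minimal sketch using the pydicom and Pillow libraries follows; the file names and the full-range display window are hypothetical.

      import numpy as np
      import pydicom
      from PIL import Image

      ds = pydicom.dcmread("slice.dcm")          # hypothetical input file
      arr = ds.pixel_array.astype(float)

      # Window the raw values into 8 bits for display (full-range window assumed).
      arr = (arr - arr.min()) / max(arr.max() - arr.min(), 1e-12)
      Image.fromarray((arr * 255).astype(np.uint8)).save("slice.jpg")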

  18. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay explains the present state of CT image processing technology as regards recognition, acquisition and visualization for computer-assisted diagnosis (CAD) and surgery (CAS), and offers a future view. Medical image processing has a history that runs from the discovery of X-rays and their application to diagnostic radiography, through their combination with the computer for CT and multi-detector row CT, to 3D/4D images for CAD and CAS. CAD is performed based on recognition of the normal anatomical structure of the human body, detection of possible abnormal lesions and visualization of their numerical figures as images. Actual instances of CAD images are presented here for the chest (lung cancer), the abdomen (colorectal cancer) and the future body atlas (models of organs and diseases for imaging), a recent national project: computational anatomy. CAS involves surgical planning technology based on 3D images, navigation of the actual procedure and of endoscopy. As guidance for beginners in technological image processing, the national and international communities are described, such as the related academic societies, regularly held congresses, textbooks and workshops, together with topics in the field like the computed anatomy of an individual patient for CAD and CAS and its data security and standardization. In the authors' view, future preventive medicine will be based on imaging technology, e.g., ultimately daily-life CAD for individuals, as exemplified by the present body thermometer and home sphygmomanometer, to monitor one's routine physical condition. (T.T.)

  19. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
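
    The discrete distance transforms referred to can be computed with the classic two-pass min-sum recursion, one concrete instance of the min-sum difference equations discussed. Below is a minimal sketch of the city-block version, assuming a binary input mask.

      import numpy as np

      def cityblock_distance_transform(mask):
          # Two-pass min-sum recursion: d(p) = min over neighbours q of d(q) + 1,
          # with d = 0 on the object (mask == True).
          d = np.where(mask, 0, mask.size).astype(int)
          rows, cols = d.shape
          for i in range(rows):                 # forward pass: top-left to bottom-right
              for j in range(cols):
                  if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                  if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
          for i in range(rows - 1, -1, -1):     # backward pass: bottom-right to top-left
              for j in range(cols - 1, -1, -1):
                  if i < rows - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                  if j < cols - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
          return d

      obj = np.zeros((7, 7), bool); obj[3, 3] = True
      print(cityblock_distance_transform(obj))   # city-block distance to the centre pixel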

  1. iamxt: Max-tree toolbox for image processing and analysis

    Directory of Open Access Journals (Sweden)

    Roberto Souza

    2017-01-01

    The iamxt is an array-based max-tree toolbox implemented in Python using the NumPy library for array processing. It has state-of-the-art methods for building and processing the max-tree, and a large set of visualization tools that allow viewing the tree and the contents of its nodes. The array-based programming style and max-tree representation used in the toolbox make it simple to use. The intended audience of this toolbox includes mathematical morphology students and researchers who want to develop research in the field, and image processing researchers who need a toolbox that is simple to use and easy to integrate into their applications.

  2. AN IMAGE ENHANCEMENT ENVIRONMENT DESIGNED AT 32-BIT VERSION OF VISUAL BASIC 4 PROGRAMMING LANGUAGE USING THE WIN32 API FUNCTIONS

    Directory of Open Access Journals (Sweden)

    Aydın KIZILKAYA

    2000-01-01

    In this paper, an image enhancement environment designed in the 32-bit version of the Visual Basic 4 programming language, using the Win32 API (Application Programming Interface) functions and the MDI (Multiple Document Interface) programming technique that is a main principle of the Windows system, is investigated. Image enhancement algorithms can be easily applied in this environment and each result obtained can be shown separately in frames in the same environment. The image enhancement techniques used in this environment operate in the spatial domain. The techniques available in this program are contrast stretching, histogram equalization, thresholding, negative imaging, low-pass filtering, high-pass filtering and median filtering. In the filtering of the images, convolution techniques are utilized.
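
    As an illustration of one of the listed spatial-domain techniques, here is a minimal sketch of histogram equalization for an 8-bit image, written in Python purely for compactness rather than in Visual Basic.

      import numpy as np

      def histogram_equalize(img):
          # Map each gray level through the normalized cumulative histogram (CDF).
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-12)
          lut = (cdf * 255).astype(np.uint8)
          return lut[img]

      img = np.random.randint(80, 120, (64, 64), dtype=np.uint8)   # hypothetical low-contrast image
      equalized = histogram_equalize(img)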

  3. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: in these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves, published January 2017. [Figure: the AIJ image display; a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration: generate master flat, dark, and bias frames; image arithmetic: combine images via subtraction, addition, division, multiplication, etc.; stack editing: easily perform operations on a series of images; image stabilization and image alignment features; precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates; WCS coordinates: determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net; macro and plugin support: write your own macros; multi-aperture photometry

  4. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system structured around the Digital Equipment Corporation's PDP-8 minicomputer which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views

  5. Tracker: Image-Processing and Object-Tracking System Developed

    Science.gov (United States)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in
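
    A minimal sketch of the correlation-based tracking mentioned, using normalized cross-correlation from scikit-image to locate a template from one frame in the next; the synthetic frames and the small noise floor (added so the correlation is well defined everywhere) are hypothetical.

      import numpy as np
      from skimage.feature import match_template

      rng = np.random.default_rng(1)
      frame0 = rng.random((60, 60)) * 0.05; frame0[20:26, 30:36] += 1.0   # object at (20, 30)
      frame1 = rng.random((60, 60)) * 0.05; frame1[23:29, 33:39] += 1.0   # moved by (+3, +3)

      template = frame0[18:28, 28:38]          # window selected around the object in frame 0
      corr = match_template(frame1, template)  # normalized cross-correlation map
      dy, dx = np.unravel_index(corr.argmax(), corr.shape)
      print(dy - 18, dx - 28)                  # estimated shift, expected (3, 3)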

  6. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained-and heavily illustrated-this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  8. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  9. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  10. Image processing for medical diagnosis using CNN

    International Nuclear Information System (INIS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving the accuracy of both diagnostic procedures and surgical operations. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the bio-medical field real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore techniques able to speed up the processing play an important role. From this point of view, in this work a novel approach to image processing has been developed. The new idea is to use Cellular Neural Networks to investigate diagnostic images, such as Magnetic Resonance Imaging, Computed Tomography, and fluorescent cDNA microarray images.

  11. Process control program development

    International Nuclear Information System (INIS)

    Dameron, H.J.

    1985-01-01

    This paper details the development and implementation of a "Process Control Program" at Duke Power's three nuclear stations - Oconee, McGuire, and Catawba. Each station is required by Technical Specification to have a "Process Control Program" (PCP) to control all dewatering and/or solidification activities for radioactive wastes.

  12. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of the single minority-value pixel among the majority-value pixels of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging), with a real-signature test target, and conventional methods for the more linear part (displaying). The application to…
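
    To make the 2×2 minority-pixel principle concrete, the Python sketch below scans a binary image for corner-points and records the corner direction of the minority pixel. This is a plausible reading of the criterion as described, not the authors' implementation; the function name and the direction encoding are assumptions.

        import numpy as np

        def corner_points(binary):
            """Locate 2x2 blocks holding exactly one minority pixel and return
            (row, col, direction), where direction indexes the raveled block:
            0=NW, 1=NE, 2=SW, 3=SE."""
            h, w = binary.shape
            cps = []
            for y in range(h - 1):
                for x in range(w - 1):
                    block = binary[y:y + 2, x:x + 2].ravel()
                    s = int(block.sum())
                    if s == 1:      # one foreground pixel among three background
                        cps.append((y, x, int(np.flatnonzero(block == 1)[0])))
                    elif s == 3:    # one background pixel among three foreground
                        cps.append((y, x, int(np.flatnonzero(block == 0)[0])))
            return cps

    Comparing the CP lists of the ground-truth and degraded images over a region of interest would then give a localized estimate of the PCR.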

  13. Analysis of the Education Program Approval Process: A Program Evaluation.

    Science.gov (United States)

    Fountaine, Charles A.; And Others

    A study of the education program approval process involving the Veterans Administration (VA) and the State Approving Agencies (SAAs) had the following objectives: to describe the present education program approval process; to determine time and costs associated with the education program approval process; to describe the approval process at…

  14. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  15. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques are presented, along with the software documentation, for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
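
    To make the recursive-versus-FFT remark concrete, the sketch below implements spatial-domain recursive filtering with a first-order exponential smoother run in both directions along each axis. It is a generic illustration under that assumption, not the report's matched filters; each pass costs a fixed number of operations per pixel regardless of the effective kernel width, which is the source of the speed advantage over FFT-based implementations.

        import numpy as np
        from scipy.signal import lfilter

        def recursive_smooth(image, alpha=0.25):
            """Separable first-order IIR smoothing:
            y[n] = alpha * x[n] + (1 - alpha) * y[n-1],
            applied causally and anti-causally along rows, then columns."""
            b, a = [alpha], [1.0, -(1.0 - alpha)]
            out = image.astype(float)
            for axis in (1, 0):                      # rows, then columns
                out = lfilter(b, a, out, axis=axis)  # causal pass
                out = np.flip(lfilter(b, a, np.flip(out, axis), axis=axis), axis)
            return out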

  16. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    Science.gov (United States)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer system, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are already stored in the database. A blind child first reads the embossed character (object) with the help of his fingers, then speaks the answer (the name of the character, shape, etc.) into the microphone. Once the child's voice command is received by the microphone, an image is taken by the camera and processed by a MATLAB® program, developed with the help of the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the help of the Data Acquisition and Signal Processing toolboxes; it records and processes the commands of the blind child.

  17. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    Science.gov (United States)

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and their source code is freely available, and therefore they can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, this last group of free applications has limitations in its use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost, which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  18. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  19. Organization of bubble chamber image processing

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1985-01-01

    A programme for bubble chamber image processing is described. The programme is written in FORTRAN; it was developed for the DEC-10 computer and is designed for the operation of the semi-automatic processing-measurement projects PUOS-2 and PUOS-4. Formalization of the image processing permits its use for different physical experiments.

  20. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
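
    For readers unfamiliar with ADI, the NumPy/SciPy sketch below shows the classical reference-subtraction idea that the more elaborate PCA-based algorithms refine. It is a conceptual illustration only, not VIP's API; the function name and the derotation sign convention are assumptions.

        import numpy as np
        from scipy.ndimage import rotate

        def classical_adi(cube, parallactic_angles):
            """Classical ADI: cube is (n_frames, ny, nx), angles in degrees."""
            psf_ref = np.median(cube, axis=0)      # quasi-static stellar PSF
            residuals = cube - psf_ref             # star removed; companions trace arcs
            derotated = np.array([
                rotate(frame, -ang, reshape=False, order=3)  # sign is convention-dependent
                for frame, ang in zip(residuals, parallactic_angles)
            ])
            return np.median(derotated, axis=0)    # companions add up, speckles average out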

  1. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.
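
    As a pointer for the isosurface analysis mentioned above, the snippet below extracts an isosurface from a toy scalar field using the marching cubes implementation in scikit-image; it is a modern stand-in chosen for illustration, not the software discussed in the review.

        import numpy as np
        from skimage import measure

        # Toy scalar field: distance from the origin, so level=10 is a sphere
        z, y, x = np.mgrid[-16:16, -16:16, -16:16]
        field = np.sqrt(x**2 + y**2 + z**2)

        # Triangle mesh of the isosurface at value 10.0
        verts, faces, normals, values = measure.marching_cubes(field, level=10.0)
        print(len(verts), "vertices,", len(faces), "triangles")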

  2. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    International Nuclear Information System (INIS)

    Chai, X; Liu, L; Xing, L

    2014-01-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize the images and RT structures belonging to this patient, and perform image segmentation running on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web…

  3. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chai, X; Liu, L; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States)

    2014-06-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize the images and RT structures belonging to this patient, and perform image segmentation running on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web…
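
    The three-server design above reduces to plain HTTP message passing, so a computation server can be approximated in a few lines. The sketch below is a toy Python stand-in using only the standard library; the port, route-free POST handling and JSON task format are invented for illustration (the paper's computation servers were written in Delphi, Python and PHP).

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class ComputeHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Read a JSON task posted by the web server
                length = int(self.headers.get("Content-Length", 0))
                task = json.loads(self.rfile.read(length))
                # A real server would run segmentation or dose calculation here
                result = {"task_id": task.get("task_id"), "status": "done"}
                body = json.dumps(result).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), ComputeHandler).serve_forever()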

  4. IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.

    Science.gov (United States)

    Domínguez, César; Heras, Jónathan; Pascual, Vico

    2017-05-01

    The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of the flows in capillary vessels, is presented. Generally, the time needed to process the FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode, of volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, organization of the threads and the optimizations applied are shown. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
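
    The core of structural FdOCT processing is one Fourier transform per acquired spectrum, which is what makes the workload so amenable to massively parallel GPUs. The sketch below shows that step on the CPU in NumPy, at the frame size quoted above; the background-subtraction detail and function name are illustrative, and a GPU port would replace NumPy with a GPU array library offering the same batch-FFT interface.

        import numpy as np

        def fdoct_bscan(spectra):
            """spectra: (n_ascans, n_pixels), e.g. (2000, 2048) per frame.
            Returns a log-intensity structural B-scan."""
            spectra = spectra - spectra.mean(axis=0)   # remove fixed-pattern background
            depth = np.fft.fft(spectra, axis=1)        # spectral -> depth, one FFT per A-scan
            half = spectra.shape[1] // 2               # keep the non-mirrored half
            return 20 * np.log10(np.abs(depth[:, :half]) + 1e-12)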

  6. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    ...-year graduate students in almost any technical discipline. The leading textbook in its field for more than twenty years, it continues its cutting-edge focus on contemporary developments in all mainstream areas of image processing-e.g...

  7. An instrument control and data analysis program for NMR imaging and spectroscopy

    International Nuclear Information System (INIS)

    Roos, M.S.; Mushlin, R.A.; Veklerov, E.; Port, J.D.; Ladd, C.; Harrison, C.G.

    1988-01-01

    We describe a software environment created to support real-time instrument control and signal acquisition as well as array-processor based signal and image processing in up to five dimensions. The environment is configured for NMR imaging and in vivo spectroscopy. It is designed to provide flexible tools for implementing novel NMR experiments in the research laboratory. Data acquisition and processing operations are programmed in macros which are loaded in assembled form to minimize instruction overhead. Data arrays are dynamically allocated for efficient use of memory and can be mapped directly into disk files. The command set includes primitives for real-time control of data acquisition, scalar arithmetic, string manipulation, branching, a file system and vector operations carried out by an array processor. 6 figs

  8. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  9. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noising…

  10. On some applications of diffusion processes for image processing

    International Nuclear Information System (INIS)

    Morfu, S.

    2009-01-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes ruled by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
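
    For reference, here is a minimal Python sketch of the Perona-Malik scheme contrasted above, using the classic four-neighbour explicit discretization with the exponential edge-stopping function; the parameter values are illustrative.

        import numpy as np

        def perona_malik(image, n_iter=20, kappa=20.0, lam=0.2):
            """Anisotropic diffusion: the conduction coefficient
            g = exp(-(|grad|/kappa)^2) vanishes near strong edges, so noise is
            smoothed while contours are preserved (lam <= 0.25 for stability)."""
            u = image.astype(float)
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u   # gradients toward the four neighbours
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
                ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
                u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
            return u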

  11. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  12. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

    The function of medical image processing is analysed, mentioning the developments, the physical agents, and the main categories, such as correction of distortion in image formation, increase of detectability, quantification of parameters, etc. (C.G.C.) [pt

  13. An electronic image processing device featuring continuously selectable two-dimensional bipolar filter functions and real-time operation

    International Nuclear Information System (INIS)

    Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.

    1981-01-01

    A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images, with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering, which selectively controls the contrast of an image according to the spatial frequency content of relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper midrange spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, whether prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real time, by adjusting a set of front panel controls or by a programmed microprocessor control. Two TV monitors are used, one for processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the…
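
    A digital cousin of the bipolar, radially symmetric kernel described above is the difference-of-Gaussians: a positive centre minus a wider negative surround, which accentuates upper-midrange spatial frequencies while suppressing low-frequency shading. The sketch below is a loose illustration of the idea, not the device's hexagonal analogue kernel; all parameter values are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bipolar_enhance(image, sigma_center=1.0, sigma_surround=3.0, gain=1.5):
            """Add a difference-of-Gaussians (centre minus surround) term to the
            image, sharpening edges the way a bipolar convolution kernel does."""
            img = image.astype(float)
            center = gaussian_filter(img, sigma_center)
            surround = gaussian_filter(img, sigma_surround)
            return img + gain * (center - surround)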

  14. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images play an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. In order to demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology to the processing of radiation images of large containers are given.

  15. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then the result of the processing is re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
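
    For orientation, the spatial-domain baseline of screening is ordered dithering: each pixel is thresholded against a tiled periodic mask, and the compressed-domain method of this record evaluates an equivalent threshold test on the DCT coefficients of the JPEG blocks instead. A minimal spatial-domain sketch follows; the 4×4 Bayer mask is a textbook example, not one of the paper's halftone masks.

        import numpy as np

        # 4x4 Bayer matrix scaled to 8-bit thresholds
        BAYER4 = (np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) + 0.5) * 16

        def screen_halftone(gray):
            """Binarize a grayscale image by thresholding against the tiled mask."""
            h, w = gray.shape
            mask = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return (gray > mask).astype(np.uint8) * 255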

  16. Fundus Autofluorescence Imaging in an Ocular Screening Program

    Directory of Open Access Journals (Sweden)

    A. M. Kolomeyer

    2012-01-01

    Full Text Available Purpose. To describe integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals with diabetic retinopathy or macular edema (n=4), focal RPE defects (n=2), age-related macular degeneration (n=1), central serous retinopathy (n=1), and ocular trauma (n=1). Conclusions. FAF was successfully integrated in our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  17. Fundus autofluorescence imaging in an ocular screening program.

    Science.gov (United States)

    Kolomeyer, A M; Nayak, N V; Szirth, B C; Khouri, A S

    2012-01-01

    Purpose. To describe integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals with diabetic retinopathy or macular edema (n = 4), focal RPE defects (n = 2), age-related macular degeneration (n = 1), central serous retinopathy (n = 1), and ocular trauma (n = 1). Conclusions. FAF was successfully integrated in our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  18. A practical link between medical and computer groups in image data processing

    Energy Technology Data Exchange (ETDEWEB)

    Ollivier, J Y

    1975-01-01

    An acquisition and processing system for scintigraphic images should not be constructed exclusively for a computer specialist. Primarily it should be designed to be easily and quickly handled by a nurse or a doctor, and to be programmable by the doctor or the computer specialist. This consideration led Intertechnique to construct the CINE 200 system. The CINE 200 includes a computer and so offers the programming possibilities that are the tools of the computer specialist; moreover, it was conceived especially for clinical use and offers some functions which cannot be carried out by a classical computer and standard peripherals. In addition, the CINE 200 allows the doctor who is not a computer specialist to become familiar with this field through progressive levels of language, the first level being a chain of simple processing operations on images or curves, the second an interpretive language identical to BASIC and very easy to learn. Before presenting the facilities the CINE 200 offers the doctor and the computer specialist, its characteristics are briefly reviewed.

  19. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) image processing method. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
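
    To make the rCAE idea concrete, here is a small residual convolutional autoencoder in PyTorch with one pooling/upsampling pair, echoing the speed/quality trade-off described above. The layer counts, channel width and kernel sizes are illustrative assumptions, not the paper's tuned architecture.

        import torch
        import torch.nn as nn

        class ResidualDenoiser(nn.Module):
            """Residual CAE sketch: the network predicts a correction that is
            added back to the noisy input (the training target would be, e.g.,
            the CLAHE-processed ground truth)."""
            def __init__(self, ch=32):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                  # the single pooling stage
                )
                self.decode = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                    nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(ch, 1, 3, padding=1),
                )

            def forward(self, x):
                return x + self.decode(self.encode(x))   # residual connection

        model = ResidualDenoiser()
        noisy = torch.randn(1, 1, 256, 256)               # one fluoroscopy-like frame
        print(model(noisy).shape)                         # torch.Size([1, 1, 256, 256])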

  20. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images and the design of several special processing methods for container images, together with some standard processing methods for two-dimensional digital images, including gray enhancement, pseudo-enhancement, spatial filtering, edge enhancement, geometry processing, etc. They describe how to carry out the above-mentioned processing under Windows 95 or Windows NT, and discuss some ways to improve image processing speed on a microcomputer; good results were obtained.
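
    As an example of the gray enhancement step listed above, a percentile-based linear stretch is one common formulation; the record does not detail the system's actual method, so this sketch and its parameters are assumptions.

        import numpy as np

        def gray_enhance(image, low_pct=2, high_pct=98):
            """Map the [low, high] percentile window of a container radiograph
            onto the full 8-bit range to bring out faint detail."""
            lo, hi = np.percentile(image, [low_pct, high_pct])
            stretched = np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
            return (stretched * 255).astype(np.uint8)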

  1. Application of image processing methods to industrial radiography

    International Nuclear Information System (INIS)

    Goutte, R.; Odet, C.; Tuncer, T.; Bodson, F.; Varcin, E.

    1985-01-01

    This study was carried out with the financial support of the Commission of the European Communities as part of the CECA research program comprising IRSID, INSA de Lyon and the Framatome and Creusot Loire companies. Its purpose was to evaluate the possibility of using digital enhancement of radiographic images to improve defect visibility in industrial radiography, thereby providing assistance in defect detection and a method for automatic analysis of radiographs. This paper provides the full results obtained from work on digital processing of radiographs showing real and artificial defects. Furthermore, work on simulated automatic defect detection is also presented. 2 refs

  2. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring…

  3. How Digital Image Processing Became Really Easy

    Science.gov (United States)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.

  4. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis in the Centre. In image processing, one resorts either to alteration of grey level values, so as to enhance features in the image, or to transform domain operations for restoration or filtering. Typical transform domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections, etc. In short, in image processing both the input and the output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs

  5. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  6. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the performance of non-radiologists in the perceptual tasks, while only marginal improvements are seen in the perceptual and cognitive tasks of the group of expert radiologists.

  7. Processed images in human perception: A case study in ultrasound breast imaging

    International Nuclear Information System (INIS)

    Yap, Moi Hoon; Edirisinghe, Eran; Bez, Helmut

    2010-01-01

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the performance of non-radiologists in the perceptual tasks, while only marginal improvements are seen in the perceptual and cognitive tasks of the group of expert radiologists.

  8. Image processing methods and architectures in diagnostic pathology.

    Directory of Open Access Journals (Sweden)

    Oscar Déniz

    2010-05-01

    Full Text Available Grid technology has enabled the clustering of, and efficient and secure access to and interaction among, a wide variety of geographically distributed resources such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. General grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis are considered in this paper. These methods focus on the user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports different image analysis operations commonly done in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem suitable for tissue-based diagnosis too. The implemented system is a joint application that mixes both Web and Grid Service Architecture around a distributed architecture for image processing. It has shown to be a successful solution for analyzing a big and heterogeneous group of histological images under an architecture of massively parallel processors using message passing and non-shared memory.

  9. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  10. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the Printed Circuit Board inspection process has been magnified by requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, like tilt of the images, bad lighting conditions, the height at which images are taken, etc., have to be considered to ensure good image quality, which can then be used for defect detection. Printed circuit board (PCB) fabrication is a multidisciplinary process, and etching is the most critical part of the PCB manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the cost of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs. In this paper a method to identify the defects in natural PCB images is presented and the associated practical issues are addressed using software tools; some of the major types of single-layer PCB defects are pattern cuts, pinholes, pattern shorts, nicks, etc. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
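
    One widely used formulation of such inspection is reference comparison: binarize a registered image of a known-good board and the test image, XOR them, and treat sufficiently large residual blobs as defects (cuts, pinholes, shorts, nicks). The sketch below follows that approach with OpenCV; the file paths, Otsu thresholding and minimum-area value are illustrative assumptions, not the paper's settings, and the two images are assumed to be aligned and the same size.

        import cv2
        import numpy as np

        def pcb_defects(reference_path, test_path, min_area=20):
            """Return centroids of blobs where the test board differs from the
            golden reference by at least min_area pixels."""
            ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
            test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
            _, ref_bin = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            _, test_bin = cv2.threshold(test, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            diff = cv2.bitwise_xor(ref_bin, test_bin)          # mismatched copper areas
            diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
            n, _, stats, centroids = cv2.connectedComponentsWithStats(diff)
            return [tuple(centroids[i]) for i in range(1, n)
                    if stats[i, cv2.CC_STAT_AREA] >= min_area]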

  11. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  12. The image of psychology programs: the value of the instrumental-symbolic framework.

    Science.gov (United States)

    Van Hoye, Greet; Lievens, Filip; De Soete, Britt; Libbrecht, Nele; Schollaert, Eveline; Baligant, Dimphna

    2014-01-01

    As competition for funding and students intensifies, it becomes increasingly important for psychology programs to have an image that is attractive and makes them stand out from other programs. The current study uses the instrumental-symbolic framework from the marketing domain to determine the image of different master's programs in psychology and examines how these image dimensions relate to student attraction and competitor differentiation. The samples consist of both potential students (N = 114) and current students (N = 68) of three psychology programs at a Belgian university: industrial and organizational psychology, clinical psychology, and experimental psychology. The results demonstrate that both instrumental attributes (e.g., interpersonal activities) and symbolic trait inferences (e.g., sincerity) are key components of the image of psychology programs and predict attractiveness as well as differentiation. In addition, symbolic image dimensions seem more important for current students of psychology programs than for potential students.

  13. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  14. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  15. Filters in 2D and 3D Cardiac SPECT Image Processing

    Directory of Open Access Journals (Sweden)

    Maria Lyra

    2014-01-01

    Full Text Available Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key for accurate diagnosis. Image filtering, a mathematical processing, compensates for loss of detail in an image while reducing image noise, and it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed, either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect image quality and, in turn, the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MATLAB program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one might be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast.
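
    For reference, the Butterworth filter named in the study is, in image terms, a frequency-domain window H(D) = 1 / (1 + (D/Dc)^(2n)) with critical frequency Dc and order n. A minimal numpy sketch of such a 2D low-pass (a generic illustration, not the authors' MATLAB program):

        import numpy as np

        def butterworth_lowpass(image, cutoff=0.25, order=5):
            # Frequency grid in cycles/pixel for each axis.
            fy = np.fft.fftfreq(image.shape[0])[:, None]
            fx = np.fft.fftfreq(image.shape[1])[None, :]
            d = np.sqrt(fx**2 + fy**2)
            # Classic Butterworth response: unity at low frequencies,
            # rolling off around the critical frequency `cutoff`.
            h = 1.0 / (1.0 + (d / cutoff) ** (2 * order))
            return np.real(np.fft.ifft2(np.fft.fft2(image) * h))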

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    Full Text Available In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes); the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands) and an enhanced physical geometric model appended to the product but not applied; the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  18. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on UTM/WGS84 reference frame. The

  19. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real time system called Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of a highly sensitive, high resolution TV camera driven by a custom-made scanner, a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object, a real-time corrector to perform the real time correction of dark current, shading distortion and field intensity fluctuation, a real time filter for increasing the image signal to noise ratio, a video recording unit and a pseudocolor monitor to realize recording in commercially available products and monitoring by means of the CRTs in standard TV scanning, respectively. The TV camera and the TV camera deflection controller utilized for producing still images can be applied to this case. The block diagram of the real-time corrector is shown. Its performance is explained. Linear filters and ranked order filters were developed. (K.I.)
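
    The dark-current and shading corrections described for the real-time corrector follow the usual flat-field pattern. A minimal offline sketch, assuming a dark frame, a flat (open-beam) frame, and a per-frame beam-monitor reading are available; all names are illustrative, not MDIPS code.

        import numpy as np

        def correct_frame(raw, dark, flat, beam_monitor=1.0):
            # Remove the camera's dark-current/offset pattern.
            signal = raw.astype(float) - dark
            # Divide by the dark-corrected flat field to undo shading distortion.
            gain = np.clip(flat.astype(float) - dark, 1e-6, None)
            corrected = signal / gain
            # Scale by a beam-intensity monitor reading to compensate
            # field intensity fluctuation between frames.
            return corrected / beam_monitor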

  20. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out on applying coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, together with results.
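
    As a pointer to how phase-shift interferometry yields a digital hologram, the standard four-step algorithm recovers the wrapped phase from four frames taken with reference-beam shifts of 0, pi/2, pi, and 3*pi/2. A minimal sketch in the generic textbook form, not the authors' method:

        import numpy as np

        def four_step_phase(i0, i1, i2, i3):
            # Frames recorded with reference-beam phase shifts of 0, pi/2, pi, 3*pi/2:
            #   i_k = A + B*cos(phi + k*pi/2)
            # so (i3 - i1) = 2B*sin(phi) and (i0 - i2) = 2B*cos(phi).
            return np.arctan2(i3 - i1, i0 - i2)  # wrapped phase in (-pi, pi]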

  1. A comparison of subtracted images from dental subtraction programs

    International Nuclear Information System (INIS)

    Han, Won Jeong

    2002-01-01

    To compare the standard deviation of gray levels on digital subtracted images obtained by different dental subtraction programs. Paired periapical films were taken at the lower premolar and molar areas of phantoms involving a human mandible. The bite registration group used Rinn XCP equipment and a bite registration material, based on polyvinyl siloxane, for standardization; the no bite registration group used only Rinn XCP equipment. The periapical film images were digitized at 1200 dpi resolution and 256 gray levels by a flat bed scanner with a transparency unit. The dental digital subtraction programs used for this study were Subtractor (Biomedisys Co., Korea) and Emago (Oral Diagnostic Systems, The Netherlands). To measure the similarity between the subtracted images, the standard deviations of the gray levels were obtained from histograms of the subtracted images and then analyzed statistically. Subtracted images obtained using the Emago program without manual selection of corresponding points showed the lowest standard deviation of gray levels (p<0.01). The standard deviation of gray levels was also lower in subtracted images from the bite registration group than from the no bite registration group (p<0.01). Digital radiographic subtraction without manual selection of reference points was found to be a convenient and superior method.
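
    The similarity metric used here, the standard deviation of gray levels of the subtracted image, is simple to state. A minimal sketch, assuming two co-registered 8-bit radiographs; the mid-gray offset mirrors common subtraction-program output, and the details are illustrative.

        import numpy as np

        def subtraction_similarity(image_a, image_b):
            # Subtract co-registered radiographs; offset to mid-gray as subtraction
            # programs typically do, then summarize residual structure by the
            # standard deviation of the gray levels (lower = better registration).
            diff = image_a.astype(float) - image_b.astype(float) + 128.0
            return np.std(diff)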

  2. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  3. Decoupling algorithms from schedules for easy optimization of image processing pipelines

    OpenAIRE

    Adams, Andrew; Paris, Sylvain; Levoy, Marc; Ragan-Kelley, Jonathan Millar; Amarasinghe, Saman P.; Durand, Fredo

    2012-01-01

    Using existing programming tools, writing high-performance image processing code requires sacrificing readability, portability, and modularity. We argue that this is a consequence of conflating what computations define the algorithm, with decisions about storage and the order of computation. We refer to these latter two concerns as the schedule, including choices of tiling, fusion, recomputation vs. storage, vectorization, and parallelism. We propose a representation for feed-forward imagi...

  4. Development of Geometrical Quality Control Real-time Analysis Program using an Electronic Portal Imaging

    International Nuclear Information System (INIS)

    Lee, Sang Rok; Jung, Kyung Yong; Jang, Min Sun; Lee, Byung Gu; Kwon, Young Ho

    2012-01-01

    To develop a geometrical quality control real-time analysis program using electronic portal imaging to replace the film evaluation method. A set of geometrical quality control items was established with the Eclipse treatment planning system (Version 8.1, Varian, USA) after the Electronic Portal Imaging Device (EPID) took care of the problems arising from the fixed substructure of the linear accelerator (CL-iX, Varian, USA). An electronic portal image (single exposure before plan) was created at the treatment room's 4DTC (Version 10.2, Varian, USA) and a beam was irradiated in accordance with each item. The entire set of electronic portal images was acquired at the Off-line review and evaluated by a self-developed geometrical quality control real-time analysis program. As for evaluation methods, the intra-fraction error was analyzed by executing the procedure 5 times in a row under identical conditions on the same day, and in order to confirm the inter-fraction error, it was executed for 10 days under identical conditions for all procedures and compared with the film evaluation method using an Iso-align quality control device. Measurement and analysis times were recorded, separating the time from device setup to data acquisition from the time until completion of the analysis, and the convenience of the users and execution processes were compared. The intra-fraction error values averaged 0.1, 0.2, 0.3, and 0.2 mm for light-radiation field coincidence, collimator rotation axis, couch rotation axis, and gantry rotation axis, respectively. Checking the inter-fraction error through 10 days of continuous quality control yielded average error values of 1.7, 1.4, 0.7, and 1.1 mm for the same items. Also, the measurement times averaged 36 minutes for the film evaluation method and 15 minutes for the electronic portal imaging system, and the analysis times averaged 30 minutes and 22 minutes, respectively. When conducting a geometrical quality control using an electronic portal imaging

  5. STREAM: A First Programming Process

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Kölling, Michael

    2009-01-01

    Programming is recognized as one of seven grand challenges in computing education. Decades of research have shown that the major problems novices experience are composition-based—they may know what the individual programming language constructs are, but they do not know how to put them together. Despite this fact, textbooks, educational practice, and programming education research hardly address the issue of teaching the skills needed for systematic development of programs. We provide a conceptual framework for incremental program development, called Stepwise Improvement, which unifies best ... to derive a programming process, STREAM, designed specifically for novices. STREAM is a carefully down-scaled version of a full and rich agile software engineering process particularly suited for novices learning object-oriented programming. In using it we hope to achieve two things: to help novice ...

  6. X-ray imaging in the laser-fusion program

    International Nuclear Information System (INIS)

    McCall, G.H.

    1977-01-01

    Imaging devices which are used or planned for x-ray imaging in the laser-fusion program are discussed. Resolution criteria are explained, and a suggestion is made for using the modulation transfer function as a uniform definition of resolution for these devices.
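
    The modulation transfer function suggested here as a uniform resolution measure is conventionally obtained as the normalized modulus of the Fourier transform of the line-spread function. A minimal sketch of that generic definition, not the report's specific procedure:

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch=1.0):
            # The MTF is the modulus of the Fourier transform of the line-spread
            # function, normalized to unity at zero spatial frequency.
            spectrum = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch)
            return freqs, spectrum / spectrum[0]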

  7. Machine Learning: developing an image recognition program : with Python, Scikit Learn and OpenCV

    OpenAIRE

    Nguyen, Minh

    2016-01-01

    Machine Learning is one of the most debated topics in the computer world these days, especially after the first computer Go program beat the human Go world champion. Among the endless applications of Machine Learning is image recognition, a problem that involves processing an enormous amount of data from dynamic input. This thesis presents the basic concepts of Machine Learning, Machine Learning algorithms, the Python programming language, and Scikit Learn – a simple and efficient tool for data analysis in P...
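
    A minimal example of the kind of Scikit Learn image-recognition workflow the thesis covers; the bundled 8x8 digits dataset stands in for the thesis's own data, and the parameters are illustrative.

        from sklearn import datasets, metrics, svm
        from sklearn.model_selection import train_test_split

        # Load 8x8 digit images and flatten each into a feature vector.
        digits = datasets.load_digits()
        X = digits.images.reshape(len(digits.images), -1)
        X_train, X_test, y_train, y_test = train_test_split(
            X, digits.target, test_size=0.5, random_state=0)

        # Train a support-vector classifier and report held-out accuracy.
        clf = svm.SVC(gamma=0.001)
        clf.fit(X_train, y_train)
        print(metrics.accuracy_score(y_test, clf.predict(X_test)))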

  8. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)
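
    For reference, the maximum entropy method mentioned in the review is conventionally posed as a constrained optimization (standard form, not quoted from the review):

        \max_{f}\; S(f) = -\sum_{j} f_j \ln\frac{f_j}{m_j}
        \quad \text{subject to} \quad
        \chi^2(f) = \sum_{k} \frac{\bigl(d_k - (Rf)_k\bigr)^2}{\sigma_k^2} \le \chi^2_{\mathrm{aim}},

    where f is the reconstructed image, m a default model, d the measured data, R the instrument response, and sigma_k the noise level. Keeping chi-squared at its statistically expected value is what prevents spurious features from being forced into the processed image.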

  9. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    The research is done to provide an appropriate detection technique for skin tumor detection. The work is done using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they represent a condition in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is the fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The study shows that little work has been done on the cellular scale for images of skin. This research introduces several checks for the early detection of skin tumors using microscopic images, after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient techniques appropriate for tumor detection. The algorithm applied provides promising results in less time with good accuracy, and the GUI (Graphical User Interface) generated for the algorithm makes the system user-friendly. (author)
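
    A minimal sketch of the kind of cellular-scale check such a system might apply, written here in Python/scikit-image rather than the paper's MATLAB toolbox: threshold, clean, then measure per-cell shape properties. All parameters are illustrative.

        from skimage import filters, measure, morphology

        def candidate_regions(gray_image, min_area=50):
            # Global Otsu threshold separates stained cells from background.
            mask = gray_image > filters.threshold_otsu(gray_image)
            # Remove speckle before measuring.
            mask = morphology.remove_small_objects(mask, min_size=min_area)
            # Label connected components and return per-cell shape measurements,
            # from which abnormal size or eccentricity can be flagged.
            return measure.regionprops(measure.label(mask))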

  10. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    International Nuclear Information System (INIS)

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    A new graphical user interface to the MOSFLM program has been developed to simplify the processing of macromolecular diffraction data. The interface, iMOSFLM, allows data processing via a series of clearly defined tasks and provides visual feedback on the progress of each stage. iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed

  11. Image Quality Improvement after Implementation of a CT Accreditation Program

    International Nuclear Information System (INIS)

    Kim, You Sung; Jung, Seung Eun; Choi, Byung Gil; Shin, Yu Ri; Hwang, Seong Su; Ku, Young Mi; Lim, Yeon Soo; Lee, Jae Mun

    2010-01-01

    The purpose of this study was to evaluate any improvement in the quality of abdominal CTs after the utilization of the nationally based accreditation program. Approval was obtained from the Institutional Review Board, and informed consent was waived. We retrospectively analyzed 1,011 outside abdominal CTs from 2003 to 2007. We evaluated images using the fill-up sheet form of the national accreditation program and subjectively by grading overall CT image quality. CT scans were divided into two categories according to time period: before and after the implementation of the accreditation program. We compared CT scans between the two periods according to parameters pertaining to the evaluation of images. We determined whether there was a correlation between the results of the subjective assessment of image quality and the evaluation scores of the clinical image. The following parameters were significantly different after the implementation of the accreditation program: identifying data, display parameters, scan length, spatial and contrast resolution, window width and level, optimal contrast enhancement, slice thickness, and total score. The remaining parameters were not significantly different between scans obtained from the two different periods: scan parameters, film quality, and artifacts. After implementation of the CT accreditation program, the quality of the outside abdominal CTs showed marked improvement, especially for the parameters related to the scanning protocol.

  12. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  13. Spatial Data Exploring by Satellite Image Distributed Processing

    Science.gov (United States)

    Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.

    2012-04-01

    Our society's needs and environmental predictions encourage the development of applications oriented toward supervising and analyzing different Earth Science related phenomena. Satellite images can be explored to discover information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environment related data can be acquired by imagery classification consisting of data mining throughout the multispectral bands. The process takes into account a large set of variables such as satellite image type (e.g. MODIS, Landsat), the particular geographic area, soil composition, vegetation cover, and generally the context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for appropriate solutions, as well as high-power computation resources. The research concerns experiments on solutions for using flexible and visual descriptions of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid based implementation of the GreenLand application. The GreenLand application development is based on simple, but powerful, notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS [1] library as well as many other image related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms without any prior knowledge of a programming language or application commands.
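
    A minimal sketch of the workflow idea, a directed graph of image operators in which each node runs once its inputs are ready (which is also the natural point to dispatch work to a Grid worker). The names and structure are illustrative, not GreenLand's API.

        def run_workflow(graph, sources):
            # graph: {name: (operator_fn, [input names])}; sources: {name: image}.
            # Assumes the graph is acyclic and every dependency is defined.
            done = dict(sources)
            pending = dict(graph)
            while pending:
                for name, (op, deps) in list(pending.items()):
                    if all(d in done for d in deps):
                        # A node runs as soon as its inputs exist; in a Grid
                        # setting this is where the job would be submitted.
                        done[name] = op(*[done[d] for d in deps])
                        del pending[name]
            return done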

  14. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order.

    Science.gov (United States)

    Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor

    2017-05-12

    Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for e.g. studies involving model membranes and surfactant coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with environmentally-sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation and no programming ability required. The
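
    For reference, the generalized polarization parameter quantifying lipid packing is conventionally defined for Laurdan-type probes as (exact channel wavelengths vary by probe and instrument):

        \mathrm{GP} = \frac{I_{440} - I_{490}}{I_{440} + I_{490}}

    where I_440 and I_490 are the fluorescence intensities in the ordered-phase and disordered-phase emission channels, respectively; by construction GP runs from -1 (fluid, hydrated) to +1 (tightly packed).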

  15. Near Real-Time Photometric Data Processing for the Solar Mass Ejection Imager (SMEI)

    Science.gov (United States)

    Hick, P. P.; Buffington, A.; Jackson, B. V.

    2004-12-01

    The Solar Mass Ejection Imager (SMEI) records a photometric white-light response of the interplanetary medium from Earth over most of the sky in near real time. In the first two years of operation the instrument has recorded the inner heliospheric response to several hundred CMEs, including the May 28, 2003 and the October 28, 2003 halo CMEs. In this preliminary work we present the techniques required to process the SMEI data from the time the raw CCD images become available to their final assembly in photometrically accurate maps of the sky brightness relative to a long-term time base. Processing of the SMEI data includes integration of new data into the SMEI data base; a conditioning program that removes from the raw CCD images an electronic offset ("pedestal") and a temperature-dependent dark current pattern; an "indexing" program that places these CCD images onto a high-resolution sidereal grid using known spacecraft pointing information. At this "indexing" stage further conditioning removes the bulk of the effects of high-energy-particle hits ("cosmic rays"), space debris inside the field of view, and pixels with a sudden state change ("flipper pixels"). Once the high-resolution grid is produced, it is reformatted to a lower-resolution set of sidereal maps of sky brightness. From these sidereal maps we remove bright stars, background stars, and a zodiacal cloud model (their brightnesses are retained as additional data products). The final maps can be represented in any convenient sky coordinate system. Common formats are Sun-centered Hammer-Aitoff or "fisheye" maps. Time series at selected locations on these maps are extracted and processed further to remove aurorae, variable stars and other unwanted signals. These time series (with a long-term base removed) are used in 3D tomographic reconstructions. The data processing is distributed over multiple PCs running Linux, and runs as much as possible automatically using recurring batch jobs ('cronjobs'). The
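
    One of the last steps described, removing a long-term base from the extracted time series, can be sketched with a wide running median that tracks the slowly varying sky while ignoring brief transients. This is a generic illustration, not the SMEI pipeline code; the window length is arbitrary.

        from scipy.ndimage import median_filter

        def remove_long_term_base(series, window=599):
            # A wide running median follows the slowly varying sky base
            # (zodiacal/stellar residue) while ignoring brief CME signals.
            base = median_filter(series, size=window, mode='nearest')
            return series - base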

  16. A novice's process of object-oriented programming

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Kölling, Michael

    2006-01-01

    Exposing students to the process of programming is merely implied but not explicitly addressed in texts on programming, which appear to deal with 'program' as a noun rather than as a verb. We present a set of principles and techniques as well as an informal but systematic process of decomposing... a programming problem. Two examples are used to demonstrate the application of the process and techniques. The process is a carefully down-scaled version of a full and rich software engineering process particularly suited for novices learning object-oriented programming. In using it, we hope to achieve two things...

  17. Upper Midwest Gap Analysis Program, Image Processing Protocol

    National Research Council Canada - National Science Library

    Lillesand, Thomas

    1998-01-01

    This document presents a series of technical guidelines by which land cover information is being extracted from Landsat Thematic Mapper data as part of the Upper Midwest Gap Analysis Program (UMGAP...

  18. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  19. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the micro satellite, one kind of tiny spacecraft, has appeared during the past few years. Many studies have contributed to shrinking satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, even less than 50 kilograms, which makes them slightly larger or smaller than a common miniature refrigerator. However, the optical system design can hardly be perfect due to satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: the higher the spatial resolution of the images we obtain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, joining pan-sharpening and super resolution techniques to deal with the spatial resolution shortcomings of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
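
    The paper's joint pan-sharpening/SR framework is not reproduced here, but the pan-sharpening half can be illustrated by the classic Brovey transform, which injects panchromatic detail while preserving band ratios. A generic sketch, not the authors' algorithm:

        def brovey_pansharpen(ms, pan, eps=1e-6):
            # ms: (bands, H, W) multispectral stack upsampled to the pan grid;
            # pan: (H, W) high-resolution panchromatic image.
            intensity = ms.mean(axis=0)
            # Each band is rescaled so the band ratios are preserved while the
            # spatial detail is injected from the panchromatic image.
            return ms * (pan / (intensity + eps))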

  20. Advanced Secure Optical Image Processing for Communications

    Science.gov (United States)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and longer processing times become necessary, and high image quality and transmission speeds are requested for an increasing number of applications. To satisfy these two requests, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the purpose of combining the advantages of both techniques. Additionally, pure numerical or optical solutions are also considered, since they emphasize the advantages of one of the two approaches separately.

  1. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department or a scientist and engineer who is doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  2. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques with the use of fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics is important in industry for extracting relevant information from flame images. Experimental tests are carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed, and the selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
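
    The PSD analysis mentioned above is typically done with Welch's method. A minimal sketch, assuming a sampled flame-luminosity time series; the sampling rate and segment length are illustrative.

        import numpy as np
        from scipy.signal import welch

        def flame_psd(luminosity_series, fs=1000.0):
            # Welch's method averages periodograms of overlapping segments,
            # giving a smoother PSD estimate of flame luminosity oscillations.
            freqs, psd = welch(luminosity_series, fs=fs, nperseg=1024)
            # The dominant peak approximates the main thermoacoustic
            # oscillation frequency; peak sharpness hints at stability.
            return freqs, psd, freqs[np.argmax(psd)]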

  3. An integral design strategy combining optical system and image processing to obtain high resolution images

    Science.gov (United States)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then the Wiener filter algorithm is adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by the traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high resolution images, which gives it a promising perspective for industrial application.
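
    A minimal sketch of the Wiener-restoration step, using scikit-image's regularized Wiener deconvolution with a stand-in Gaussian PSF; a real design would use the optics' modeled PSF, and the parameters are illustrative.

        import numpy as np
        from skimage import restoration

        def gaussian_psf(size=9, sigma=2.0):
            # Simple stand-in PSF for illustration only.
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return psf / psf.sum()

        def restore(blurred, balance=0.1):
            # Regularized Wiener deconvolution; `balance` trades residual blur
            # against amplified noise, mirroring the paper's MSE criterion.
            return restoration.wiener(blurred.astype(float), gaussian_psf(), balance)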

  4. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery, and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
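
    The core idea, scoring image blocks by the Shannon entropy of their gray-level histogram so that only information-rich blocks enter registration, can be sketched as follows. A generic illustration of block entropy, not the authors' full algorithm:

        import numpy as np

        def block_entropy(gray, block=64):
            # Shannon entropy of each block's gray-level histogram; feature-rich
            # blocks (edges, texture) score high and are worth matching first.
            h, w = gray.shape
            scores = {}
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    tile = gray[y:y + block, x:x + block]
                    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
                    p = hist / hist.sum()
                    p = p[p > 0]
                    scores[(y, x)] = -np.sum(p * np.log2(p))
            return scores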

  5. An overview of digital image processing in the field of nuclear medicine

    International Nuclear Information System (INIS)

    Okuyama, Yasuo

    1992-01-01

    The current status and remaining problems of digital image processing in nuclear medicine were discussed. Digitalization of nuclear medicine images has made it possible, in conjunction with computers, to obtain new information (Fourier analysis, factor analysis, etc.) with added value from images, in place of the fixed concepts that had formerly been drawn only from images. However, the basis of this technology is the special QC and QA of nuclear medicine examinations, and those techniques have not yet been adequately established. The advantage of digitalization is the flexibility that comes from the programs, but the subjectivity of each individual plays a large role, and there is also the risk that the logic of image diagnosis established to date will be destroyed. Accordingly, the creation of digital image processing techniques with specifications will give birth to standardized digital nuclear medicine images, and these developments will certainly lead to progress in nuclear medicine diagnosis. In addition, in comparison with other modalities, the field of nuclear medicine involves a smaller amount of information, and this simplifies the digitalization of images. At present, equipment is being designed and developed incorporating the concept of a workstation. A serious problem that remains in this field is the standardization of image transmission. In summary, the main problem that must be solved in the field of nuclear medicine examinations is the establishment of QC and QA methods and practical algorithms for the software. It is hoped that there will be open access to information, etc., related to the software. (author)

  6. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  7. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies in computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields, and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious process; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via E-LAP to the requesting customer with details about the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all the required documents. All this interaction between the customer and the bank takes place through the E-LAP system.

  8. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical technologies or other novel imaging technologies. In addition, image segmentation provides a detailed structure description for quantitative visualization of treating light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract the accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. Then we utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way. We also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning based automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  9. GAP: yet another image processing system for solar observations.

    Science.gov (United States)

    Keller, C. U.

    GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.

  10. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner-structure and material distributions of various samples. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam formatting assembly and of a plane position sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two dimensional image. This image contains information about the composition and structure of the sample-interior, as a result of the interaction of neutrons by penetrating matter. Due to rapid developments of detector and computer technology as well as deployments in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten and furthermore, to identify and develop two and three dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of a high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and a good linearity. The detector output should be inherently digitized. The key components of the detector system selected on the basis of these requirements consist of a neutron sensitive scintillator screen, a CCD-camera and a mirror to reflect the light emitted by the scintillator to the CCD-camera. This detector design enables to place the camera out of the direct neutron beam. The whole assembly is placed in a light shielded aluminum box. The camera is controlled by a

  12. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  13. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... and spatial co-ordinates into discrete components. The mathematical concepts involved are sampling and transform theory. Two-dimensional transforms are used for image enhancement, restoration, encoding, and description as well. The main objective of the image...

  14. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but can be adapted with only minor modifications to other image-processing libraries.

  15. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  16. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  17. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes 'mainstream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing

  18. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
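
    The combination step behind the CSR/ISR images can be illustrated with a short sketch: an interleave of reconstructions, shifted by sub-pixel offsets, onto a finer grid. The shift bookkeeping and the handling of unfilled sites for incomplete sets are our assumptions for illustration, not the authors' code.

        import numpy as np

        def combine_povs(lr_images, factor):
            """Interleave low-resolution images, acquired at sub-pixel
            shifts (dy, dx) of the reconstruction grid, onto a grid that
            is `factor` times finer. lr_images maps (dy, dx) -> 2D array."""
            h, w = next(iter(lr_images.values())).shape
            hr = np.zeros((h * factor, w * factor))
            filled = np.zeros_like(hr, dtype=bool)
            for (dy, dx), img in lr_images.items():
                hr[dy::factor, dx::factor] = img
                filled[dy::factor, dx::factor] = True
            # For incomplete sets, the unfilled sites remain to be interpolated.
            return hr, filled

    In this notation a complete 2 x 2 set supplies all four offsets, while an ISR-2-style diagonal subset supplies only (0, 0) and (1, 1), leaving the remaining grid sites to be interpolated.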

  19. Bio-inspired approach to multistage image processing

    Science.gov (United States)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  20. Facial biometrics of Yorubas of Nigeria using Akinlolu-Raji image-processing algorithm

    Directory of Open Access Journals (Sweden)

    Adelaja Abdulazeez Akinlolu

    2016-01-01

    Background: Forensic anthropology deals with the establishment of human identity using genetics, biometrics, and face recognition technology. This study aims to compute facial biometrics of Yorubas of Osun State of Nigeria using a novel Akinlolu-Raji image-processing algorithm. Materials and Methods: Three hundred Yorubas of Osun State (150 males and 150 females, aged 15–33 years) were selected as subjects for the study, with informed consent and after being established as Yorubas through parents and grandparents. Height, body weight, and facial biometrics (evaluated on three-dimensional [3D] facial photographs) were measured on all subjects. The novel Akinlolu-Raji image-processing algorithm for forensic face recognition was developed using the modified row method of computer programming. Facial width, total face height, short forehead height, long forehead height, upper face height, nasal bridge length, nose height, morphological face height, and lower face height computed from readings of the Akinlolu-Raji image-processing algorithm were analyzed using the z-test (P ≤ 0.05) of the Microsoft Excel 2010 statistical software. Results: Statistical analyses of facial measurements showed nonsignificantly higher mean values (P > 0.05) in Yoruba males compared to females. Yoruba males and females have the leptoprosopic face type based on classifications of face types from facial indices. Conclusions: The Akinlolu-Raji image-processing algorithm can be employed for computing anthropometric, forensic, diagnostic, or any other measurements on 2D and 3D images, and data computed from its readings can be converted to actual or life sizes as obtained in 1D measurements. Furthermore, Yoruba males and females have the leptoprosopic face type.

  1. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high-resolution imagery for processing with the MATLAB Image Processing Toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.

  2. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a personal computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...
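
    The abstract does not detail the extrapolation technique; one common way to measure below the pixel grid, shown purely as an illustration, is to interpolate the intensity profile across the crack edge:

        import numpy as np

        def subpixel_crossing(profile, threshold):
            """Locate the first bright-to-dark threshold crossing of a 1D
            intensity profile with sub-pixel precision (linear interpolation).
            Returns None if the profile never crosses the threshold."""
            profile = np.asarray(profile, dtype=float)
            below = np.nonzero(profile < threshold)[0]
            if below.size == 0 or below[0] == 0:
                return None
            i = below[0]                      # first pixel past the edge
            a, b = profile[i - 1], profile[i]
            return (i - 1) + (a - threshold) / (a - b)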

  3. Automated processing of X-ray images in medicine

    International Nuclear Information System (INIS)

    Babij, Ya.S.; B'yalyuk, Ya.O.; Yanovich, I.A.; Lysenko, A.V.

    1991-01-01

    Theoretical and practical achievements in the application of computing technology to the processing of X-ray images in medicine are reviewed. A scheme of the main directions and tasks of X-ray image processing is given and analyzed. The principal problems arising in automated processing of X-ray images are identified. It is shown that for the interpretation of X-ray images it is expedient to introduce the notion of a relative operating characteristic (ROC) of a roentgenologist. Every point on the ROC curve determines the individual criterion by which a roentgenologist makes a positive diagnosis in a given situation

  4. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, based on the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
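
    As a rough illustration of the gridding and raw-data extraction steps (a toy sketch assuming an already-aligned, evenly spaced spot grid; the rotation refinement and gene selection stages are not reproduced):

        import numpy as np

        def extract_spot_intensities(img, n_rows, n_cols):
            """Split a microarray image into an n_rows x n_cols grid of
            equally sized cells and return the mean intensity per spot."""
            h, w = img.shape
            ys = np.linspace(0, h, n_rows + 1, dtype=int)
            xs = np.linspace(0, w, n_cols + 1, dtype=int)
            spots = np.empty((n_rows, n_cols))
            for r in range(n_rows):
                for c in range(n_cols):
                    cell = img[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
                    spots[r, c] = cell.mean()  # raw expression signal
            return spots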

  5. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process is an effective denoising method for reducing speckle, but it also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion. The restoration of the degraded object in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by the AMMA hospital radiology labs at Vijayawada, India.
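
    The first-fold thresholding can be sketched with PyWavelets (a simplified global-threshold version for illustration; the paper applies thresholds per non-overlapping block and follows with the adaptive fusion stage, which is omitted here):

        import pywt

        def wavelet_threshold(img, wavelet="db4", level=2,
                              thresh=20.0, mode="soft"):
            """Denoise an image by shrinking its wavelet detail coefficients;
            mode='soft' gives BST-style, mode='hard' BHT-style thresholding."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            out = [coeffs[0]]  # keep the approximation band untouched
            for details in coeffs[1:]:
                out.append(tuple(pywt.threshold(d, thresh, mode=mode)
                                 for d in details))
            return pywt.waverec2(out, wavelet)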

  6. The chinese space program as the image instrument of the great China

    Directory of Open Access Journals (Sweden)

    Daniel Lemus Delgado

    2012-10-01

    This article analyzes the Chinese space program and how the bureaucratic elite uses it to position China as a leading nation in the international arena. It assumes that, beyond the scientific advances that space exploration brings to multiple fields of knowledge, support for the space program is a way to project a positive image of China: that of a China rising in the international community. The author discusses how space missions and the discourse around the space program strengthen national pride. Thus, China’s space program projects the image of a Greater China. The article concludes that the space program shows that China is modernizing rapidly and is able to be a world power.

  7. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    Science.gov (United States)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information about the high-speed and conventional imaging systems used in the recent ice protection technology program is provided to scientific, technical, and industrial imaging specialists as well as to research personnel. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  8. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
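
    The messaging layer's dynamic publish/subscribe pattern with topic-based filtering reduces, in miniature, to something like the following sketch (names are invented for illustration; the real architecture adds threading, memory management and module lifecycles):

        from collections import defaultdict

        class Broker:
            """Routes each published message to the handlers subscribed
            to its topic (topic-based filtering)."""

            def __init__(self):
                self._subscribers = defaultdict(list)  # topic -> [handler, ...]

            def subscribe(self, topic, handler):
                self._subscribers[topic].append(handler)

            def publish(self, topic, message):
                for handler in self._subscribers[topic]:
                    handler(message)

        broker = Broker()
        broker.subscribe("frames/raw", lambda frame: print("frame received"))
        broker.publish("frames/raw", {"id": 1})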

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  10. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observation on an ordinary microscope requires precision and visual acuity from the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including its image processing utility, which allows users to capture, store and process digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and the image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested on various objects with a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with user needs. The digital microscope eliminates the need for direct observation by the human eye, as required with a traditional microscope.
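
    The listed utility operations correspond to a handful of standard calls, sketched here with OpenCV (the file name and parameter values are placeholders, not the system's actual implementation):

        import cv2

        img = cv2.imread("specimen.png")                        # hypothetical capture
        bright = cv2.convertScaleAbs(img, alpha=1.0, beta=40)   # brightness adjustment
        contrast = cv2.convertScaleAbs(img, alpha=1.5, beta=0)  # contrast enhancement
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        equalized = cv2.equalizeHist(gray)                      # histogram equalization
        scaled = cv2.resize(img, None, fx=0.5, fy=0.5)          # scaling
        cropped = img[100:400, 200:600]                         # cropping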

  11. Image processing on the image with pixel noise bits removed

    Science.gov (United States)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are grey-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
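
    A minimal sketch of the idea, assuming an 8-bit image whose two least significant bits were identified as noise bits (in the study the bit split was derived statistically per modality):

        import numpy as np
        from scipy import ndimage

        def drop_noise_bits(img, noise_bits=2):
            """Zero the low-order (noise) bits of an 8-bit image."""
            mask = 0xFF & ~((1 << noise_bits) - 1)   # e.g. 0b11111100
            return img.astype(np.uint8) & mask

        img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image
        cleaned = drop_noise_bits(img)
        edges_full = np.hypot(ndimage.sobel(img.astype(float), 0),
                              ndimage.sobel(img.astype(float), 1))
        edges_clean = np.hypot(ndimage.sobel(cleaned.astype(float), 0),
                               ndimage.sobel(cleaned.astype(float), 1))
        # The study reports no noticeable difference between the two edge maps.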

  12. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for researchers in the evolutionary computation, artificial intelligence and image processing co...

  13. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only the hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  14. Post-processing of digital images.

    Science.gov (United States)

    Perrone, Luca; Politi, Marco; Foschi, Raffaella; Masini, Valentina; Reale, Francesca; Costantini, Alessandro Maria; Marano, Pasquale

    2003-01-01

    Post-processing of two- and three-dimensional images plays a major role for clinicians and surgeons in both diagnosis and therapy. The new spiral (single- and multislice) CT and MRI machines have allowed better image quality. With the associated development of hardware and software, post-processing has become indispensable in many radiologic applications in order to address precise clinical questions. In particular, in CT the acquisition technique is fundamental and should be targeted and optimized to obtain good image reconstruction. Multiplanar reconstructions ensure simple, immediate display of sections along different planes. Three-dimensional reconstructions include numerous procedures: multiplanar techniques such as maximum intensity projection (MIP); surface rendering techniques such as shaded surface display (SSD); volume techniques such as volume rendering; and techniques of virtual endoscopy. In surgery, computer-aided techniques such as the neuronavigator, which uses information provided by neuroimaging to help the neurosurgeon simulate and perform the operation, are extremely interesting.
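
    Of the listed reconstructions, the maximum intensity projection is the simplest to state: each pixel of the projection takes the brightest voxel along the viewing axis. A one-line numpy sketch, assuming a 3D volume array:

        import numpy as np

        volume = np.random.rand(64, 256, 256)   # stand-in CT volume (z, y, x)
        mip_axial = volume.max(axis=0)          # MIP along the z (slice) axis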

  15. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  16. Radiology image orientation processing for workstation display

    Science.gov (United States)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in computed radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals can improve the efficiency of quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.

  17. Image processing by use of the digital cross-correlator

    International Nuclear Information System (INIS)

    Katou, Yoshinori

    1982-01-01

    We built a trial instrument that performs image processing using digital correlators. A digital correlator performs 64-bit parallel correlation at 20 MHz; its output is a 7-bit word. An A-D converter is used to quantize the signal to a precision of six bits. The resulting 6-bit word is fed to six correlators wired in parallel, so that image processing is achieved in 12 bits, and the digital outputs are converted to an analog signal by a D-A converter. This instrument is named the digital cross-correlator. The image processing system computes convolutions with the digital correlator, which realizes various digital filters. In the experiments, video signals from a TV camera were used for image processing. The digital image processing time was approximately 5 μs. The contrast was enhanced and the image was smoothed. The digital cross-correlator offers 16 kinds of image processing and was produced inexpensively. (author)
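
    In software terms, the correlator's role is a 2D convolution; different kernels give the various digital filters mentioned. A sketch with scipy (illustrating the operation, not the instrument's hardware path):

        import numpy as np
        from scipy.signal import convolve2d

        frame = np.random.rand(480, 640)          # stand-in video frame
        smooth = np.full((3, 3), 1 / 9.0)         # averaging (smoothing) kernel
        smoothed = convolve2d(frame, smooth, mode="same", boundary="symm")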

  18. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today... Images were processed using ImageJ, an open-source, Java-based image processing software package available from the National Institutes of Health (NIH)...

  19. Digital Image Processing Overview For Helmet Mounted Displays

    Science.gov (United States)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  20. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  1. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  2. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor, but also, on another video monitor, the integrated video image, which becomes gradually clearer. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm⁻¹. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  3. Development of a prototype chest digital tomosynthesis (CDT) R/F system with fast image reconstruction using graphics processing unit (GPU) programming

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sunghoon, E-mail: choi.sh@yonsei.ac.kr [Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of); Lee, Seungwan [Department of Radiological Science, College of Medical Science, Konyang University, 158 Gwanjeodong-ro, Daejeon, 308-812 (Korea, Republic of); Lee, Haenghwa [Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of); Lee, Donghoon; Choi, Seungyeon [Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of); Shin, Jungwook [LISTEM Corporation, 94 Donghwagongdan-ro, Munmak-eup, Wonju (Korea, Republic of); Seo, Chang-Woo [Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of); Kim, Hee-Joung, E-mail: hjk1@yonsei.ac.kr [Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of); Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of)

    2017-03-11

    Digital tomosynthesis offers the advantage of low radiation doses compared to conventional computed tomography (CT) by utilizing small numbers of projections (~80) acquired over a limited angular range. It produces 3D volumetric data, although there are artifacts due to incomplete sampling. Based upon these characteristics, we developed a prototype digital tomosynthesis R/F system for applications in chest imaging. Our prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with high power R/F pulse generator, flat-panel detector, R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users select between analytic and iterative reconstruction methods. Our reconstructed images of Catphan700 and LUNGMAN phantoms clearly and rapidly described the internal structures of phantoms using graphics processing unit (GPU) programming. Contrast-to-noise ratio (CNR) values of the CTP682 module of Catphan700 were higher in images using a simultaneous algebraic reconstruction technique (SART) than in those using filtered back-projection (FBP) for all materials by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low density polyethylene (LDPE), Delrin® (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing 3D volume were 2.92 s and 86.29 s on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from our system (5.68 mGy) was lower than that of conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
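
    The FBP/SART contrast reported above can be reproduced in miniature on a 2D parallel-beam phantom with scikit-image (an illustrative analogue, not the authors' GPU cone-beam code; the angle count mimics the ~80 projections):

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, iradon_sart

        phantom = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, 80, endpoint=False)  # ~80 projections
        sinogram = radon(phantom, theta=theta)

        fbp = iradon(sinogram, theta=theta, filter_name="ramp")
        sart = iradon_sart(sinogram, theta=theta)
        for _ in range(4):                       # further SART iterations
            sart = iradon_sart(sinogram, theta=theta, image=sart)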

  4. Processing Of Binary Images

    Science.gov (United States)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  5. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    Science.gov (United States)

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.

  6. Report on the Imaging Workshop for the Genomes to Life Program, April 16-18, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Colson, STEVEN

    2003-08-04

    This report is a result of the Imaging Workshop for the Genomes to Life (GTL) program held April 16-19, 2002, in Charlotte, North Carolina. The meeting was sponsored by the Office of Biological and Environmental Research and the Office of Advanced Scientific Computing Research of the U.S. Department of Energy's (DOE) Office of Science. The purpose of the workshop was to project a broad vision for future needs and determine the value of imaging to GTL program research. The workshop included four technical sessions with plenary lectures on biology and technology perspectives and technical presentations on needs and approaches as they related to the following areas of the GTL program: (1) Molecular machines (protein complexes); (2) Intracellular and cellular structure, function, and processes; (3) Multicellular: Monoclonal and heterogeneous multicellular systems, cell-cell signaling, and model systems; and (4) Cells in situ and in vivo: Bacteria in the natural environment, microenvironment, and in vivo systems.

  7. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  8. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.

  9. Image processing of early gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Umeda, Tokuo; Inamura, Kiyonari

    1992-01-01

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey-scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey-scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins and consequently suppressed the rest. High-pass filtering with unsharp masking was superior for visualization of the texture pattern on the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions. (author)
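
    Both enhancement operations are compact to express. An illustrative sketch with scipy (sigma and amount are arbitrary placeholders, not the study's settings; the grey-scale transformation step is omitted):

        import numpy as np
        from scipy import ndimage

        def unsharp_mask(img, sigma=3.0, amount=1.0):
            """High-pass enhancement: add back the difference between the
            image and its Gaussian-blurred version."""
            g = img.astype(float)
            return g + amount * (g - ndimage.gaussian_filter(g, sigma))

        def sobel_magnitude(img):
            """Edge enhancement of linear and curved margins."""
            g = img.astype(float)
            return np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))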

  10. Autonomous control systems: applications to remote sensing and image processing

    Science.gov (United States)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear and interconnected, and if information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, integrated manufacturing plants, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper, new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas: satellite array formations for synthetic aperture radar interferometry (InSAR) and the enhancement of analog and digital images.

  11. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICA

    Directory of Open Access Journals (Sweden)

    О. E. Prokopchenko

    2015-09-01

    The article analyzes the basic methods and algorithms for the mathematical processing of medical images as objects of computer mathematics. The presented methods and algorithms are relevant and may find application in the field of medical imaging: automated processing of images; tools for the measurement and determination of optical parameters; and the identification and formation of medical image databases. The methods and algorithms presented in the article, based on Wolfram Mathematica, are also relevant to modern medical education. Wolfram Mathematica may, for example, be used for suitable demonstrations, such as the recognition of special radiographs and morphological imaging. These methods are used to improve the diagnostic significance and value of medical (clinical) research and can serve as educational interactive demonstrations. Implementation of the individual methods and algorithms in Wolfram Mathematica contributes, in general, to optimizing the practical processing and presentation of medical images.

  12. Computer-assisted analyses of [14C]2-DG autoradiographs employing a general purpose image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Porro, C; Biral, G P [Modena Univ. (Italy). Ist. di Fisiologia Umana; Fonda, S; Baraldi, P [Modena Univ. (Italy). Lab. di Bioingegneria della Clinica Oculistica; Cavazzuti, M [Modena Univ. (Italy). Clinica Neurologica

    1984-09-01

    A general purpose image processing system is described, including a B/W TV camera, a high-resolution image processor and display system (TESAK VDC 501), a computer (DEC PDP 11/23) and monochrome and color monitors. Images may be acquired from a microscope equipped with a TV camera or using the TV camera in direct viewing; the A/D converter and the image processor provide fast (40 ms) and precise (512x512 data points) digitization of the TV signal with a maximum resolution of 256 gray levels. Computer programs, written in FORTRAN and MACRO-11 assembly language, have been developed in order to perform qualitative and quantitative analyses of autoradiographs obtained with the 2-DG method. They include: (1) procedures designed to recognize errors in acquisition due to possible image shading and correct them via software; (2) routines suitable for qualitative analyses of the whole image or selected regions of it, providing the opportunity for pseudocolor coding, statistics, and graphic overlays; (3) programs permitting the conversion of gray levels into metabolic rates of glucose utilization and the display of gray- or color-coded metabolic maps.

  13. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially as related to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand events or descriptions, and recognize scenic patterns. It draws on methods from a wide range of application domains together with massive data analysis. This paper contributes a review of recent developments in computer vision, image processing, and their related studies. We categorize the computer vision mainstream into four groups, e.g., image processing, object recognition, and machine learning. We also provide a brief explanation of up-to-date information about the techniques and their performance.

  14. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is preliminarily processed in the macroscopic regions and then thoroughly analyzed in the microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments confirmed that the edges of the weld seam or weld pool can be recognized correctly and quickly.
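
    The edge identification and LSM fitting steps might look like the following sketch (the fractal region selection is not reproduced; the 0.5 threshold is an arbitrary placeholder):

        import numpy as np
        from scipy import ndimage

        def fit_edge_line(region):
            """Sobel edge detection followed by a least squares line fit
            through the strongest edge pixels of a weld image region."""
            g = region.astype(float)
            mag = np.hypot(ndimage.sobel(g, 0), ndimage.sobel(g, 1))
            ys, xs = np.nonzero(mag > mag.max() * 0.5)    # strong edge pixels
            slope, intercept = np.polyfit(xs, ys, deg=1)  # LSM line y = ax + b
            return slope, intercept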

  15. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure
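
    The colour-space and probability-map steps can be sketched as follows (note that OpenCV orders the channels as Y, Cr, Cb; the file name and Gaussian parameters are placeholders, which in practice would be fitted to labelled skin pixels):

        import cv2
        import numpy as np
        from scipy.stats import multivariate_normal

        img = cv2.imread("wound.png")                    # hypothetical image
        ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)   # illumination goes into Y
        crcb = ycrcb[..., 1:].reshape(-1, 2).astype(float)

        # Placeholder skin-colour Gaussian (mean and covariance of (Cr, Cb)).
        skin_model = multivariate_normal(mean=[150, 110],
                                         cov=[[60, 0], [0, 60]])
        prob_map = skin_model.pdf(crcb).reshape(img.shape[:2])
        # prob_map then guides the SVM-based segmentation described above.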

  16. High-speed image processing systems in non-destructive testing

    Science.gov (United States)

    Shashev, D. V.; Shidlovskiy, S. V.

    2017-08-01

    Digital imaging systems are used in most industrial and scientific fields. Such systems effectively solve a wide range of tasks in the field of non-destructive testing. For decades, a central problem in digital image processing has been achieving operating speeds sufficient to process and analyze video streams in real time, ideally in small mobile devices. In this paper, we consider the use of parallel-pipeline computing architectures in image processing problems, using the example of an algorithm for calculating the area of an object in a binary image. The approach used allows us to achieve high-speed performance in digital image processing tasks.
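
    The benchmark operation itself is tiny; what a parallel-pipeline architecture changes is how the per-row work is scheduled. A plain numpy sketch of the computation (the row-wise split mirrors how rows could be streamed through pipeline stages):

        import numpy as np

        binary = np.random.rand(512, 512) > 0.5   # stand-in binary image
        row_counts = binary.sum(axis=1)           # each row is an independent unit of work
        area = int(row_counts.sum())              # object area in pixels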

  17. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization; the secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of the hand in posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. In the results, all of the modifications considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as post-processing, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study can serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
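
    For reference, the three measured quantities are commonly combined into the DQE via the relation below (a standard form used in IEC 62220-1-style analyses; q denotes the incident photon fluence and NNPS the noise power spectrum normalized by the squared large-area signal):

        \mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{q \cdot \mathrm{NNPS}(f)}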

  18. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  19. Development of the image registration program for portal and DRR images in radiation therapy

    International Nuclear Information System (INIS)

    Watanabe, Hiroyuki; Ito, Takeshi; Nakazeko, Kazuma; Tachibana, Atsuhi; Hashimoto, Takeyuki; Shinohara, Hiroyuki

    2012-01-01

    In this article, the authors propose an image registration program for portal images and digitally reconstructed radiography (DRR) images used as simulation images in external beam radiation therapy planning. First, the center of the radiation field in a portal image taken with a computed radiography cassette is matched to the center of the portal image. Then the scale points projected on a DRR image and the portal image are deleted, and the portal image with the radiation field is extracted. Registration of the DRR and portal images is performed using mutual information as the registration criterion. It was found that the absolute displacement misregistrations in the two directions (x, y) were 1.2±0.7 mm and 0.5±0.3 mm, respectively, and the rotational disagreement about the z axis was 0.3±0.3 deg. It was concluded that the proposed method is applicable to the registration of portal and DRR images in radiation therapy. (author)
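
    The registration criterion named above, mutual information, can be estimated from the joint grey-level histogram of the two images. A minimal NumPy sketch, not the authors' implementation:

```python
# Mutual information between two images via their joint histogram.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Registration would search the transform maximizing this score, e.g.
# best = max(candidates, key=lambda T: mutual_information(drr, warp(portal, T)))
# where `candidates` and `warp` are whatever transform model is chosen.
```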

  20. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  1. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases with abnormal synovial processes. The abnormalities include Baker cysts, semimembranous bursitis, chronic shoulder bursitis, peroneal tendon ganglion cyst, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining the extent and, to a limited degree, characterizing its makeup

  2. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  3. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
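
    The statistical component of such a scheme, Gaussian process regression, reduces to a few linear-algebra steps. The sketch below predicts high-resolution samples from low-resolution pixels along one dimension under an assumed RBF kernel; the paper's energy-driven weighting is not reproduced here:

```python
# One-dimensional GP regression as the statistical part of image interpolation.
import numpy as np

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_lo = np.arange(0.0, 8.0, 2.0)            # low-resolution pixel positions
y_lo = np.array([10.0, 40.0, 35.0, 20.0])  # their intensities
x_hi = np.linspace(0.0, 6.0, 13)           # high-resolution sample positions

K = rbf(x_lo, x_lo) + 1e-6 * np.eye(len(x_lo))   # jitter for numerical stability
Ks = rbf(x_hi, x_lo)
y_hi = Ks @ np.linalg.solve(K, y_lo)             # GP posterior mean = interpolation
```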

  4. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Mei Zhan

    2015-04-01

    Full Text Available Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM. These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a

  5. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    Science.gov (United States)

    Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision
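
    The multi-tiered use of classifiers described above can be sketched with scikit-learn: a first-tier SVM screens candidate objects, and a second-tier SVM refines only the positives. The features and labels below are synthetic stand-ins, not the published classifiers:

```python
# Two-tier SVM classification in miniature (scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # e.g. area, compactness, axis length
y_tier1 = (X[:, 0] > 0).astype(int)            # synthetic tier-1 labels
y_tier2 = (X[:, 1] > 0).astype(int)            # synthetic tier-2 labels

tier1 = SVC(kernel="rbf").fit(X, y_tier1)
keep = tier1.predict(X) == 1                   # only tier-1 positives advance
tier2 = SVC(kernel="rbf").fit(X[keep], y_tier2[keep])
labels = tier2.predict(X[keep])                # refined classification
```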

  6. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICS

    Directory of Open Access Journals (Sweden)

    O. Ye. Prokopchenko

    2015-10-01

    Full Text Available The article analyzes the basic methods and algorithms of the mathematical processing of medical images as objects of computer mathematics. The presented methods and computer-mathematics algorithms are relevant and may find application in the field of medical imaging: the automated processing of images; as a tool for measuring and determining optical parameters; and the identification and building of medical image databases. The methods and algorithms presented in the article, based on Wolfram Mathematica, are also relevant to the problem of modern medical education. Appropriate demonstrations in Wolfram Mathematica, such as the recognition of radiographs and morphological imaging, may serve as examples. These methods are used to improve the diagnostic significance and value of medical (clinical) research and can serve as interactive educational demonstrations. Implementation of the presented methods and algorithms in Wolfram Mathematica contributes, in general, to optimizing the practical processing and presentation of medical images.

  7. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  8. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  9. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    Science.gov (United States)

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
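
    For contrast, the classical single-channel dynamic-programming seam (the Duplaquet-style baseline this paper extends) can be written compactly: accumulate per-pixel differences down the overlap region, then backtrack the minimum path. A NumPy sketch:

```python
# Classical DP seam through the overlap of two images (not the authors'
# dual-channel variant): each row adds the cheapest of its three neighbours
# in the row above, then the minimum path is traced back.
import numpy as np

def best_seam(diff):
    """diff: float array (H, W) of per-pixel colour/gradient differences."""
    h, w = diff.shape
    cost = diff.astype(float)
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):        # backtrack upwards
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]                     # seam x-coordinate for each row
```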

  10. An Efficient Secret Key Homomorphic Encryption Used in Image Processing Service

    Directory of Open Access Journals (Sweden)

    Pan Yang

    2017-01-01

    Full Text Available Homomorphic encryption can protect a user's privacy when operating on the user's data in cloud computing, but it is not yet practical for wide use, as the data and service types in cloud computing are diverse. Among these data types, the digital image is an important type of personal data for users, and there are many image processing services in cloud computing. To protect user privacy in these services, this paper proposes a scheme using homomorphic encryption in image processing. Firstly, a secret-key homomorphic encryption (IGHE) was constructed for encrypting images. IGHE can operate on encrypted floating-point numbers efficiently, to suit the image processing service. Then, by translating traditional image processing methods into operations on encrypted pixels, the encrypted image can be processed homomorphically. That is, the service can process the encrypted image directly, and the result after decryption is the same as that of processing the plain image. To illustrate our scheme, three common image processing instances are given in this paper. The experiments show that our scheme is secure, correct, and efficient enough to be used in practical image processing applications.
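
    The IGHE scheme itself is not publicly available; as a stand-in, the additively homomorphic (public-key) Paillier scheme from the python-paillier package illustrates the same principle, namely that an untrusted service can process pixel values it cannot read:

```python
# Homomorphic brightness adjustment on encrypted pixels using Paillier
# (the `phe` package), shown here in place of the paper's secret-key IGHE.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

pixels = [0.12, 0.50, 0.93]                          # normalized pixel values
encrypted = [public_key.encrypt(p) for p in pixels]  # done by the data owner

brightened = [c + 0.05 for c in encrypted]           # done by the untrusted service
result = [private_key.decrypt(c) for c in brightened]
# result ~ [0.17, 0.55, 0.98]: the processing happened under encryption
```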

  11. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  12. Bayesian image processing in two and three dimensions

    International Nuclear Information System (INIS)

    Hart, H.; Liang, Z.

    1986-01-01

    Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different ''focal'' depths. Bayesian image processing (BIP) was carried out for two and three dimensional spatially uncorrelated discrete amplitude a priori source distributions

  13. Morphology and probability in image processing

    International Nuclear Information System (INIS)

    Fabbri, A.G.

    1985-01-01

    The author presents an analysis of some concepts which relate morphological attributes of digital objects to statistically meaningful measures. Some elementary transformations of binary images are described and examples of applications are drawn from the geological and image analysis domains. Some of the morphological models applicable in astronomy are discussed. It is shown that the development of new spatially oriented computers leads to more extensive applications of image processing in the geosciences

  14. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
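
    The clustered-nuclei step can be condensed to a standard distance-transform watershed, sketched below with scikit-image and SciPy (a global Otsu threshold stands in for the paper's local Otsu, and the two-step refinement is omitted):

```python
# Split touching nuclei: threshold, distance transform, marker-based watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_nuclei(gray):
    binary = gray > threshold_otsu(gray)             # global Otsu for brevity
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)  # one label per nucleus
```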

  15. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
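
    Steps (2) to (4) of the pipeline can be sketched with NumPy and SciPy: build the superpixel-by-parameter matrix, form the parameter correlation matrix, and cluster it on correlation distance. The data and cluster count below are illustrative:

```python
# Parameter "signature" discovery in miniature: correlation matrix of the
# image parameters, clustered hierarchically on correlation distance.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
M = rng.normal(size=(500, 10))          # 500 superpixels x 10 image parameters
C = np.corrcoef(M, rowvar=False)        # 10 x 10 parameter correlation matrix

D = 1.0 - C                             # correlation distance
np.fill_diagonal(D, 0.0)                # squareform requires a zero diagonal
Z = linkage(squareform(D, checks=False), method="average")
signatures = fcluster(Z, t=3, criterion="maxclust")  # 3 clusters ~ 3 "signatures"
```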

  16. Opportunities and applications of medical imaging and image processing techniques for nondestructive testing

    International Nuclear Information System (INIS)

    Song, Samuel Moon Ho; Cho, Jung Ho; Son, Sang Rock; Sung, Je Jonng; Ahn, Hyung Keun; Lee, Jeong Soon

    2002-01-01

    Nondestructive testing (NDT) of structures strives to extract all relevant data regarding the state of the structure without altering its form or properties. The success enjoyed by imaging and image processing technologies in the field of modern medicine forecasts similar success of image processing related techniques both in research and practice of NDT. In this paper, we focus on two particular instances of such applications: a modern vision technique for 3-D profile and shape measurement, and ultrasonic imaging with rendering for 3-D visualization. Ultrasonic imaging of 3-D structures for nondestructive evaluation purposes must provide readily recognizable 3-D images with enough details to clearly show various faults that may or may not be present. As a step towards improving conspicuity and thus detection of faults, we propose a pulse-echo ultrasonic imaging technique to generate a 3-D image of the 3-D object under evaluation through strategic scanning and processing of the pulse-echo data. This three-dimensional processing and display improves conspicuity of faults and in addition, provides manipulation capabilities, such as pan and rotation of the 3-D structure. As a second application, we consider an image based three-dimensional shape determination system. The shape, and thus the three-dimensional coordinate information of the 3-D object, is determined solely from captured images of the 3-D object from a prescribed set of viewpoints. The approach is based on the shape from silhouette (SFS) technique and the efficacy of the SFS method is tested using a sample data set. This system may be used to visualize the 3-D object efficiently, or to quickly generate initial CAD data for reverse engineering purposes. The proposed system potentially may be used in three dimensional design applications such as 3-D animation and 3-D games.

  17. Apparatus and method for X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)
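
    The logarithmic step has a simple rationale: by Beer-Lambert, the log of the detected signal is proportional to the attenuation line integral, so subtraction images become independent of the incident intensity. A generic NumPy sketch, not the patented stray-radiation correction itself:

```python
# Log transform of the detector signal before subtraction imaging.
import numpy as np

def log_image(signal, i0):
    # log(I0 / I) approximates the attenuation line integral (Beer-Lambert);
    # the floor avoids log of zero on dead pixels.
    return np.log(i0 / np.maximum(signal, 1e-6))

pre = np.array([[900.0, 400.0], [850.0, 300.0]])   # signal before contrast agent
post = np.array([[880.0, 150.0], [840.0, 110.0]])  # signal after contrast agent
subtraction = log_image(post, 1000.0) - log_image(pre, 1000.0)
```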

  18. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost gives the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat

  19. Analysis of PET hypoxia imaging in the quantitative imaging for personalized cancer medicine program

    International Nuclear Information System (INIS)

    Yeung, Ivan; Driscoll, Brandon; Keller, Harald; Shek, Tina; Jaffray, David; Hedley, David

    2014-01-01

    Quantitative imaging is an important tool in clinical trials testing novel agents and strategies for cancer treatment. The Quantitative Imaging Personalized Cancer Medicine Program (QIPCM) provides clinicians and researchers participating in multi-center clinical trials with a central repository for their imaging data. In addition, a set of tools provides standards of practice (SOP) for end-to-end quality assurance of scanners and image analysis. The four components for data archiving and analysis are the Clinical Trials Patient Database, the Clinical Trials PACS, the data analysis engine(s) and the high-speed networks that connect them. The program provides a suite of software able to perform RECIST, dynamic MRI, CT and PET analysis. The imaging data can be accessed securely from remote sites and analyzed by researchers with these software tools, or with tools provided by the users and installed at the server. Alternatively, QIPCM provides a service for analyzing the imaging data according to developed SOPs. As an example, a clinical study will be discussed in which patients with unresectable pancreatic adenocarcinoma were studied with dynamic PET-FAZA for hypoxia measurement. We successfully quantified the degree of hypoxia as well as tumor perfusion in a group of 20 patients in terms of SUV and hypoxic fraction, and found no correlation between bulk tumor perfusion and hypoxia status in this cohort. QIPCM also provides end-to-end QA testing of scanners used in multi-center clinical trials. Based on quality assurance data from multiple CT-PET scanners, we concluded that quality control of imaging was vital to the success of multi-center trials, as different imaging and reconstruction parameters in PET imaging could lead to very different results in hypoxia imaging. (author)

  20. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a limited spatial resolution of the pixels, based on the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system's pixel size, and the pixel size cannot be made smaller in the hardware, alternative means of resolution enhancement must be considered. Oversampling with multiple displaced images, where the pixels of all images are mapped onto a final resolution-enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines with the number of images taken: the sub-pixel resolution increases, but relative to a real reduction of imaging pixel sizes yielding a full-resolution image, the perceived resolution of the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is to sharpen the images in post-processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of suitable post-processing filters, for digital X-ray images specifically. The results show that spatial-domain filters and frequency-domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (the oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increases by using sharpening filters
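
    The kind of post-processing sharpening evaluated here can be illustrated with an unsharp mask, which subtracts a Gaussian-blurred copy from the oversampled image. The radius and amount below are illustrative, not the paper's settings:

```python
# Unsharp masking of an oversampled image to counter the mapping blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(image, radius=1.5, amount=1.0):
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    return image + amount * (image - blurred)      # boost high frequencies

oversampled = np.random.rand(60, 60)               # stands in for a 6 x 6 mapped image
sharpened = unsharp(oversampled)
```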

  1. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  2. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences.

  3. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards in simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture which is tailored specifically for astronomy, which provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed to allow other astronomers with similar needs to benefit from the present experience. 9 references

  4. The parallel processing system for fast 3D-CT image reconstruction by circular shifting float memory architecture

    International Nuclear Information System (INIS)

    Wang Shi; Kang Kejun; Wang Jingjin

    1996-01-01

    Computerized tomography (CT) is expected to become an indispensable diagnostic technique in the future. However, the long time required to reconstruct an image has been one of the major drawbacks of the technique, and parallel processing is one of the best ways to solve this problem. This paper gives the architecture and the hardware and software design of PIRS-4 (4-processor Parallel Image Reconstruction System), a parallel processing system for fast 3D-CT image reconstruction based on a circular shifting float-memory architecture. It covers the structure and components of the system, the design of the crossbar switch and details of the control model, the description of RPBP image reconstruction, the choice of operating system and language, the principle of emulating EMS, direct memory read/write of floats, and programming in protected mode. Finally, the test results are given

  5. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    Energy Technology Data Exchange (ETDEWEB)

    Devès, G.; Daudin, L. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V. [Univ. Bordeaux, F-33170 Gradignan (France); Michelet, C.; Seznec, H.; Barberet, P. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France)

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, secondary electron imaging associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a numerous set of methods generates an important amount of data per experiment. Typically for each acquisition the following data has to be processed: chemical map for each element present with a concentration above the detection limit, density and backscattered maps, mean and local spectra corresponding to relevant region of interest such as whole cell, intracellular compartment, or nanoparticles. These operations are time consuming, repetitive and as such could be source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile software for image processing that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mas OS X and Windows in both 32-bits and 64-bits modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.

  6. Process Evaluation of a Workers' Health Surveillance Program for Meat Processing Workers

    NARCIS (Netherlands)

    van Holland, Berry J; Brouwer, Sandra; de Boer, Michiel R; Reneman, Michiel F; Soer, Remko

    2017-01-01

    Objective To evaluate the implementation process of a workers' health surveillance (WHS) program in a Dutch meat processing company. Methods Workers from five plants were eligible to participate in the WHS program. The program consisted of four evaluative components and an intervention component.

  7. Process Evaluation of a Workers' Health Surveillance Program for Meat Processing Workers

    NARCIS (Netherlands)

    van Holland, Berry; Brouwer, Sandra; de Boer, Michiel R; Reneman, Michiel F; Soer, Remko

    Objective To evaluate the implementation process of a workers' health surveillance (WHS) program in a Dutch meat processing company. Methods Workers from five plants were eligible to participate in the WHS program. The program consisted of four evaluative components and an intervention component.

  8. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  9. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    An architecture for a high-speed image processing system corresponding to a new shape-understanding algorithm is proposed, and a hardware system based on that architecture was developed. The main design considerations are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  10. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove shadow disturbances and enhance the robustness of computer vision image processing, this paper studies the detection and removal of image shadows. It examines shadow removal algorithms based on integration, the illumination surface and texture, introduces their working principles and implementation methods, and shows through tests that shadows can be processed effectively.

  11. Implementation of quality assurance program of the radiographic image at public hospitals of Aracaju-SE, Brazil

    International Nuclear Information System (INIS)

    Ferreira, C.C.; Souza, S.O.

    2007-01-01

    The initial conditions of the darkrooms and X-ray film processing of two public hospitals, called hospitals A and B, in the city of Aracaju-SE were evaluated. This evaluation was performed to define the actions for a Quality Assurance Program (QAP) aiming to improve radiographic image quality and reduce costs for both hospitals. The evaluation of the initial conditions of hospital B showed that a large reduction in the film rejection index and in radiographic image artifacts can be obtained once the QAP actions are accomplished. In hospital A, the actions deployed by the QAP resulted in a reduction of about 50% in the costs associated with chemical product consumption and improved radiographic image quality, demonstrated by a decrease in the radiograph rejection index from 7% to 5%. The results also revealed that the presence of a medical physicist able to follow the QAP is essential because, without one, the program is generally neglected. (author)

  12. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    generated according to the REDUCE function, defined as g_k(i, j) = REDUCE(g_{k-1}) ... (Intel user's guide 1985). The application program is ... As the information content in the reduced image of size 5 × 5 is negligible, the 5 × 5 image case is not ...
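
    The REDUCE operation, as commonly defined for Gaussian pyramids (Burt and Adelson), blurs with a separable 5-tap kernel and then drops every other row and column; a NumPy/SciPy sketch consistent with the recurrence above:

```python
# Gaussian pyramid REDUCE: 5-tap blur followed by 2x downsampling.
import numpy as np
from scipy.ndimage import convolve

W = np.array([0.05, 0.25, 0.4, 0.25, 0.05])        # standard generating kernel
KERNEL = np.outer(W, W)

def reduce_level(g):
    return convolve(g.astype(float), KERNEL, mode="nearest")[::2, ::2]

g0 = np.random.rand(64, 64)
g1 = reduce_level(g0)                               # 32 x 32
g2 = reduce_level(g1)                               # 16 x 16
```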

  13. Digital image processing for radiography in nuclear power plants

    International Nuclear Information System (INIS)

    Heidt, H.; Rose, P.; Raabe, P.; Daum, W.

    1985-01-01

    With the help of digital processing of radiographic images of reactor components it is possible to increase the security and objectivity of the evaluation. Several examples of image processing procedures (contrast enhancement, density profiles, shading correction, digital filtering, superposition of images, etc.) show the advantages for the visualization and evaluation of radiographs. Digital image processing can reduce some of the restrictions of radiography in nuclear power plants. In addition, a higher degree of automation can save costs and increase the quality of radiographic evaluation. The aim of the work performed was to improve the readability of radiographs for the human observer. The main problem is a lack of contrast and the presence of disturbing structures such as weld seams. Digital image processing of film radiographs starts with the digitization of the image. Conventional systems use TV cameras or scanners and provide a dynamic range of 1.5 to 3 density units, which is digitized to 256 grey levels. For the enhancement process it is necessary that the grey-level range cover the density range of the important regions of the presented film. On the other hand, the grey-level coverage should not be wider than necessary, to minimize the width of the digitization steps. Poor digitization makes flaws and cracks invisible and spoils all further image processing
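
    The contrast limitation described above is usually attacked with a window/level stretch: the narrow grey-level band containing the region of interest is remapped to the full display range. A NumPy sketch with illustrative window settings:

```python
# Window/level contrast stretch on a film digitized to 256 grey levels.
import numpy as np

def window_level(img, low, high):
    out = (img.astype(float) - low) / (high - low)  # map [low, high] -> [0, 1]
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

radiograph = np.random.randint(90, 140, (512, 512), dtype=np.uint8)
stretched = window_level(radiograph, 90, 140)       # stretch the band of interest
```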

  14. Implementation of a program of quality assurance of image in an imaging system of flat panel portal

    International Nuclear Information System (INIS)

    Gomez Barrado, A.; Sanchez Jimenez, E.; Benitez, J. A.; Sanchez-Reyes, A.

    2013-01-01

    Image-guided radiation therapy (IGRT) is radiation therapy in which images are used to locate the treatment area. Modern irradiation systems are equipped with different modalities for obtaining images, such as flat-panel systems, cone-beam systems, tomographic imaging, etc. This paper describes the start-up of, and the experience with, a quality assurance program based on a flat-panel portal imaging system. (Author)

  15. IDAPS (Image Data Automated Processing System) System Description

    Science.gov (United States)

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  16. Defects quantization in industrial radiographs by image processing

    International Nuclear Information System (INIS)

    Briand, F.Y.; Brillault, B.; Philipp, S.

    1988-01-01

    This paper concerns the industrial application of image processing to nondestructive testing by radiography. The various problems involved in designing a digital tool are described. This tool is intended to help radiography experts quantify defects and follow up their evolution using numerical techniques. The sequence of processing steps that achieves defect segmentation and quantization is detailed; it is based on thorough knowledge of radiograph formation techniques. The process uses various methods of image analysis, including textural analysis and mathematical morphology. The interface between the final product and its users will use an explicit language, employing the terms of radiographic expertise without exposing any processing details. The problem is thoroughly described: image formation, digitization, processing fitted to flaw morphology and, finally, the structure of the product in progress. 12 refs [fr]

  17. ENDF/B-V processing programs

    International Nuclear Information System (INIS)

    DayDay, N.

    1980-07-01

    A description and operating instructions are supplied for the following ENDF/B-V Processing Programs: CHECKER, CRECT, STNDRD, FIZCON, PSYCHE, RESEND, INTER, INTEND, SUMRIZ, PLOTEF, LSTFCV, RIGEL. These programs can be obtained on magnetic tape, free of charge, from the IAEA Nuclear Data Section. (author)

  18. Software Engineering Program: Software Process Improvement Guidebook

    Science.gov (United States)

    1996-01-01

    The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.

  19. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available In production processes the use of image processing systems is widespread, and hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process

  20. Examining the Perceptions of Brand Images Regarding Competing MBA Programs

    Science.gov (United States)

    Hinds, Timothee; Falgoust, Dexter; Thomas, Kerry, Jr.; Budden, Michael C.

    2010-01-01

    In today's economic environment, it is crucial to create a strong, consistent brand image within a graduate business program. This study examines the perceptions that students at Southeastern Louisiana University hold about its MBA program and the MBA programs of its main competitors. A focus group was conducted to identify competitors and factors…

  1. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    Science.gov (United States)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  2. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  3. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  4. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  5. Image processing for medical diagnosis of human organs

    International Nuclear Information System (INIS)

    Tamura, Shin-ichi

    1989-01-01

    The report first describes expectations and needs for diagnostic imaging in the field of clinical medicine, radiation medicine in particular, viewed by the author as an image processing expert working at a medical institute. Then, medical image processing techniques are discussed in relation to advanced information processing techniques that are currently drawing much attention in the field of engineering. Finally, discussion is also made of practical applications of image processing techniques to diagnosis. In the field of clinical diagnosis, advanced equipment such as PACS (picture archiving and communication system) has come into wider use, and efforts have been made to shift from visual examination to more quantitative and objective diagnosis by means of such advanced systems. In clinical medicine, practical, robust systems are more useful than sophisticated ones. It is difficult, though important, to develop completely automatized diagnostic systems. The urgent, realistic goal, therefore, is to develop effective diagnosis support systems. In particular, operation support systems equipped with three-dimensional displays will be very useful. (N.K.)

  6. The Importance of Corporate Image in the Marketing of University Postgraduate Programs

    Directory of Open Access Journals (Sweden)

    Zainudin Awang

    2010-06-01

    Full Text Available The increase in the number of universities offering higher education has surpassed the increase in demand for higher education. Hence, the competition for potential customers, especially postgraduates, is very stiff. When the characteristics of service offerings have become uniform, universities should be looking for their own competitive edge in the market to differentiate themselves from their competitors. This study attempts to investigate the influence of the corporate image of a university on the willingness of its outgoing undergraduates to continue postgraduate studies at the university (market retention) and to recommend that their friends, families, and employers engage with the university (positive word of mouth). Data was collected randomly from 872 outgoing undergraduates using self-administered questionnaires. Factor analysis of corporate image items extracted three distinct dimensions. The study renamed these dimensions as the image of the university, the image of its academic programs and the external recognition received by the university. As for the students’ loyalty, the factor analysis extracted two distinct dimensions. The study renamed these dimensions as market retention and positive recommendation. The study found that the corporate image of a university had a significant and direct effect on students’ intention to enroll in postgraduate programs at the university. Among the influential corporate image dimensions are the image of the university and the image of its academic programs. The corporate image of a university provides a significant influence on positive word of mouth and recommendations by outgoing undergraduates to potential postgraduate customers to register with the university. The results have significant implications for the long-term growth and survival of postgraduate programs.

  7. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements. Catalogue identifier: AFBT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 913552 No. of bytes in distributed program, including test data, etc.: 270876249 Distribution format: tar.gz Programming language: CUDA/C, MATLAB. Computer: Intel x64 CPU, GPU supporting CUDA technology. Operating system: 64-bit Windows 7 Professional. Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized. RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes Classification: 6.5, 18. Nature of problem: Speed-up of data processing in optical coherence microscopy Solution method: Utilization of GPU for massively parallel data processing Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data) Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU implementation)

  8. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  9. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measurements of flow patterns occurring in natural circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tapes. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behaviors of various eddies, with low velocity and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, which were obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
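
    In outline, the centroid-and-velocity step described above reduces to blob labelling plus nearest-neighbour matching between consecutive frames. A minimal Python/NumPy sketch follows (an illustrative stand-in for the paper's large-computer implementation; the intensity threshold and the nearest-neighbour pairing rule are assumptions of this sketch):

        import numpy as np
        from scipy import ndimage

        def particle_centroids(frame, threshold):
            """Centroids (row, col) of bright particle blobs in one video frame."""
            labels, n = ndimage.label(frame > threshold)
            return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

        def velocity_vectors(frame_a, frame_b, dt, threshold=128):
            """Pair each centroid with its nearest neighbour in the next frame."""
            ca = particle_centroids(frame_a, threshold)
            cb = particle_centroids(frame_b, threshold)
            vectors = []
            for p in ca:
                q = cb[np.argmin(np.linalg.norm(cb - p, axis=1))]
                vectors.append((p, (q - p) / dt))   # (position, velocity in px/s)
            return vectors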

  10. Image processing for HTS SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-01-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air, even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques.
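
    Two of the corrections described, notching interference peaks out of each line's spectrum and equalizing per-line mean field values, reduce to a few lines of array code. A minimal sketch in Python/NumPy (the rejected bin indices in the usage comment are illustrative assumptions, not the instrument's actual interference frequencies):

        import numpy as np

        def remove_line_interference(scan_line, reject_bins):
            """Zero the FFT bins carrying power-line pickup, then invert."""
            spec = np.fft.rfft(scan_line)
            spec[list(reject_bins)] = 0.0
            return np.fft.irfft(spec, n=scan_line.size)

        def remove_line_drift(image):
            """Shift each scan line so all lines share the global mean field."""
            return image - image.mean(axis=1, keepdims=True) + image.mean()

        # usage: cleaned = remove_line_drift(np.apply_along_axis(
        #     remove_line_interference, 1, image, [50, 100]))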

  11. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  12. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

    Information on medical images can be classified into two patterns: 1) off-line images on film (X-ray films, cell images, chromosome images, etc.); 2) on-line images detected through sensors (RI images, ultrasonic images, thermograms, etc.). These images are further divided into three characteristic types: two-dimensional, three-dimensional and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many kinds of images have been studied: RI, thermogram, X-ray film, X-ray TV image, cancer cell, blood cell, bacteria, chromosome, ultrasonic and vascular images. Processing of RI images is useful and easy because of their digital displays. Software exists for smoothing, restoration (iterative approximation), Fourier transformation, differentiation and subtraction. Images on stomach and chest X-ray films have been processed automatically utilizing computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells and, recently, for blood cell classification have also been developed. Acoustical holography imaging and moire topography have also been studied in Japan. (author)

  13. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

    In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to the anomalies present in the parking spaces, such as uneven illumination, distorted slot lines and overlapping cars, present-day conventional algorithms have difficulties processing the image for accurate results. The algorithm proposed uses a combination of image pre-processing and false contour detection ...

  14. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without any care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved to be satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  15. Advanced Pediatric Brain Imaging Research Program

    Science.gov (United States)

    2016-10-01

    Pediatric magnetic resonance imaging (MRI) techniques are revolutionizing our understanding of brain injury, its potential for recovery, and ... Subject terms: training program, advanced MRI, brain injury. The program site is located at www.MilitaryMedED.com and can be accessed from any device web browser (personal computer, tablet or phone) and operating system.

  16. Auditing fetal nasal bone images in the first trimester of pregnancy: results from a peer review program.

    Science.gov (United States)

    Palermo, Fernanda Gasparin; Albuquerque, Débora de Paula Soares de Medeiros; Martins, Wellington P; Araujo Júnior, Edward; Bruns, Rafael Frederico

    2016-09-01

    To establish a structured review process to facilitate the identification of the fetal nasal bone (NB) in the first trimester ultrasound scan and to improve image quality. We conducted a retrospective observational study of fetal NB images obtained during ultrasound exams of singleton pregnancies that underwent first trimester screening (crown-rump length 45-84 mm). When the images were obtained, the examiner was not aware of the study. The audit was conducted by an examiner according to criteria established by the Fetal Medicine Foundation. Fetal NB images were assessed regarding adequate magnification, mid-sagittal view and transducer held parallel to the direction of the nose. The transvaginal and transabdominal as well as anterior and posterior fetal back groups were compared using the χ² test. We considered 874 fetal NB images for auditing. Fetal NB was considered present in 865 images (99%). During the audit process, we identified 72 (8.2%) cases of disagreement between examiner and auditor assessments. Disagreement was higher when image quality was poor (62 cases = 7%). The transvaginal approach performed better in the criterion of adequate magnification. An audit program for fetal NB is feasible in a clinical scenario. Image quality appears to play an important role in compliance with the audited image standards and in agreement between examiner and auditor.

  17. Profiling and sorting Mangifera Indica morphology for quality attributes and grade standards using integrated image processing algorithms

    Science.gov (United States)

    Balbin, Jessie R.; Fausto, Janette C.; Janabajab, John Michael M.; Malicdem, Daryl James L.; Marcelo, Reginald N.; Santos, Jan Jeffrey Z.

    2017-06-01

    Mango production is highly vital in the Philippines. It is very essential in the food industry as it is used in markets and restaurants daily. The quality of mangoes can affect the income of a mango farmer; an incorrect time of harvesting will result in the loss of quality mangoes and income. Scientific farming, together with new instruments, is much needed nowadays because the wastage of mangoes increases annually due to poor quality. This research paper focuses on the profiling and sorting of Mangifera Indica using image processing techniques and pattern recognition. The image of a mango is captured on a weekly basis from its early stage. In this study, the researchers monitor the growth and color transition of a mango for profiling purposes. Actual dimensions of the mango are determined through image conversion and determination of pixel and RGB values in MATLAB. A program is developed to determine the range of the maximum size of a standard ripe mango. Hue, light, saturation (HSL) correction is used in the filtering process to assure the exactness of the RGB values of a mango subject. By pattern recognition techniques, the program can determine whether a mango is standard and ready to be exported.
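
    The size-and-color profiling step can be illustrated compactly. The sketch below is in Python rather than the paper's MATLAB, and the mm-per-pixel scale, the hue threshold and the pre-segmented input (background zeroed) are all assumptions of this illustration:

        import numpy as np
        import colorsys

        def mango_profile(rgb, mm_per_px, ripe_hue=0.25):
            """Rough size/ripeness profile of a segmented mango image.

            `rgb` is an (H, W, 3) float array in [0, 1] with background zeroed;
            the scale and hue threshold are illustrative, not the paper's values."""
            mask = rgb.sum(axis=2) > 0                  # fruit pixels only
            area_mm2 = mask.sum() * mm_per_px ** 2      # pixel count -> physical area
            hues = [colorsys.rgb_to_hls(*px)[0] for px in rgb[mask]]
            ripe = np.mean(hues) < ripe_hue             # hue drifts green -> yellow
            return area_mm2, ripe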

  18. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software shows a more or less visible mismatch in image quality between the two. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method, both in laboratory conditions and in the real picture environment, in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
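
    Under a Gaussian-PSF approximation, the blur width follows from a single point on the measured MTF. A minimal sketch (Python/SciPy; using the MTF50 frequency as that point is an assumption of this illustration, not necessarily the authors' exact procedure):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def match_rendered_to_camera(rendered, mtf50):
            """Blur a rendered (greyscale) image with a Gaussian PSF whose width
            reproduces the camera's measured MTF50 (in cycles/pixel).

            For a Gaussian PSF, MTF(f) = exp(-2 * (pi * sigma * f)**2);
            solving MTF(mtf50) = 0.5 for sigma gives the expression below."""
            sigma = np.sqrt(np.log(2.0) / 2.0) / (np.pi * mtf50)
            return gaussian_filter(rendered, sigma)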

  19. Processing of space images and geologic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Yudin, V S

    1981-01-01

    Using data for standard sections, a correlation was established between natural formations, in their geologic/geophysical dimensions, and the form they take in the imagery. With computer processing, important data can be derived from the image. Use of the above correlations has made it possible to draw up a number of preliminary classifications of tectonic structures and to determine certain ongoing processes in a given section. The derived data may be used in the search for useful minerals.

  20. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang, Chengjun; Hua, Chunjian

    2013-01-01

    In the perspective of actual production, the paper presents advances in the application of image processing to fruit grading from several aspects, such as the processing precision and processing speed of image processing technology. Furthermore, combining the different algorithms for detecting size, shape, color and defects effectively, so as to reduce the complexity of each algorithm and achieve a balance between processing precision and processing speed, is key to au...

  1. A computer program for planimetric analysis of digitized images

    DEFF Research Database (Denmark)

    Lynnerup, N; Lynnerup, O; Homøe, P

    1992-01-01

    bones as seen on X-rays. By placing the X-rays on a digitizer tablet and tracing the outline of the cell system, the area was calculated by the program. The calculated data and traced images could be stored and printed. The program is written in BASIC; necessary hardware is an IBM-compatible personal...

  2. Generation of a command language for nuclear signal and image processing on the basis of a general interactive system

    International Nuclear Information System (INIS)

    Pretschner, D.P.; Pfeiffer, G.; Deutsches Elektronen-Synchrotron

    1981-01-01

    In the field of nuclear medicine, BASIC and FORTRAN are currently favoured as higher-level programming languages for computer-aided signal processing, and most operating systems of so-called "freely programmable analyzers" in nuclear medicine departments have compilers for this purpose. However, FORTRAN is not an interactive language and thus not suited for conversational computing as a man-machine interface. BASIC, on the other hand, although a useful starting language for beginners, is not sufficiently sophisticated for complex nuclear medicine problems involving detailed calculations. Integration of new methods of signal acquisition, processing and presentation into an existing system, or generation of new systems, is difficult in FORTRAN, BASIC or ASSEMBLER and can only be done by system specialists, not by nuclear physicians. This problem may be solved by suitable interactive systems that are easy to learn, flexible, transparent and user-friendly. An interactive system of this type, XDS, was developed in the course of a project on the evaluation of radiological image sequences. An XDS-generated command processing system for signal and image processing in nuclear medicine is described. The system is characterized by interactive program development and execution, problem-relevant data types, a flexible procedure concept and an integrated system implementation language for modern image processing algorithms. The advantages of the interactive system are illustrated by an example of diagnosis by nuclear methods. (orig.)

  3. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for mapping a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is to use dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared.

  4. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement, discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained through more specialized equipment: mainframe computers and high-precision mechanical scanning digitisers. The procedures are intended as a preliminary stage for automatic defect detection.
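
    The first two operations named, histogram modification and edge enhancement, are standard and easy to sketch. A minimal illustration in Python/NumPy (not the authors' microcomputer code):

        import numpy as np
        from scipy import ndimage

        def equalize(img):
            """Global histogram equalization for an 8-bit radiograph (uint8 input)."""
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = hist.cumsum() / img.size            # normalized cumulative histogram
            return (cdf[img] * 255).astype(np.uint8)  # remap grey levels

        def edge_map(img):
            """Sobel gradient magnitude to highlight discontinuities."""
            gx = ndimage.sobel(img.astype(float), axis=1)
            gy = ndimage.sobel(img.astype(float), axis=0)
            return np.hypot(gx, gy)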

  5. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum)-like image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the image. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI, which focus on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as a multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of secure and efficient image and video processing applications on quantum computers.
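
    For tiny images, the FRQI state itself is easy to simulate classically. A sketch in Python/NumPy, using the standard grey-to-angle mapping theta = g*pi/2; the interleaved layout of the amplitude vector (colour qubit varying fastest) is a convention chosen for this illustration:

        import numpy as np

        def frqi_state(gray):
            """Amplitude vector of the FRQI encoding of a 2^n x 2^n grey image.

            Each pixel i with grey value g in [0, 1] is mapped to an angle
            theta_i = g * pi / 2, contributing cos(theta)|0> and sin(theta)|1>
            colour amplitudes entangled with the position ket |i>."""
            theta = gray.ravel().astype(float) * np.pi / 2
            n_pos = theta.size                       # number of position basis states
            amp = np.empty(2 * n_pos)
            amp[0::2] = np.cos(theta) / np.sqrt(n_pos)   # colour qubit |0> part
            amp[1::2] = np.sin(theta) / np.sqrt(n_pos)   # colour qubit |1> part
            return amp                                   # unit-norm state vector

        # sanity check: np.linalg.norm(frqi_state(np.eye(2))) == 1.0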

  6. An invertebrate embryologist's guide to routine processing of confocal images.

    Science.gov (United States)

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
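
    Two of the routine tasks listed, background suppression and false-colour channel merging, amount to an offset subtraction and a channel-stacking step. A minimal Python/NumPy sketch (the uniform scalar background is an assumption; real workflows often estimate the background per channel):

        import numpy as np

        def merge_channels(ch_green, ch_red, background=0.0):
            """False-colour merge of two confocal channels with background suppression.

            Channels are float arrays scaled to [0, 1]; the same offset must be
            applied to every image being compared, per the guidelines above."""
            rgb = np.zeros(ch_green.shape + (3,))
            rgb[..., 0] = np.clip(ch_red - background, 0, 1)
            rgb[..., 1] = np.clip(ch_green - background, 0, 1)
            return rgb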

  7. Development of X-ray radiography examination technology by image processing method

    Energy Technology Data Exchange (ETDEWEB)

    Min, Duck Kee; Koo, Dae Seo; Kim, Eun Ka [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    Because the dimensions of nuclear fuel rods can be measured rapidly and accurately by X-ray radiography examination, an image processing system composed of a 979 CCD-L camera, an image processing card and fluorescent lighting was set up, enabling the required image processing to be performed. The X-ray radiography examination technology, which enables the dimensions of nuclear fuel rods to be measured, was developed by this image processing method. Dimension measurement of a standard fuel rod by the image processing method showed a 2% lower relative measuring error than X-ray radiography film, and was better by 100 ~ 200 μm in measuring accuracy. (author). 9 refs., 22 figs., 3 tabs.

  8. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

    Image processing techniques, including pattern recognition techniques, play important roles in high-precision diagnosis and radiation therapy. The author reviews a symposium on medical image information, which was held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics from September 23rd to 25th. In this symposium, we had three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field and the talks of the three invited speakers. (author)

  9. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented.
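
    A typical first pre-processing step on such data is a band-wise normalization of the hyperspectral cube. The fragment below is a Python/NumPy stand-in for the paper's Matlab code; the (H, W, bands) layout and min-max scaling are assumptions of this sketch:

        import numpy as np

        def normalize_cube(cube):
            """Band-wise min-max normalization of a hyperspectral cube (H, W, bands).

            Bands with no dynamic range (e.g. dead detector rows) are left at zero."""
            lo = cube.min(axis=(0, 1), keepdims=True)
            hi = cube.max(axis=(0, 1), keepdims=True)
            span = np.where(hi > lo, hi - lo, 1.0)    # avoid division by zero
            return np.where(hi > lo, (cube - lo) / span, 0.0)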

  10. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos: if taken with standard equipment and without the benefit of enhancement software, they would be inconclusive, thus allowing guilty parties to go free due to lack of evidence.

  11. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of image processing performance measures, presenting the use of such a measure as an actual value in a feedback structure. The idea behind this is that the control loop built in that way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented using the example of optical character recognition in an industrial application. Metrics for quantifying performance at different image processing levels are discussed. The issues that those metrics should address from both the image processing and the control point of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.
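
    The control-loop idea can be illustrated with a toy proportional controller that tunes a binarization threshold until a recognition score reaches its set point. Everything here (the scoring callback, gain and stopping rule) is a hypothetical sketch, not the authors' controller:

        import numpy as np

        def tune_threshold(image, score_fn, set_point=0.95,
                           gain=40.0, tol=0.01, max_steps=50):
            """Proportional control of a binarization threshold.

            `score_fn` maps a binary image to a performance measure in [0, 1]
            (e.g. an OCR confidence); the loop drives that actual value toward
            the set point. The sign and size of `gain` are application-dependent."""
            threshold = float(image.mean())           # initial operating point
            for _ in range(max_steps):
                error = set_point - score_fn(image > threshold)
                if abs(error) < tol:
                    break
                threshold += gain * error             # proportional correction
            return threshold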

  12. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  13. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory, with added communication based on IPC techniques, is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
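
    The shared-memory hand-off can be sketched with Python stand-ins for the two roles (the segment name, image size and in-place processing step are illustrative; the actual OsiriX/MeVisLab protocol is more elaborate):

        import numpy as np
        from multiprocessing import shared_memory

        # "Workstation" side: publish one image under an agreed-upon name.
        img = np.zeros((512, 512), dtype=np.int16)        # e.g. one CT slice
        shm = shared_memory.SharedMemory(create=True, size=img.nbytes,
                                         name="pacs_slice")
        np.ndarray(img.shape, img.dtype, buffer=shm.buf)[:] = img

        # "Processing server" side: attach by name, process in place, detach.
        peer = shared_memory.SharedMemory(name="pacs_slice")
        view = np.ndarray((512, 512), np.int16, buffer=peer.buf)
        view += 100                                       # stand-in for real work
        peer.close()

        shm.close()
        shm.unlink()                                      # workstation cleans up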

  14. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Erwin, Peter [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München (Germany)

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
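
    The Poisson-based fitting approach the paper advocates can be illustrated with a toy SciPy analogue of what IMFIT does (this is not IMFIT's code; the Gaussian-plus-sky model, image size and starting values are assumptions of this sketch):

        import numpy as np
        from scipy.optimize import minimize

        def poisson_nll(params, data, yy, xx):
            """Poisson negative log-likelihood of a 2-D Gaussian + flat sky model."""
            amp, x0, y0, sigma, sky = params
            model = sky + amp * np.exp(-((xx - x0)**2 + (yy - y0)**2)
                                       / (2.0 * sigma**2))
            model = np.clip(model, 1e-9, None)        # keep the log well-defined
            return np.sum(model - data * np.log(model))

        yy, xx = np.mgrid[:64, :64]
        truth = 5 + 50 * np.exp(-((xx - 32.0)**2 + (yy - 30.0)**2) / (2 * 3.0**2))
        data = np.random.default_rng(0).poisson(truth)
        fit = minimize(poisson_nll, x0=[40, 30, 28, 4, 3],
                       args=(data, yy, xx), method="Nelder-Mead")
        print(fit.x)                                  # amp, x0, y0, sigma, sky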

  15. A gamma camera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

    A microcomputer-based gamma camera image processing system has been introduced. Compared with other systems, the feature of this system is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller. Thus the picture processing has been sped up and the performance-to-cost ratio of the system raised.

  16. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
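
    Both the lateral-inhibition model and its photographic analogue mentioned above are one-liners over Gaussian blurs; a short Python/SciPy illustration (parameter values are arbitrary):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def difference_of_gaussians(img, sigma_center=1.0, sigma_surround=3.0):
            """Centre-surround ('Mexican hat') filtering: subtract two blurs."""
            return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

        def unsharp_mask(img, sigma=3.0, amount=1.0):
            """Photographic unsharp masking, the closely related darkroom analogue."""
            return img + amount * (img - gaussian_filter(img, sigma))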

  17. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  18. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The sensory processing system is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described, for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  19. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-01-01

    AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain whether automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development is described and the image processing approach developed for the proof-of-concept study is demonstrated. A development update and plans for future enhancements are also presented.
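
    The pixel-counting step being automated can be sketched as a threshold plus connected-component labelling. A Python/SciPy illustration (the threshold and minimum blob size are illustrative parameters, not FM&T's implementation; in practice they would be calibrated against the manual point-count procedure):

        import numpy as np
        from scipy import ndimage

        def void_statistics(gray, void_thresh=60, min_px=4):
            """Count and size the dark void regions in a grey-level core image."""
            dark = gray < void_thresh                 # candidate void pixels
            labels, n = ndimage.label(dark)           # connected void blobs
            sizes = ndimage.sum(dark, labels, range(1, n + 1))
            sizes = sizes[sizes >= min_px]            # drop single-pixel noise
            return len(sizes), sizes.sum() / gray.size    # count, area fraction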

  20. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...

  1. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

    The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Recently, two new concepts of real-time hyperspectral image processing have been introduced: Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI). Both of these can be used to design algorithms and also form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but generally not encountered in multispectral imaging. This book is written to particularly address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer, 2016) can be considered its companion volume. Includes preliminary background which is essential to those who work in hyperspectral ima...

  2. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool from DNA
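
    The lane/band segmentation steps can be caricatured with intensity-profile peak finding. A Python/NumPy sketch (GELect's actual pipeline is more robust; the peak spacings, strip width and prominence factor here are assumptions):

        import numpy as np
        from scipy.signal import find_peaks

        def lanes_and_bands(gel, lane_dist=30, band_dist=5):
            """Locate lane centres and band positions from intensity profiles.

            `gel` is a grey image with dark bands on a light background;
            parameters would be tuned per gel format in practice."""
            inv = gel.max() - gel.astype(float)       # make lanes/bands bright
            lane_profile = inv.sum(axis=0)            # column sums -> lane positions
            lanes, _ = find_peaks(lane_profile, distance=lane_dist)
            bands = {}
            for x in lanes:
                strip = inv[:, max(0, x - 5):x + 5].sum(axis=1)
                bands[x], _ = find_peaks(strip, distance=band_dist,
                                         prominence=0.1 * strip.max())
            return lanes, bands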

  4. Programmed Multi-Image Lectures for College Biology Instruction.

    Science.gov (United States)

    Jensen, William A.; Knauft, Robert L.

    1977-01-01

    Discusses the use of a programmed multi-image lecture approach for teaching a botany course to nonmajor students at the University of California, Berkeley. Also considers the advantages, production, method of presentation, and design of the multimedia lectures. (HM)

  5. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

    A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose. The images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two simulated nodules each were positioned over the lungs and the mediastinum of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs. The time spent viewing the full-size radiographs and the digital images was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant differences in the detectability of the simulated nodules were seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization and image processing were available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung. No difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab

  6. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps have been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing are described.
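
    The false-color assignment mentioned is essentially a per-pixel lookup table. A minimal Python/NumPy sketch (the temperature thresholds and palette are illustrative, not the Forest Service's):

        import numpy as np

        def false_color(ir_frame, thresholds=(80, 140, 200)):
            """Map an 8-bit IR frame to a 4-step false-colour display image."""
            palette = np.array([[0, 0, 128],      # cool background: dark blue
                                [0, 160, 0],      # warm vegetation: green
                                [255, 200, 0],    # hot edge: yellow
                                [255, 0, 0]],     # active fire: red
                               dtype=np.uint8)
            bins = np.digitize(ir_frame, thresholds)   # class 0..3 per pixel
            return palette[bins]                       # (H, W, 3) RGB image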

  7. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    Science.gov (United States)

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software-as-a-service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions (such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing) were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven valuable in applications in the regional healthcare system and could have a promising future in healthcare systems worldwide.

  8. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Imaging diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame grabber and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced by diagnostic cameras. The Camera Link frame grabber (FlexRIO solution from National Instruments) includes a field-programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with a system architecture similar to that of the ITER Fast Controllers, and provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers

  9. A theory-informed, process-oriented Resident Scholarship Program.

    Science.gov (United States)

    Thammasitboon, Satid; Darby, John B; Hair, Amy B; Rose, Karen M; Ward, Mark A; Turner, Teri L; Balmer, Dorene F

    2016-01-01

    The Accreditation Council for Graduate Medical Education requires residency programs to provide curricula for residents to engage in scholarly activities but does not specify particular guidelines for instruction. We propose a Resident Scholarship Program that is framed by self-determination theory (SDT) and emphasizes the process of scholarly activity over the scholarly product. The authors report on their longitudinal Resident Scholarship Program, which aimed to support psychological needs central to SDT: autonomy, competence, and relatedness. By addressing those needs in program aims and program components, the program may foster residents' intrinsic motivation to learn and to engage in scholarly activity. To this end, residents' engagement in scholarly processes and changes in perceived autonomy, competence, and relatedness were assessed. Residents engaged in a range of scholarly projects and expressed positive regard for the program. Compared to before residency, residents felt more confident in the process of scholarly activity, as indicated by increases in perceived autonomy, competence, and relatedness. Scholarly products were accomplished in return for a focus on scholarly process. Based on our experience, and in line with SDT, supporting residents' autonomy, competence, and relatedness through a process-oriented scholarship program may foster the curiosity, inquisitiveness, and internal motivation to learn that drive scholarly activity and, ultimately, the production of scholarly products.

  10. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    The article shows an implementation of the grayscale-morphology vHGW (van Herk/Gil-Werman) algorithm for selecting borders in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images of different resolutions and different sizes of the structuring element.
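
    For reference, the heart of the vHGW scheme is a 1-D running-minimum erosion whose cost per pixel is independent of the structuring-element size. The sketch below is an illustrative serial Python version, not the paper's OpenMP/CUDA code; the row and column passes are naturally data-parallel, which is what such parallelizations exploit.

        import numpy as np

        def vhgw_erode_1d(f, p):
            """Grayscale erosion with a flat window of odd size p.
            Runs in O(n) independent of p (van Herk / Gil-Werman)."""
            assert p % 2 == 1
            r = p // 2
            n = len(f)
            # Pad with +inf so borders are neutral for the minimum.
            fp = np.concatenate([np.full(r, np.inf),
                                 np.asarray(f, float),
                                 np.full(r, np.inf)])
            m = len(fp)
            g = fp.copy()                      # running min within blocks, left to right
            for x in range(1, m):
                if x % p != 0:                 # g restarts at every block start
                    g[x] = min(g[x], g[x - 1])
            h = fp.copy()                      # running min within blocks, right to left
            for x in range(m - 2, -1, -1):
                if x % p != p - 1:             # h restarts at every block end
                    h[x] = min(h[x], h[x + 1])
            # The window centred on original index i spans padded [i, i + 2r],
            # which touches at most two blocks, so two lookups suffice.
            return np.array([min(h[i], g[i + 2 * r]) for i in range(n)])

    A 2-D erosion with a rectangular structuring element follows by applying the routine along rows and then columns; dilation is the same with max in place of min.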

  11. Spatially assisted down-track median filter for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N Reginald

    2014-10-07

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signals to suppress certain undesirable effects. It then generates synthetic aperture radar images from real aperture radar images formed from the pre-processed return signals. The imaging and detection system then post-processes the synthetic aperture radar images to improve the detection of subsurface objects, and identifies peaks in the energy levels of the post-processed image frames, which indicate the presence of a subsurface object.
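
    The patent's "spatially assisted" weighting is not spelled out in the abstract; as background, a plain down-track median filter of the kind commonly used to suppress along-track clutter in GPR B-scans can be sketched as follows (the window size is an assumption):

        import numpy as np
        from scipy.ndimage import median_filter

        def downtrack_median_suppress(bscan, window=31):
            """bscan: 2-D array, axis 0 = depth sample, axis 1 = down-track trace.
            Subtracting a running down-track median removes flat, slowly varying
            returns (ground bounce, antenna ring-down) while keeping localized
            hyperbolic target responses."""
            background = median_filter(bscan, size=(1, window), mode='nearest')
            return bscan - background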

  12. The Effectiveness of a Body Image Group Counselling Program on Adolescent Girls in High School

    Directory of Open Access Journals (Sweden)

    Çiğdem Keven-Akliman

    2017-04-01

    The aim of this study was to investigate the effect of a body image group counselling program on adolescent girls, in the context of Positive Psychology and Positive Psychotherapy. The sample consisted of 22 female students, 11 in the experimental group and 11 in the control group, all of whom were in the 9th grade and had negative body images. The research was based on a pre-test, post-test and follow-up design with control and experimental groups. A Body Image Scale, the Body Image Coping Strategies Inventory and a Program Evaluation Form were used as data collection tools. The positive body image group counselling program was conducted with the experimental group for ten sessions. The Wilcoxon signed-rank test, the Mann-Whitney U test and repeated-measures analysis of variance (ANOVA) were used for data analysis. According to the results, the program's effect was statistically significant. Additionally, the results revealed statistically significant increases in positive body image and coping strategies in the intervention group when compared with the control group. The program may be used to increase positive body image.

  13. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    Science.gov (United States)

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain-knowledge-based method was implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied on 45 CTC datasets. The key finding was that the smaller polyps were accurately measured: in addition to the 6-9 mm range, polyps of even smaller size could be measured with this processing. It takes [Formula: see text] min for measuring the smaller polyp in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared to the ground truth at [Formula: see text].

  14. Mixed-signal early vision chip with embedded image and programming memories and digital I/O

    Science.gov (United States)

    Linan-Cembrano, Gustavo; Rodriguez-Vazquez, Angel; Dominguez-Castro, Rafael; Espejo, Servando

    2003-04-01

    From a system-level perspective, this paper presents a 128x128 flexible and reconfigurable Focal-Plane Analog Programmable Array Processor, which has been designed as a single chip in a 0.35μm standard digital 1P-5M CMOS technology. The core processing array has been designed to achieve a high speed of operation and large-enough accuracy (~7 bit) with low power consumption. The chip includes on-chip program memory to allow for the execution of complex, sequential and/or bifurcation-flow image processing algorithms. It also includes the structures and circuits needed to guarantee its embedding into conventional digital hosting systems: external data interchange and control are completely digital. The chip contains close to four million transistors, 90% of them working in analog mode. The chip delivers up to 330 GOPS (giga-operations per second) and uses power (180 GOPS/Joule) and silicon area (3.8 GOPS/mm2) efficiently, as it is able to maintain VGA processing throughput of 100 frames/s with about 15 basic image processing tasks on each frame.

  15. Determination of Hydrodynamic Parameters of Two-Phase Gas-Liquid Flow in Pipes with Different Inclination Angles Using an Image Processing Algorithm

    Science.gov (United States)

    Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda

    2009-11-01

    In the present research, a digital image processing-based automated algorithm was developed to determine the phase heights, the holdup, and the statistical distribution of drop size in a two-phase water-air system, using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment that consists of a system of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in MATLAB/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows the phase boundary to be found as the edge between the water and the air, through a Sobel filter that extracts the high-frequency components of the image. The drop size was found using the calculation of the Feret diameter. Three flow patterns were observed: annular, stratified (ST), and stratified with mixing at the interface (ST&MI).
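
    A minimal sketch of the interface-detection step as the abstract describes it (my illustration, not the authors' MATLAB code): a Sobel filter along the vertical axis highlights the horizontal water-air edge, and the strongest response per column gives the interface row in pixels.

        import numpy as np
        from scipy import ndimage

        def interface_rows(gray):
            """gray: 2-D grayscale frame, rows = vertical position in the pipe.
            Returns, per column, the row of the strongest vertical gradient,
            taken as the water-air interface."""
            grad = ndimage.sobel(gray.astype(float), axis=0)   # responds to horizontal edges
            return np.abs(grad).argmax(axis=0)                 # one row index per column

    With a pixel-to-centimetre calibration, the per-column rows yield the phase heights and, aggregated over frames, the holdup.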

  16. Assessment of diabetic teleretinal imaging program at the Portland Department of Veterans Affairs Medical Center.

    Science.gov (United States)

    Tsan, Grace L; Hoban, Keely L; Jun, Weon; Riedel, Kevin J; Pedersen, Amy L; Hayes, John

    2015-01-01

    We conducted a retrospective chart review of 200 diabetic patients who had teleretinal imaging performed between January 1, 2010, and January 1, 2011, at Portland Department of Veterans Affairs (VA) Medical Center outpatient clinics to assess the effectiveness of the diabetic teleretinal imaging program. Twenty patients (10%) had diabetic retinopathy. Ninety percent of the available teleretinal imaging studies were of adequate quality for interpretation. In accordance with local VA policy at that time, all teleretinal imaging patients should have been referred for a dilated retinal examination the following year. Image readers referred 97.5% of the patients to eye clinics for subsequent eye examinations, but the imagers scheduled appointments for only 80% of these patients. The redundancy rate, i.e., patients who had an eye examination within the past 6 mo, was 11%; the duplicate recall rate, i.e., patients who had a second teleretinal imaging performed within 1 yr of the eye examination, was 37%. Rates of timely diabetic eye examinations at clinics with teleretinal imaging programs, particularly when teleretinal imaging and eye clinics were colocated at the same community-based outpatient clinic, were higher than those without a teleretinal imaging program. We concluded that the Portland VA Medical Center's teleretinal imaging program was successful in increasing the screening rate for diabetic retinopathy.

  17. Enhancement of dental x-ray images by two channel image processing

    International Nuclear Information System (INIS)

    Mitra, S.; Yu, T.H.

    1991-01-01

    In this paper, the authors develop a new algorithm for the enhancement of low-contrast details of dental X-ray images using a two-channel structure. The algorithm first decomposes an input image in the frequency domain into two parts by filtering: one containing the low-frequency components and the other containing the high-frequency components. These parts are then enhanced separately using a transform-magnitude modifier. Finally, a contrast-enhanced image is formed by combining the two processed parts. The performance of the proposed algorithm is illustrated through the enhancement of dental X-ray images. The algorithm can be easily implemented on a personal computer.
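
    A minimal sketch of the two-channel idea, with a Gaussian split and fixed gains standing in for the paper's filters and transform-magnitude modifier (sigma and gains are assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def two_channel_enhance(img, sigma=5.0, low_gain=0.8, high_gain=2.5):
            """Split into low/high frequency parts, weight each, recombine."""
            img = img.astype(float)
            low = gaussian_filter(img, sigma)    # low-frequency channel
            high = img - low                     # high-frequency (detail) channel
            return np.clip(low_gain * low + high_gain * high, 0, 255)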

  18. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
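
    As an illustration of the kind of subband decomposition SUBTRANS performs, here is a one-level 2x2 Haar split in Python (a stand-in; SUBTRANS itself is a MATLAB package and supports other block transforms):

        import numpy as np

        def haar_subbands(img):
            """Split an even-sized grayscale image into LL, LH, HL, HH subbands."""
            a = img[0::2, 0::2].astype(float)
            b = img[0::2, 1::2].astype(float)
            c = img[1::2, 0::2].astype(float)
            d = img[1::2, 1::2].astype(float)
            ll = (a + b + c + d) / 4.0   # low-pass in both directions
            lh = (a - b + c - d) / 4.0   # horizontal detail
            hl = (a + b - c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            return ll, lh, hl, hh

    Cascading the function on the ll output yields the further decomposition into more subbands mentioned above.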

  19. Process Evaluation of a Workers' Health Surveillance Program for Meat Processing Workers.

    Science.gov (United States)

    van Holland, Berry J; Brouwer, Sandra; de Boer, Michiel R; Reneman, Michiel F; Soer, Remko

    2017-09-01

    Objective: To evaluate the implementation process of a workers' health surveillance (WHS) program in a Dutch meat processing company. Methods: Workers from five plants were eligible to participate in the WHS program. The program consisted of four evaluative components and an intervention component. Qualitative and quantitative methods were used to evaluate seven process aspects. Data were gathered through interviews with stakeholders, participant questionnaires, and the registries of the company and the occupational health service. Results: Two recruitment strategies were used: open invitation or automatic participation. Of the 986 eligible workers, 305 participated in the program. Average reach was 53%. Two of the five program components could not be assessed on dose delivered, dose received and fidelity. Where components were assessable, 85-100% of the components were delivered, 66-100% of the components were received by participants, and fidelity was 100%. Participants were satisfied with the WHS program (mean score 7.6). Contextual factors that facilitated implementation included societal developments and management support. Factors that formed barriers were program novelty and delayed follow-up. Conclusion: The WHS program was well received by participants. Not all participants were offered the same number of program components, and not all components were performed according to protocol. Deviation from protocol is an indication of program failure and may affect program effectiveness.

  20. Image processing tensor transform and discrete tomography with Matlab

    CERN Document Server

    Grigoryan, Artyom M

    2012-01-01

    Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere.

  1. Digital processing of SEM images for the assessment of evaluation indexes of cleaning interventions on Pentelic marble surfaces

    International Nuclear Information System (INIS)

    Moropoulou, A.; Delegou, E.T.; Vlahakis, V.; Karaviti, E.

    2007-01-01

    In this work, digital processing of scanning electron microscopy images was utilized to assess cleaning interventions applied on the Pentelic marble surfaces of the National Archaeological Museum and the National Library in Athens, Greece. Besides the mineralogical and chemical characterization that took place by scanning electron microscopy with energy-dispersive X-ray spectroscopy, the image-analysis program EDGE was applied to estimate evaluation indexes of the marble micro-structure. The EDGE program was developed by the U.S. Geological Survey for the evaluation of cleaning interventions applied on Philadelphia City Hall. This computer program analyzes scanning electron microscopy images of stone specimens cut in cross-section, measuring the fractal dimension of the exposed surfaces, the stone's near-surface fracture density, the shape factor (a surface roughness factor) and the friability index, which represents the physico-chemical and physico-mechanical stability of the stone surface. The results indicated that evaluation of the marble surface micro-structure before and after cleaning is achieved by the suggested indexes, and that the performance of cleaning interventions on the marble surfaces can be assessed.

  2. New real-time image processing system for IRFPA

    Institute of Scientific and Technical Information of China (English)

    WANG Bing-jian; LIU Shang-qian; CHENG Yu-bao

    2006-01-01

    Influenced by detector material, manufacturing technology, etc., every detector in an infrared focal plane array (IRFPA) will output a different voltage even if the input radiation flux is the same; this is called the non-uniformity of the IRFPA. At the same time, the high background temperature, the low temperature difference between targets and background, and the low responsivity of the IRFPA result in low contrast of infrared images. Non-uniformity correction and image enhancement are therefore important techniques for IRFPA imaging systems. This paper proposes a new real-time infrared image processing system based on a Field Programmable Gate Array (FPGA). The system implements non-uniformity correction, image enhancement, video synthesis, etc. By using a parallel architecture and pipeline techniques, the system processing speed is as high as 50 M x 12 bits per second. It is well suited to large, high-frame-rate IRFPA imaging systems. The whole system is miniaturized into a single FPGA.
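
    For context, the classic two-point non-uniformity correction that such systems implement per pixel can be sketched as follows (the calibration temperatures and reference frames are assumptions):

        import numpy as np

        def two_point_nuc(frame, cold, hot, t_cold=0.0, t_hot=1.0):
            """Per-pixel gain/offset correction computed from two uniform
            calibration frames (cold and hot blackbody references).
            Assumes hot != cold for every pixel."""
            gain = (t_hot - t_cold) / (hot - cold)   # per-pixel gain map
            offset = t_cold - gain * cold            # per-pixel offset map
            return gain * frame + offset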

  3. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Digital image processing technology is one of the new methods for yarn detection; it can realize digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image processing-based method is more objective, fast and accurate, and it represents a key development trend in yarn appearance evaluation.

  4. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

    Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture holds up to the end of the processing chain, all intermediate steps such as data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution; this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata are converted into an ISO-conformant format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner and can be adapted to a magnitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server; the image analyst has no data and no software on his local computer. This workflow has proven to be fast, stable and accurate, and it is designed to support time-critical applications for security demands.

  7. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed for accelerated pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), for real-time detection of space debris trails in astronomical images (space applications), and for brain emulation in image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.

  8. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  9. The Study of Image Processing Method for AIDS PA Test

    International Nuclear Information System (INIS)

    Zhang, H J; Wang, Q G

    2006-01-01

    At present, the main AIDS testing technique in China is the particle agglutination (PA) test. Because judgment of the PA test image still depends on the operator, the error ratio is high. To resolve this problem, we present a new image processing technique: many samples are first processed to obtain data, including the coordinates of the centers and the ranges of the image classes; the image can then be segmented using these data; finally, the result is exported after the data are judged. This technique is simple and accurate, and it also turns out to be suitable for the processing and analysis of the PA test images of other infectious diseases.

  10. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2014-11-01

    In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project where UAIC ("Alexandru Ioan Cuza" University of Iasi) is one of the partners, together with the Technical University of Vienna, Austria, the CEA-LIST Institute of Paris, France, and Bilkent University of Ankara, Turkey. Our discussion in this work will focus mainly on components that are part of the image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (English, French, German and Romanian), and UAIC is responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is search results diversification. In order to solve this problem, we first process the user queries in both languages and, second, we create clusters of similar images.

  11. Development of an image processing system at the Technology Applications Center, UNM: Landsat image processing in mineral exploration and related activities. Final report

    International Nuclear Information System (INIS)

    Budge, T.K.

    1980-09-01

    This project was a demonstration of the capabilities of Landsat satellite image processing applied to the monitoring of mining activity in New Mexico. Study areas included the Navajo coal surface mine, the Jackpile uranium surface mine, and the potash mining district near Carlsbad, New Mexico. Computer classifications of a number of land use categories in these mines were presented and discussed. A literature review of a number of case studies concerning the use of Landsat image processing in mineral exploration and related activities was prepared. Included in this review is a discussion of the Landsat satellite system and the basics of computer image processing. Topics such as destriping, contrast stretches, atmospheric corrections, ratioing, and classification techniques are addressed. Summaries of the STANSORT II and ELAS software packages and the Technology Application Center's Digital Image Processing System (TDIPS) are presented

  13. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  14. Cellular Neural Network for Real Time Image Processing

    International Nuclear Information System (INIS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-01-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have played a key role as image processing instruments. Thanks to their structure, they are able to process individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

  15. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...

  16. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14-inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format), viewed on a 625-line monitor, were processed in 3 different ways: (1) standard display; (2) digital edge enhancement of the standard display; (3) inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities of the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases of mediastinal disease, and 17 of pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases respectively. The sensitivity of conventional radiography, averaged over all findings, was better than that of the digital techniques (P<0.001). Differences in diagnostic accuracy, measured by sensitivity and specificity, between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that, when using a 512x512 image format, the routine use of digital edge enhancement and tone reversal in digital chest radiography is not warranted. (author). 12 refs.; 4 figs.; 2 tabs

  17. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing thirst for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques which could be used at the transmitter end in order to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and complexity of implementation. Motivated by prior advancements in image processing techniques, the researchers compare various contemporary image pre-processing frameworks (compressed sensing, singular value decomposition, and the integer wavelet transform) on performance. The paper exposes the potential of the integer wavelet transform to be an efficient pre-processing scheme.
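
    As an example of one of the compared frameworks, a truncated SVD used as a transmitter-side pre-processing/compression step can be sketched as follows (the rank is an arbitrary choice for illustration):

        import numpy as np

        def svd_approx(img, rank=32):
            """Rank-r approximation of a grayscale image via truncated SVD.
            Only rank * (rows + cols + 1) values need to be transmitted."""
            u, s, vt = np.linalg.svd(img.astype(float), full_matrices=False)
            return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]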

  18. Lean Six Sigma applied to a process innovation in a Mexican health institute's imaging department.

    Science.gov (United States)

    Garcia-Porres, J; Ortiz-Posadas, M R; Pimentel-Aguilar, A B

    2008-01-01

    Delivery of services to a patient has to be given with an acceptable measure of quality, which can be monitored through the patient's satisfaction. The objective of this work was to innovate processes by eliminating waste and non-value-added work in the processes performed at the Imaging Department of the National Institute of Respiratory Diseases (INER for its Spanish acronym) in Mexico City, in order to decrease the time a patient spends in a study and increase satisfaction. This innovation was carried out using Lean Six Sigma tools and applied in a pilot program.

  19. Image processing can cause some malignant soft-tissue lesions to be missed in digital mammography images.

    Science.gov (United States)

    Warren, L M; Halling-Brown, M D; Looney, P T; Dance, D R; Wallis, M G; Given-Wilson, R M; Wilkinson, L; McAvinchey, R; Young, K C

    2017-09-01

    To investigate the effect of image processing on cancer detection in mammography, an observer study was performed using 349 digital mammography images of women with normal breasts, calcification clusters, or soft-tissue lesions, including 191 subtle cancers. Images underwent two types of processing: FlavourA (standard) and FlavourB (added enhancement). Six observers located features in the breast they suspected to be cancerous (4,188 observations). Data were analysed using jackknife alternative free-response receiver operating characteristic (JAFROC) analysis. The characteristics of the cancers detected with each image processing type were investigated. For calcifications, the JAFROC figure of merit (FOM) was equal to 0.86 for both types of image processing. For soft-tissue lesions, the JAFROC FOM was better for FlavourA (0.81) than FlavourB (0.78); this difference was significant (p=0.001). Using FlavourA, a greater number of cancers of all grades and sizes were detected than with FlavourB. FlavourA improved soft-tissue lesion detection in denser breasts (p=0.04 when volumetric density was over 7.5%). In conclusion, the detection of malignant soft-tissue lesions (which were primarily invasive) was significantly better with FlavourA than FlavourB image processing, despite FlavourB having the higher-contrast appearance often preferred by radiologists. It is important that the clinical choice of image processing is based on objective measures. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  20. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    Science.gov (United States)

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

    We present an image processing approach to automatically analyze duo-channel microscopic images of muscle fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscle fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in the two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.
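
    A minimal sketch of a cytoplasm-length measurement via skeletonization, in the spirit of the described pipeline (the thresholding and cleanup choices are assumptions, not the authors' exact steps):

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.morphology import skeletonize, remove_small_objects

        def cytoplasm_length(channel):
            """channel: 2-D grayscale cytoplasm channel. Returns the skeleton
            pixel count as a length proxy for the segmented fibers."""
            mask = channel > threshold_otsu(channel)        # global threshold
            mask = remove_small_objects(mask, min_size=64)  # drop specks
            skel = skeletonize(mask)                        # 1-pixel-wide centerline
            return int(skel.sum())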

  1. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. An image inversion can also be obtained without reconfiguration of the network, whereas gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.
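
    A minimal numerical sketch of the idea, assuming a cubic (Nagumo-type) nonlinearity for the uncoupled nodes; the paper's exact oscillator dynamics are not reproduced here. Pixels below the threshold a relax toward 0 and those above toward 1, which stretches an initially weak contrast:

        import numpy as np

        def cnn_contrast(img, a=0.5, steps=300, dt=0.05):
            """img: grayscale in [0, 1]. Each pixel evolves independently
            under x' = x (1 - x)(x - a), a bistable flow with wells at 0
            and 1 and an unstable threshold at a."""
            x = img.astype(float).copy()
            for _ in range(steps):
                x += dt * x * (1.0 - x) * (x - a)
            return x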

  2. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section on the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given.

  3. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques

    International Nuclear Information System (INIS)

    Karimian, A.; Yazdani, S.; Askari, M. A.

    2011-01-01

    Radiographic inspection is one of the most widely employed medical testing methods. Because of the poor contrast and high unsharpness of radiographic image quality on film, converting radiographs to a digital format and applying further digital image processing is the best method of enhancing the image quality and assisting the interpreter in the evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. To digitize the chest images and process them, two classes of algorithms were used: (i) spatial-domain and (ii) frequency-domain techniques. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions are detectable. These suggested techniques may therefore help medical specialists to diagnose defects at an early stage and help to avoid repeat X-ray examinations of paediatric patients. (authors)

  4. The development of application technology for image processing in nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woog Ki; Sohn, Surg Won; Kim, Seung Ho; Hwang, Suk Yeoung; Kim, Byung Soo

    1991-01-01

    The objective of this project is to develop image processing application technology for nuclear facilities, where image signals are used to enhance the reliability and safety of operation, reduce operators' radiation exposure, and automate operating processes. We have studied such image processing application technologies for nuclear facilities as non-tactile measurement, remote and automatic inspection, remote control, and enhanced analysis of visual information. On this basis, an automation system and a real-time image processing system have been developed. Nuclear power nowadays accounts for over 50% of our country's electric power supply, so technological support for state-of-the-art technology is required in the nuclear industry and its related fields. In particular, image processing technology is indispensable for enhancing the reliability and safety of operation and automating processes in places like nuclear power plants and radioactive environments. It is important that image processing technology be linked to nuclear engineering, enhancing the reliability and safety of nuclear operation as well as decreasing the dose rate. (Author)

  5. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  6. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    Science.gov (United States)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We have applied several image processing techniques for: (1) environmental noise reduction, (2) neural cell segmentation, (3) neural cell classification based on the dendrites' growth conditions, and (4) extraction and measurement of neuron features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  7. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide 'information hiding'. Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  8. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. The book is clearly divided into seven parts.

  9. STEM image simulation with hybrid CPU/GPU programming

    International Nuclear Information System (INIS)

    Yao, Y.; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-01-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  11. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in C++ using various image processing libraries. (Author)

  12. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    č. 53 (2003), s. 44-45 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z1075907 Keywords : art conservation * digital image processing * change detection Subject RIV: JD - Computer Applications, Robotics

  13. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  14. Computational analysis of Pelton bucket tip erosion using digital image processing

    Science.gov (United States)

    Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna

    2008-03-01

    Erosion of hydro turbine components by sand-laden river water is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in different hydropower plants, undergo erosion in the continuous presence of sand particles in the water. The resulting erosion causes an increase in splitter thickness, which is theoretically supposed to be zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured, and the bucket is then run for 72 hours, with the sand concentration in the water hitting the bucket closely controlled and monitored. Afterwards, the image of the test bucket is taken under the same conditions. The process is repeated 10 times. Digital image processing is applied, encompassing image enhancement in both the spatial and frequency domains, together with processes that extract attributes from images, up to and including measurement of the splitter's tip. Processing of the images was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be accurately detected and quantified, and that the erosion profile can be generated, using image processing techniques.
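
    A minimal sketch of one way to quantify splitter-tip thickness from a binarized bucket image (the thresholding and pixel-to-millimetre calibration are assumed steps, not taken from the paper):

        import numpy as np

        def tip_thickness_mm(binary, mm_per_px):
            """binary: 2-D boolean mask, True where the splitter tip appears.
            Returns the mean tip width in mm over the rows that contain it."""
            widths = binary.sum(axis=1)      # tip width in pixels, per row
            widths = widths[widths > 0]      # ignore rows without the tip
            return float(widths.mean()) * mm_per_px

    Comparing this measurement before and after each 72-hour run gives the growth of the splitter thickness, i.e., the erosion profile over time.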

  15. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  16. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub [UNETWARE, Seoul (Korea, Republic of)]; Kim, Won Tae [Kongju National University, Gongju (Korea, Republic of)]

    2010-04-15

    In this paper, an image processing study using CCD images and thermography images was performed, in order to handle thermographic data easily and without risk to the personnel who conduct condition monitoring for the abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To enable broad monitoring, a methodology producing a single image using the panorama technique, no matter how many cameras are employed, was developed, including a fusion method for discrete target configurations. As a result, image fusion with quick real-time processing was obtained, and it was possible to save time in tracking monitored locations by matching the images between CCTV and thermography.

  18. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: China's potato staple food strategy clearly points out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced detection methods for raw and processed potatoes. Method: research literature in the field of image recognition-based potato quality detection (covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc.) was consulted, and the development and direction of this field were summarized. Result: in order to obtain whole-potato surface information, hardware was built that synchronizes an image sensor with a conveyor belt to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracies of more than 83%. Weight is an important indicator for potato grading, and image-based classification accuracy reaches more than 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with damage shape and damage time as the main affecting factors. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of greened potatoes have been researched; currently, scab and black heart image recognition need to be operated in a stable detection environment or with a specific device. Image recognition of processed potatoes mainly focuses on potato chips, slices, fries, etc. Conclusion: image recognition as a rapid food detection tool has been widely researched in the area of raw and processed potato quality analysis; its techniques and equipment have the potential for commercialization in the short term, to meet the demands of the strategy of developing the potato as a staple food.

  19. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms, and methods as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/enviromental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  20. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.

  1. Should body image programs be inclusive? A focus group study of college students.

    Science.gov (United States)

    Ciao, Anna C; Ohls, Olivia C; Pringle, Kevin D

    2018-01-01

    Most evidence-based body image programs for college students (e.g., the Body Project) are designed for female-only audiences, although body dissatisfaction is not limited to female-identified individuals. Furthermore, programs do not explicitly discuss diversity, although individuals with marginalized gender, racial, and sexual identities may be particularly vulnerable to body image disturbances. Making programs more inclusive may increase their disseminability. This qualitative study examined the feasibility of adapting the Body Project for universal and inclusive use with college students. Participants (N = 36; M age = 21.66 years; 73% female-identified; 20% sexual minority; 23% racial minority) attended one of five semi-structured focus groups to explore the inclusivity of appearance-based cultural norms using adapted Body Project activities and discuss the feasibility of universal and inclusive interventions. Inductive qualitative content analysis with three-rater consensus identified focus group themes. There was consensus that inclusive interventions could have a positive impact (broadening perspectives, normalizing body image concerns, increasing awareness) despite potential barriers (poor diversity representation, vulnerability). There was strong consensus regarding advice for facilitating inclusive interventions (e.g., skilled facilitation, education, increasing diversity). Results suggest that inclusive body image programs are desirable and provide a framework for creating the EVERYbody Project, a program for more universal audiences. © 2017 Wiley Periodicals, Inc.

  2. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging over seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  3. Image processing using pulse-coupled neural networks applications in Python

    CERN Document Server

    Lindblad, Thomas

    2013-01-01

    Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information from and manipulating images. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing, with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that assist readers in replicating results presented in the text and in further developing their own applications.

  4. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Science.gov (United States)

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.

  5. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    Science.gov (United States)

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated against the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiment results are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relative
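
    The summary-statistic model is implemented with MapReduce; the decomposition itself can be sketched in plain Python (an illustration of the map/reduce pattern, not the HadoopBase-MIP API): each mapper emits the count, sum and sum of squares of one image's voxel intensities, and the reducer combines the partial results, from which mean and standard deviation follow.

        # illustrative map/reduce sketch; array sizes are stand-ins for T1 volumes
        from functools import reduce
        import numpy as np

        def mapper(image):
            """Emit partial statistics (count, sum, sum of squares) for one image."""
            v = image.ravel().astype(np.float64)
            return v.size, v.sum(), (v * v).sum()

        def reducer(a, b):
            """Combine two partial results."""
            return a[0] + b[0], a[1] + b[1], a[2] + b[2]

        images = [np.random.rand(64, 64, 64) for _ in range(5)]  # hypothetical dataset
        n, s, ss = reduce(reducer, map(mapper, images))
        mean = s / n
        std = np.sqrt(ss / n - mean ** 2)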

  6. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the correct matching probability is raised greatly, and the effect is especially remarkable for low-S/N image pairs.

  7. Image processing techniques for thermal, x-rays and nuclear radiations

    International Nuclear Information System (INIS)

    Chadda, V.K.

    1998-01-01

    The paper describes image acquisition techniques for the non-visible range of the electromagnetic spectrum, especially thermal, x-ray and nuclear radiations. Thermal imaging systems are valuable tools used for applications ranging from PCB inspection, hot spot studies, fire identification and satellite imaging to defense applications. Penetrating radiations like x-rays and gamma rays are used in NDT, baggage inspection, CAT scans, cardiology, radiography, nuclear medicine etc. Neutron radiography complements conventional x-ray and gamma radiography. For these applications, image processing and computed tomography are employed for 2-D and 3-D image interpretation respectively. The paper also covers the main features of image processing systems for quantitative evaluation of gray level and binary images. (author)

  8. A quality assurance program for the On-Board Imager®

    International Nuclear Information System (INIS)

    Yoo, Sua; Kim, Gwe-Ya; Hammoud, Rabih

    2006-01-01

    To develop a quality assurance (QA) program for the On-Board Imager (OBI) system and to summarize the results of these QA tests over extended periods from multiple institutions. Both the radiographic and cone-beam computed tomography (CBCT) modes of operation were evaluated. The QA programs of four institutions were combined to generate a series of tests for evaluating the performance of the On-Board Imager. The combined QA program consists of three parts: (1) safety and functionality, (2) geometry, and (3) image quality. Safety and functionality tests evaluate the functionality of safety features and the clinical operation of the entire system during the tube warm-up. Geometry QA verifies the geometric accuracy and stability of the OBI/CBCT hardware/software. Image quality QA monitors spatial resolution and contrast sensitivity of the radiographic images. Image quality QA for CBCT additionally includes tests for Hounsfield unit (HU) linearity, HU uniformity, spatial linearity, and scan slice geometry. All safety and functionality tests passed on a daily basis. The average accuracy of the OBI isocenter was better than 1.5 mm, with a range of variation of less than 1 mm over 8 months. The average accuracy of arm positions in the mechanical geometry QA was better than 1 mm, with a range of variation of less than 1 mm over 8 months. Measurements of the other geometry QA tests showed stable results within tolerance throughout the test periods. Radiographic contrast sensitivity ranged between 2.2% and 3.2% and spatial resolution ranged between 1.25 and 1.6 lp/mm. Over four months the CBCT images showed stable spatial linearity, scan slice geometry, and contrast resolution (1%; 6 lp/cm). The HU linearity was within ±40 HU for all measurements. By combining test methods from multiple institutions, we have developed a comprehensive, yet practical, set of QA tests for the OBI system. Use of the tests over extended periods shows that the OBI system has reliable mechanical

  9. Digital-image processing improves man-machine communication at a nuclear reactor

    International Nuclear Information System (INIS)

    Cook, S.A.; Harrington, T.P.; Toffer, H.

    1982-01-01

    The application of digital image processing to improve man-machine communication in a nuclear reactor control room is illustrated. At the Hanford N Reactor, operated by UNC Nuclear Industries for the United States Department of Energy, in Richland, Washington, digital image processing is applied to flow, temperature, and tube power data. Color displays are used to present the data in a clear and concise fashion. Specific examples are used to demonstrate the capabilities and benefits of digital image processing of reactor data. N Reactor flow and power maps for routine reactor operations and for perturbed reactor conditions are displayed. The advantages of difference mapping are demonstrated. Image processing techniques have also been applied to results of analytical reactor models; two examples are shown. The potential of combining experimental and analytical information with digital image processing to produce predictive and adaptive reactor core models is discussed. The applications demonstrate that digital image processing can provide new more effective ways for control room personnel to assess reactor status, to locate problems and explore corrective actions. 10 figures

  10. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024 × 1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...
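
    The quoted aggregate rate can be checked directly from the camera parameters (taking 1 Mbit = 2^20 bit, which the rounded figure suggests):

        6 \times 1024^2 \,\text{px} \times 12 \,\text{bit/px} \times 15 \,\text{fps}
          = 1\,132\,462\,080 \,\text{bit/s} \approx 1080 \,\text{Mbit/s},

    which is roughly 22,000 times the projected 50 kbit/s downlink, hence the need for on-board selection and compression.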

  11. Process-aware web programming with Jolie

    DEFF Research Database (Denmark)

    Montesi, F.

    2016-01-01

    We extend the Jolie programming language to capture the native modelling of process-aware web information systems, i.e., web information systems based upon the execution of business processes. Our main contribution is to offer a unifying approach for the programming of distributed architectures...... on the web, which can capture web servers, stateful process execution, and the composition of services via mediation. We discuss applications of this approach through a series of examples that cover, e.g., static content serving, multiparty sessions, and the evolution of web systems. Finally, we present...... a performance evaluation that includes a comparison of Jolie-based web systems to other frameworks and a measurement of its scalability. © 2016 Elsevier B.V....

  12. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images, with features extracted by a new shearlet transform.

  13. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained with the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)

  14. Detection and Extraction of Roads from High Resolution Satellites Images with Dynamic Programming

    Science.gov (United States)

    Benzouai, Siham; Smara, Youcef

    2010-12-01

    The advent of satellite images now allows regular and fast digitizing and updating of geographic data, especially roads, which are very useful for Geographic Information System (GIS) applications such as transportation, urban pollution, geomarketing, etc. To this end, several studies have been conducted to automate road extraction in order to minimize manual processing [4]. In this work, we are interested in road extraction from satellite imagery with high spatial resolution (at best equal to 10 m). The method is semi-automatic and follows a linear approach in which a road is considered a linear object. As road extraction is a pattern recognition problem, it is useful, above all, to characterize roads. We then perform pre-processing by applying an Infinite Size Edge Filter (ISEF), followed by a processing method based on the dynamic programming concept, in particular the F* algorithm designed by Fischler.
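
    The F* details are not given in the record; the sketch below illustrates the core dynamic-programming idea on which such road tracking rests, namely finding a minimum-cost 8-connected curve through a cost image derived from the edge-filtered scene. The cost construction and names are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def min_cost_path(cost):
            """Dynamic programming: cheapest top-to-bottom 8-connected path through a cost image."""
            h, w = cost.shape
            acc = cost.astype(np.float64).copy()
            for y in range(1, h):
                left = np.r_[np.inf, acc[y - 1, :-1]]   # left-diagonal predecessor
                right = np.r_[acc[y - 1, 1:], np.inf]   # right-diagonal predecessor
                acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
            # backtrack from the cheapest endpoint on the last row
            x = int(np.argmin(acc[-1]))
            path = [x]
            for y in range(h - 1, 0, -1):
                lo, hi = max(x - 1, 0), min(x + 2, w)
                x = lo + int(np.argmin(acc[y - 1, lo:hi]))
                path.append(x)
            return path[::-1]  # column index of the road centerline per row

        # if a road is dark against its surroundings, intensity itself can act as cost
        road_cols = min_cost_path(np.random.rand(100, 120))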

  15. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is introduced. Since the AGV guidance images contain considerable noise, the images are first smoothed by statistical sorting. Because the guidance images sampled by the AGV have different optimal threshold segmentation points, the method of two-dimensional maximum entropy image segmentation is used to solve this problem. We extract the foreground image in the target band by calculating contour areas and obtain the centre line with a least-squares fitting algorithm. With the help of image and physical coordinates, we can obtain the guidance information.
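
    As an illustration of the segmentation and line-fitting steps, the sketch below uses the one-dimensional (Kapur) maximum-entropy threshold as a simplified stand-in for the two-dimensional variant used in the paper, followed by a least-squares line fit through the foreground pixels; the synthetic image is a placeholder.

        import numpy as np

        def max_entropy_threshold(gray):
            """Kapur's maximum-entropy threshold (1-D form; the paper uses a 2-D variant)."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            c = p.cumsum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                p0, p1 = c[t], 1.0 - c[t]
                if p0 <= 0 or p1 <= 0:
                    continue
                w0, w1 = p[: t + 1] / p0, p[t + 1 :] / p1
                h = -(w0[w0 > 0] * np.log(w0[w0 > 0])).sum() \
                    - (w1[w1 > 0] * np.log(w1[w1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        gray = (np.random.rand(240, 320) * 256).astype(np.uint8)  # stand-in guidance image
        mask = gray > max_entropy_threshold(gray)

        # least-squares fit of the guide line: column = a * row + b through foreground pixels
        ys, cols = np.nonzero(mask)
        a, b = np.polyfit(ys, cols, 1)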

  16. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Energy Technology Data Exchange (ETDEWEB)

    Collette, R. [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); King, J., E-mail: kingjc@mines.edu [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); Buesch, C. [Oregon State University, 1500 SW Jefferson St., Corvallis, OR 97331 (United States); Keiser, D.D.; Williams, W.; Miller, B.D.; Schulthess, J. [Nuclear Fuels and Materials Division, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States)

    2016-07-15

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program. - Highlights: • Automated image processing is used to extract fission gas bubble data from irradiated U−Mo fuel samples. • Verification and validation tests are performed to ensure the algorithm's accuracy. • Fission bubble parameters are predictably difficult to compare across samples of varying compositions. • The 2-D results suggest the need for more homogenized fuel sampling in future studies. • The results also demonstrate the value of 3-D reconstruction techniques.

  17. Interactive data-processing system for metallurgy

    Science.gov (United States)

    Rathz, T. J.

    1978-01-01

    Evaluation of the equipment indicates that the system can rapidly and accurately process metallurgical and materials-processing data for a wide range of applications. Advantages include increased contrast between areas of an image, the ability to analyze images via operator-written programs, and space available for storing images.

  18. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  19. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.

  20. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    Science.gov (United States)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses still remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University. The course uses the textbook "Introductory

  1. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    Science.gov (United States)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  2. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  3. Digital image processing of mandibular trabeculae on radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Ogino, Toshi

    1987-06-01

    The present study aimed to reveal the texture patterns of radiographs of the mandibular trabeculae by digital image processing. Intra-oral radiographs of the right premolar regions of 32 normal subjects and 13 patients with the mandibular diseases ameloblastoma, primordial cysts, squamous cell carcinoma and odontoma were analyzed. The radiograms were digitized by a drum scanner densitometry method. The input radiographic images were processed by a histogram equalization method. The results are as follows: First, the histogram equalization method enhances the image contrast of the textures. Second, the output images of the textures for normal mandibular trabeculae radiograms are of network pattern in nature. Third, the output images for the patients are characterized by non-network patterns, replaced by patterns of fabric texture, intertwined plants (karakusa pattern), scattered small masses and amorphous texture. Thus, these results indicate that the present digital image system is expected to be useful for revealing the texture patterns of radiographs and, in the future, for texture analysis of clinical radiographs to obtain quantitative diagnostic findings.
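
    For reference, histogram equalization itself takes only a few lines; the sketch below is a generic implementation (not the authors' drum-scanner pipeline), with a synthetic array standing in for the digitized radiograph.

        import numpy as np

        def equalize(img):
            """Map gray levels through the normalized cumulative histogram."""
            hist, _ = np.histogram(img, bins=256, range=(0, 256))
            cdf = hist.cumsum().astype(np.float64)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # rescale to [0, 1]
            return (cdf[img] * 255).astype(np.uint8)

        radiograph = (np.random.rand(512, 512) * 256).astype(np.uint8)  # stand-in image
        enhanced = equalize(radiograph)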

  4. Application of digital image processing to industrial radiography

    International Nuclear Information System (INIS)

    Bodson; Varcin; Crescenzo; Theulot

    1985-01-01

    Radiography is widely used for quality control in the fabrication of large reactor components. Image processing methods are applied to industrial radiographs in order to aid decision making as well as to reduce the costs and delays of examination. Films produced under representative operating conditions are used to test results obtained with algorithms for the restoration of images and for the detection and characterisation of indications, in order to determine the feasibility of automatic radiograph processing [fr]

  5. Digital image processing for real-time neutron radiography and its applications

    International Nuclear Information System (INIS)

    Fujine, Shigenori

    1989-01-01

    The present paper describes several digital image processing approaches for real-time neutron radiography (neutron television, NTV), such as image integration, adaptive smoothing and image enhancement, which are beneficial for image improvement, and also describes how to use these techniques in applications. Details invisible in direct NTV images can be revealed by digital image processing, such as reversed images, gray level correction, gray scale transformation, contoured images, subtraction techniques, pseudo color display and so on. For real-time applications, a contouring operation and an averaging approach can also be utilized effectively. (author)
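
    The basic operations named here (integration, subtraction, reversed image) reduce to simple array arithmetic; a hedged sketch with synthetic Poisson-noise frames standing in for NTV video:

        import numpy as np

        # stand-in for a stack of noisy NTV frames
        frames = np.random.poisson(30.0, size=(64, 256, 256)).astype(np.float64)

        integrated = frames.mean(axis=0)              # image integration: noise drops as 1/sqrt(N)
        reference = frames[:32].mean(axis=0)
        subtraction = integrated - reference          # subtraction technique highlights changes
        reversed_img = integrated.max() - integrated  # reversed (negative) image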

  6. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, is emphasized most, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors. Indirect flat-panel detectors and a system with four high-resolution CCD cameras are also studied. In the second part, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized. The most important advantages are the consistently good quality of the pictures and the possibilities of image processing.

  7. Color Processing using Max-trees : A Comparison on Image Compression

    NARCIS (Netherlands)

    Tushabe, Florence; Wilkinson, M.H.F.

    2012-01-01

    This paper proposes a new method of processing color images using mathematical morphology techniques. It adapts the Max-tree image representation to accommodate color and other vectorial images. The proposed method introduces three new ways of transforming the color image into a gray scale image

  8. Mathematical problems in image processing

    International Nuclear Information System (INIS)

    Chidume, C.E.

    2000-01-01

    This is the second volume of a new series of lecture notes of the Abdus Salam International Centre for Theoretical Physics. This volume contains the lecture notes given by A. Chambolle during the School on Mathematical Problems in Image Processing. The school consisted of two weeks of lecture courses and one week of conference

  9. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non-destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal and image processing techniques, such as adaptive filtering, detection, or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  10. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proved to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness value was found as that resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
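
    Both pre-processing operations can be sketched directly from their descriptions: local mean/variance filters stand in for the normalization, and the binarization threshold is chosen, as in the text, as the one producing the maximum number of connected objects. Array sizes and the filter width are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        def normalize(img, size=63):
            """Remove slow brightness trends: subtract the local mean, divide by the local spread."""
            img = img.astype(np.float64)
            mean = ndimage.uniform_filter(img, size)
            var = ndimage.uniform_filter(img ** 2, size) - mean ** 2
            return (img - mean) / np.sqrt(np.maximum(var, 1e-9))

        def binarize(img):
            """Pick the threshold that yields the maximum number of connected objects."""
            thresholds = np.linspace(img.min(), img.max(), 64)
            counts = [ndimage.label(img > t)[1] for t in thresholds]
            return img > thresholds[int(np.argmax(counts))]

        fibres = binarize(normalize(np.random.rand(256, 256)))  # stand-in for a SEM crack image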

  11. Functional imaging of the pancreas. Image processing techniques and clinical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Fumiko

    1984-02-01

    An image processing technique for functional imaging of the pancreas was developed and is reported here. In this paper, the clinical efficacy of the technique for detecting pancreatic abnormality is evaluated in comparison with conventional pancreatic scintigraphy and CT. For quantitative evaluation, the functional rate, i.e. the rate of normally functioning pancreatic area, was calculated from the functional image and subtraction image. Two hundred and ninety-five cases were studied using this technique. Conventional imaging had a sensitivity of 65% and a specificity of 78%, while the use of functional imaging improved sensitivity to 88% and specificity to 88%. The mean functional rate in patients with pancreatic disease was significantly lower (33.3 ± 24.5 in patients with chronic pancreatitis, 28.1 ± 26.9 in patients with acute pancreatitis, 43.4 ± 22.3 in patients with diabetes mellitus, 20.4 ± 23.4 in patients with pancreatic cancer) than in cases without pancreatic disease (86.4 ± 14.2). It is suggested that the functional image of the pancreas, reflecting pancreatic exocrine function, and the functional rate are useful indicators of pancreatic exocrine function.
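
    The functional rate reduces to the fraction of the pancreatic area classified as functioning; a minimal sketch with hypothetical masks (the actual classification in the paper comes from the functional and subtraction images):

        import numpy as np

        # hypothetical inputs: a pancreas outline and a map of pixels judged "functioning"
        pancreas_mask = np.zeros((128, 128), bool)
        pancreas_mask[40:90, 30:100] = True
        functional = np.random.rand(128, 128) > 0.3

        functional_rate = 100.0 * (functional & pancreas_mask).sum() / pancreas_mask.sum()
        print(f"functional rate: {functional_rate:.1f}%")  # compare with ~86% in normals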

  12. MIA - a free and open source software for gray scale medical image analysis

    OpenAIRE

    Wöllny, Gert; Kellman, Peter; Ledesma Carbayo, María Jesús; Skinner, Matthew M.; Hublin, Jean-Jaques; Hierl, Thomas

    2013-01-01

    Background Gray scale images make the bulk of data in bio-medical image analysis, and hence, the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exists that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also a...

  13. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms

  14. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  15. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  16. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  17. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  18. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals; intrinsic performance of the methods; image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, numerized atlas); methodology of cerebral image superposition (normalization, retiming); image networks [fr]

  19. Some applications of nonlinear diffusion to processing of dynamic evolution images

    International Nuclear Information System (INIS)

    Goltsov, Alexey N.; Nikishov, Sergey A.

    1997-01-01

    A model nonlinear diffusion equation with the simplest Landau-Ginzburg free energy functional was applied to locate boundaries between meaningful regions of low-level images. The method is oriented to processing images of objects that are the result of dynamic evolution: images of different organs and tissues obtained by radiography and NMR methods, electron microscope images of morphogenesis fields, etc. In the method developed by us, the parameters of the nonlinear diffusion model are chosen on the basis of a preliminary treatment of the images. The parameters of the Landau-Ginzburg free energy functional are extracted from the structure factor of the images. Owing to this choice of model parameters, the image to be processed is located in the vicinity of the steady state of the diffusion equation. The suggested method allows one to separate distinct structures having specific spatial characteristics from the whole image. The method was applied to processing X-ray images of the lung
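
    With the simplest (double-well) Landau-Ginzburg functional, the model diffusion equation is the gradient flow u_t = Δu − (u³ − u)/ε², which drives pixels toward the two wells u = ±1 and leaves region boundaries as the interface between them. A minimal sketch follows; the step size, ε and the boundary criterion are illustrative assumptions, not the authors' structure-factor-based parameter choice.

        import numpy as np
        from scipy import ndimage

        def landau_ginzburg_flow(u, steps=200, dt=0.1, eps=1.0):
            """Gradient flow of E[u] = integral of |grad u|^2/2 + (u^2 - 1)^2/(4 eps^2):
            an Allen-Cahn-type evolution that sharpens the image toward u = +/-1."""
            for _ in range(steps):
                u = u + dt * (ndimage.laplace(u) - (u ** 3 - u) / eps ** 2)
            return u

        img = np.random.rand(128, 128) * 2 - 1   # stand-in image rescaled to [-1, 1]
        phases = landau_ginzburg_flow(img)
        boundary = np.abs(phases) < 0.5          # pixels near the interface between regions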

  20. Improved cancer diagnostics by different image processing techniques on OCT images

    Science.gov (United States)

    Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael

    2015-07-01

    Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality which can be used for cancer diagnosis and therapeutic assessment. However, speckle noise makes the detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve image analysis for precise detection of cancer borders, the performance of different image processing algorithms such as mean, median and hybrid median filters and the rotational kernel transformation (RKT) is investigated for this task. This is done on OCT images acquired ex vivo from human cancerous mucosa and in vitro from cultivated tumours applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy and cancerous lesions can be identified precisely. The obtained results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue. Thus, it could also reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.

  1. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  2. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which exhibit distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
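
    The tie-point step can be illustrated with standard SIFT matching plus Lowe's ratio test in OpenCV; the authors' improved SIFT variant is not reproduced here, and the file names are hypothetical.

        import cv2

        img1 = cv2.imread("uav_1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical strip images
        img2 = cv2.imread("uav_2.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test keeps distinctive tie-point candidates for bundle adjustment
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        tie_points = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                      if m.distance < 0.75 * n.distance]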

  3. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  4. Dehydration process of fish analyzed by neutron beam imaging

    International Nuclear Information System (INIS)

    Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T.M.

    2009-01-01

    Since regulation of the water content of dried fish is an important factor for fish quality, the water-losing process during drying of squid and Japanese horse mackerel was analyzed through neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-losing process in more detail, spatial images were produced. From the images, it was clearly indicated that the decrease in water content was restricted around the shoulder part. It was suggested that preventing deterioration around the shoulder part of the dried fish is an important factor in keeping the quality of the dried fish in storage.

  5. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  6. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    Science.gov (United States)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants: a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox, which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, together with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  7. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    Science.gov (United States)

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
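
    SAT data of this kind are conventionally summarized by a shifted-exponential approach to a limit (a common model in this literature, not necessarily the authors' exact fit):

        d'(t) = \lambda \left( 1 - e^{-\beta (t - \delta)} \right) \quad \text{for } t > \delta,
        \qquad d'(t) = 0 \ \text{otherwise},

    where λ is the asymptotic accuracy, β the rate at which accuracy grows (processing speed), and δ the intercept at which accuracy first departs from chance. In these terms, the reported result is a smaller β for illusory squares with no significant difference in δ.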

  8. Optimized image processing with modified preprocessing of image data sets of a transparent imaging plate by way of the lateral view of the cervical spine

    International Nuclear Information System (INIS)

    Reissberg, S.; Hoeschen, C.; Redlich, U.; Scherlach, C.; Preuss, H.; Kaestner, A.; Doehring, W.; Woischneck, D.; Schuetze, M.; Reichardt, K.; Firsching, R.

    2002-01-01

Purpose: To improve the diagnostic quality of lateral radiographs of the cervical spine by pre-processing the image data sets produced by a transparent imaging plate with both-side reading, and to evaluate any possible impact on minimizing the number of additional radiographs and supplementary investigations. Material and Methods: One hundred lateral digital radiographs of the cervical spine were processed with two different methods: processing of each data set using the system-immanent parameters and using the manual mode. The difference between the two types of processing is the level of the latitude value. Hard copies of the processed images were judged by five radiologists and three neurosurgeons. The evaluation applied the image criteria score (ICS) without conventional reference images. Results: In 99% of the lateral radiographs of the cervical spine, all vertebral bodies could be completely delineated using the manual mode, but only 76% of the images processed with the system-immanent parameters showed all vertebral bodies. Thus, the manual mode enabled the evaluation of up to two additional, more caudal vertebral bodies. The manual mode processing was significantly better concerning object size and processing artifacts. This optimized image processing, and the resultant minimization of supplementary investigations, was calculated to correspond to a theoretical dose reduction of about 50%. (orig.) [de]

  9. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  10. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    Science.gov (United States)

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.
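
The last stage of the pipeline, region-based statistics, amounts to aggregating intensities over the propagated parcellation; a minimal sketch, assuming the image and label volume are co-registered NumPy arrays (the function and its arguments are illustrative, not the pipeline's actual interface).

    import numpy as np
    from scipy import ndimage

    def region_statistics(image, parcellation):
        """Per-region mean, standard deviation and voxel count from a label image."""
        region_ids = np.unique(parcellation)
        region_ids = region_ids[region_ids != 0]       # label 0 assumed to be background
        means = ndimage.mean(image, labels=parcellation, index=region_ids)
        stds = ndimage.standard_deviation(image, labels=parcellation, index=region_ids)
        counts = ndimage.sum(np.ones_like(image), labels=parcellation, index=region_ids)
        return {int(r): {"mean": m, "std": s, "voxels": int(c)}
                for r, m, s, c in zip(region_ids, means, stds, counts)}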

  11. Rain: A New Concurrent Process-Oriented Programming Language

    OpenAIRE

    Brown, Neil C.C.

    2006-01-01

This paper details the design of a new concurrent process-oriented programming language, Rain. The language borrows heavily from occam-π and C++ to create a new language based on process-oriented programming, marrying channel-based communication, a clear division between statement and expression, and elements of functional programming. An expressive yet simple type system, coupled with templates, underpins the language. Modern features such as Unicode support and 64-bit integers are included ...

  12. Anniversary Paper: Image processing and manipulation through the pages of Medical Physics

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Ginneken, Bram van

    2008-01-01

The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.
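
Of the gray-level manipulations listed, histogram equalization is the textbook example; the sketch below is a generic 8-bit implementation, not code from the cited work.

    import numpy as np

    def histogram_equalization(image):
        """Spread an 8-bit image's gray levels so the histogram is roughly uniform."""
        hist = np.bincount(image.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0].min()
        # classic mapping: rescale the cumulative distribution to [0, 255]
        scale = max(cdf[-1] - cdf_min, 1)
        lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
        return lut[image]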

  13. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang , Aiguo; Luo , Chagen; Zhou , Chao

    2010-01-01

Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to pavement crack detection for its advantages of large information content and automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...

  14. Bubble feature extracting based on image processing of coal flotation froth

    Energy Technology Data Exchange (ETDEWEB)

Wang, F.; Wang, Y.; Lu, M.; Liu, W. [China University of Mining and Technology, Beijing (China). Dept of Chemical Engineering and Environment]

    2001-11-01

Using image processing, the contrast between the bubbles on the surface of flotation froth and the image background was enhanced, and the bubble edges were extracted. Thus a model of the relation between the statistical features of the bubbles in the image and the cleaned coal can be established. It is feasible to extract the bubbles by processing the froth image of coal flotation on the basis of analysing bubble shape. By processing 51 groups of images sampled from a laboratory column, it was found that histogram equalization of the image gray levels and median filtering can obviously improve the dynamic contrast range and the brightness of the bubbles. Finally, threshold cutting and bubble edge detection for extracting the bubbles are also discussed, in order to describe bubble features, such as size and shape, in the froth image and to distinguish the froth images of coal flotation. 6 refs., 3 figs.
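
The chain described, gray-level histogram equalization, median filtering, threshold cut and edge extraction, can be sketched as follows; a hedged Python/SciPy approximation with illustrative parameter values, assuming an 8-bit froth image.

    import numpy as np
    from scipy import ndimage

    def extract_bubble_edges(froth, threshold=0.5):
        """Equalize froth contrast, suppress noise, then extract bubble edges."""
        hist = np.bincount(froth.ravel(), minlength=256)
        cdf = hist.cumsum() / froth.size                      # equalization via the empirical CDF
        equalized = cdf[froth]                                # values in [0, 1]
        smoothed = ndimage.median_filter(equalized, size=3)   # median filtering
        binary = smoothed > threshold                         # threshold cut
        # edge = binary region minus its erosion: one-pixel-wide bubble boundaries
        return binary & ~ndimage.binary_erosion(binary)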

  15. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware implements tasks for acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is then sent to the galvo motor system that controls the laser for thorn removal.
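
The segmentation-to-coordinates step can be illustrated with connected-component labelling; a minimal sketch assuming dark areolas on lighter bark in a grayscale CCD frame (the threshold, size filter and function name are hypothetical, not taken from the paper).

    import numpy as np
    from scipy import ndimage

    def areola_coordinates(frame, threshold=80, min_area=20):
        """Return (row, col) centroids of dark blobs, i.e. candidate areolas."""
        blobs = frame < threshold                      # areolas assumed darker than the bark
        labels, n = ndimage.label(blobs)
        areas = ndimage.sum(blobs, labels, range(1, n + 1))
        keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]   # reject speckle
        centroids = ndimage.center_of_mass(blobs, labels, keep)
        return np.array(centroids)                     # table sent on to the motor/galvo system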

  16. Decorative design of ceramic tiles adapted to inkjet printing employing digital image processing

    International Nuclear Information System (INIS)

    Defez, B.; Santiago-Praderas, V.; Lluna, E.; Peris-Fajarnes, G.; Dunai, E.

    2013-01-01

The ceramic tile sector is a very competitive industry. A designer's ability to offer new designs for the decorated surface, adapted to the means of production, plays a very important role in competitiveness. In the present work, we analyze the evolution of the design process in the ceramic sector, as well as the changes undergone in parallel by the printing equipment. Afterwards, we present a new concept of ceramic design based on digital image processing. This technique allows the generation of homogeneous and non-repetitive designs for large surfaces, especially suited to inkjet printing. With the programmed algorithms we have built a prototype software tool to assist ceramic design. This tool allows continuous designs for large surfaces to be created while saving development time. (Author)

  17. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes optimized image algorithms and a parallel processing method based on this parallel DSP. Real-time performance for many image processing tasks can be achieved in this way.

  18. New method for identifying features of an image on a digital video display

    Science.gov (United States)

    Doyle, Michael D.

    1991-04-01

The MetaMap process extends the concept of direct-manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology, as well as other possible applications, are described. The MetaMap process is protected by U. S. patent #4
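
The central idea, each image element owning a unique set of color-map indices that key into text and program actions, can be illustrated with an indexed (palette) image; everything below is a hedged reconstruction for illustration, not the patented implementation.

    import numpy as np

    # indexed image: each pixel stores a color-map index, not an RGB value
    indexed = np.zeros((100, 100), dtype=np.uint8)
    indexed[20:60, 20:60] = 10            # image element A occupies index 10
    indexed[60:90, 60:90] = 11            # image element B occupies index 11

    # correlation tables keyed by color index (hypothetical annotations/actions)
    annotations = {10: "stratified squamous epithelium", 11: "basement membrane"}
    actions = {10: lambda: print("show epithelium page"),
               11: lambda: print("load membrane image")}

    def on_click(row, col):
        """Direct-manipulation lookup: pixel -> color index -> text and action."""
        idx = int(indexed[row, col])
        if idx in annotations:
            print(annotations[idx])
            actions[idx]()

    on_click(30, 30)   # prints the annotation for element A and runs its linked action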

  19. SOLTECH 92 proceedings: Solar Process Heat Program

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-01

    This document is a limited Proceedings, documenting the presentations given at the symposia conducted by the US Department of Energy's (DOE) Solar Industrial Program and Solar Thermal Electrical Program at SOLTECH92. The SOLTECH92 national solar energy conference was held in Albuquerque, New Mexico during the period February 17--20, 1992. The National Renewable Energy Laboratory manages the Solar Industrial Program; Sandia National Laboratories (Albuquerque) manages the Solar Thermal Electric Program. The symposia sessions were as follows: (1) Solar Industrial Program and Solar Thermal Electric Program Overviews, (2) Solar Process Heat Applications, (3) Solar Decontamination of Water and Soil; (4) Solar Building Technologies, (5) Solar Thermal Electric Systems, (6) PV Applications and Technologies. For each presentation given in these symposia, these Proceedings provide a one- to two-page abstract and copies of the viewgraphs and/or 35mm slides utilized by the speaker. Some speakers provided additional materials in the interest of completeness. The materials presented in this document were not subjected to a peer review process.

  20. Recent Advances in Techniques for Hyperspectral Image Processing

    Science.gov (United States)

Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony

    2009-01-01

Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  1. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  2. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by raising for discussion the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explanatory possibilities of Tversky's notion of diagrammatic space are explored as an element necessary for understanding the genesis of graphic behaviour and for defining a new construct, graphic intelligence.

  3. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    Science.gov (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
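
The width-estimation idea, binarize the crack and convert pixel width to physical width using the measured working distance, can be sketched as follows; the pinhole-camera scaling and all constants are illustrative assumptions, not the paper's calibration.

    import numpy as np

    def crack_width_mm(patch, distance_mm, focal_px, threshold=60):
        """Estimate maximum crack width in an 8-bit grayscale patch.

        patch:       image region containing a roughly vertical crack
        distance_mm: working distance reported by the ultrasonic sensor
        focal_px:    focal length in pixels (assumed known from calibration)
        """
        crack = patch < threshold                 # cracks are assumed dark
        widths_px = crack.sum(axis=1)             # dark run per row approximates local width
        width_px = widths_px.max()
        mm_per_px = distance_mm / focal_px        # pinhole-camera ground sampling distance
        return width_px * mm_per_px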

  4. Image processing of angiograms: A pilot study

    Science.gov (United States)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  5. Fast processing of microscopic images using object-based extended depth of field.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. This
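
The merging step shared by EDoF methods, keeping, per pixel, the value from the slice with the strongest local detail, can be sketched with a Laplacian focus measure; this is a generic illustration, not the OEDoF or complex-wavelet implementation.

    import numpy as np
    from scipy import ndimage

    def edof_merge(stack):
        """Merge a focal stack into one all-in-focus composite image.

        Per pixel, keep the value from the slice whose local focus measure
        (absolute Laplacian, smoothed to stabilise the choice) is highest.
        """
        stack = np.asarray(stack, dtype=float)          # shape (n_slices, h, w)
        focus = np.stack([ndimage.uniform_filter(np.abs(ndimage.laplace(s)), size=7)
                          for s in stack])
        best = focus.argmax(axis=0)                     # index of sharpest slice per pixel
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]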

  6. High Throughput Multispectral Image Processing with Applications in Food Science.

    Directory of Open Access Journals (Sweden)

    Panagiotis Tsakanikas

Full Text Available Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction and speed up quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.

  7. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction and speed up quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
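
The named segmentation approach, a Gaussian mixture model fitted to per-pixel spectra, can be sketched with scikit-learn; the component count and reshaping below are illustrative, and the paper's band-selection scheme is not reproduced.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_segment(cube, n_components=3):
        """Unsupervised segmentation of a multispectral cube of shape (h, w, bands)."""
        h, w, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)       # one sample per pixel
        gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                              random_state=0).fit(X)
        labels = gmm.predict(X)                         # mixture component per pixel
        return labels.reshape(h, w)

    # Usage: segments = gmm_segment(multispectral_image, n_components=3)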

  8. Quantitative analysis of geomorphic processes using satellite image data at different scales

    Science.gov (United States)

    Williams, R. S., Jr.

    1985-01-01

When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or through analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristics of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered, or even suspected, in the analysis of orbital images. If a geomorphic process, or the landform change it causes, is less than 200 m in the x-to-y dimension, it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration when trying to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.

  9. A program for phase identification using diffractograms obtained from Tem structure images

    Energy Technology Data Exchange (ETDEWEB)

    Galicia, R.; Herrera, R.; Rius, J. L.; Zorrilla, C.; Gomez, A., E-mail: rherrera@fisica.unam.mx [UNAM, Instituto de Fisica, Apdo. Postal 20-364, 01000 Mexico D. F. (Mexico)

    2013-05-01

In this work a computer program for the indexing of diffractograms is presented. The diffractograms are obtained by means of a discrete Fourier transform from high-resolution electron microscope images. The program requires the use of X-ray diffraction data files together with a fast Fourier transform program; for this purpose we used the Digital Micrograph software. (Author)
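
The diffractogram itself is the shifted, log-scaled power spectrum of the HRTEM image; a minimal NumPy sketch of that first step (the indexing against X-ray data files is not shown).

    import numpy as np

    def diffractogram(hrtem_image):
        """Power spectrum of a high-resolution TEM image, DC term centered.

        Peak positions in this array correspond to lattice spacings that can
        then be matched against X-ray diffraction data files for indexing.
        """
        f = np.fft.fftshift(np.fft.fft2(hrtem_image.astype(float)))
        power = np.abs(f) ** 2
        return np.log1p(power)          # log scale makes weak reflections visible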

  10. Process-Oriented Parallel Programming with an Application to Data-Intensive Computing

    OpenAIRE

    Givelberg, Edward

    2014-01-01

    We introduce process-oriented programming as a natural extension of object-oriented programming for parallel computing. It is based on the observation that every class of an object-oriented language can be instantiated as a process, accessible via a remote pointer. The introduction of process pointers requires no syntax extension, identifies processes with programming objects, and enables processes to exchange information simply by executing remote methods. Process-oriented programming is a h...

11. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH)

    Directory of Open Access Journals (Sweden)

    Johnston CT

    2006-01-01

Full Text Available Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development: an overall architectural view, a computational view, and a resource and scheduling view.

  12. Monitoring of pellet coating process with image analysis—a feasibility study

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey; Esbensen, Kim; Bogomolov, Andrey

    2010-01-01

of the process samples' appearance, besides measurable distances, that may be connected to the information of interest. In the present paper, methods of image analysis were applied to at-line monitoring of a fluid bed pellet coating process. The quantitative description of images of pellet samples, taken from... different process stages, has been obtained using two different approaches: wavelet decomposition and the angle measure technique (AMT). Both methods revealed a strong correlation between image features and process parameters. However, the AMT results turned out to be more accurate and stable. It has been shown...
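
The first of the two feature extractors, wavelet decomposition, can be sketched with PyWavelets: decompose each pellet image and use subband energies as the quantitative description. The wavelet family, level and energy feature below are illustrative choices, not the paper's exact settings.

    import numpy as np
    import pywt

    def wavelet_features(image, wavelet="db2", level=3):
        """Energy of each 2-D wavelet subband, a compact texture descriptor."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        features = [np.mean(coeffs[0] ** 2)]                 # approximation energy
        for cH, cV, cD in coeffs[1:]:                        # detail subbands per level
            features.extend([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])
        return np.array(features)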

  13. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

Fluorescence microscopy is characterized by low background noise; thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the centroid location of an image object, computed with sub-pixel accuracy, in each frame... We can measure drifts down to approximately 10 nm in size, and a drift-compensated image can therefore be reconstructed on a grid of the same size using the "Shift and Add" approach, leading to an image of identical size as the individual images. We have also reconstructed the image using a 3-fold larger...
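
The two steps described, centroid-based drift measurement and "Shift and Add" reconstruction, can be sketched as follows; a minimal NumPy version using whole-pixel shifts (the published method works at sub-pixel accuracy and on a finer grid).

    import numpy as np

    def centroid(frame):
        """Intensity-weighted centroid (row, col) of a high-SNR fluorescent object."""
        total = frame.sum()
        rows, cols = np.indices(frame.shape)
        return (rows * frame).sum() / total, (cols * frame).sum() / total

    def shift_and_add(frames):
        """Drift-compensate a stack by re-centering each frame on frame 0's centroid."""
        r0, c0 = centroid(frames[0])
        out = np.zeros_like(frames[0], dtype=float)
        for f in frames:
            r, c = centroid(f)
            # integer shift toward the reference position (sub-pixel shifts omitted)
            out += np.roll(np.roll(f, int(round(r0 - r)), axis=0),
                           int(round(c0 - c)), axis=1)
        return out / len(frames)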

  14. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  15. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  16. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
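
The local-versus-cloud decision mentioned at the end reduces to a break-even comparison; the sketch below is a generic version of such a formula with invented variable names, not the authors' actual cost/benefit formulae.

    def cloud_is_cheaper(n_jobs, t_job_hr, local_cost_hr, ec2_cost_hr, n_nodes,
                         deadline_hr=None):
        """Compare serial local execution against parallel EC2 execution.

        Local: n_jobs run one after another on a single machine.
        Cloud: n_jobs spread over n_nodes pay-per-hour instances.
        """
        local_time = n_jobs * t_job_hr
        cloud_time = (n_jobs / n_nodes) * t_job_hr
        local_cost = local_time * local_cost_hr
        cloud_cost = cloud_time * n_nodes * ec2_cost_hr     # pay for every node-hour
        if deadline_hr is not None and local_time > deadline_hr:
            return True                                     # only the cloud meets the deadline
        return cloud_cost < local_cost

    # e.g. 500 pipelines of 2 h each: 20 nodes at $0.10/h vs. a $0.05/h workstation
    print(cloud_is_cheaper(500, 2.0, 0.05, 0.10, 20, deadline_hr=72))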

  17. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

A method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercial image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were made on a trial basis, and their accuracy and problem areas were tested using various photographs that had already been judged. Depending on the state of the photographed objects and the inspection conditions, judgement accuracies from 100% down to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs compiled by the Japan Cast Steel Association. In non-destructive inspection by radiography, the number and size of the defect images in photographs are visually judged, the results are collated with the standard, and the quality is decided. Recently, image processing technology on personal computers has advanced; by utilizing this technology, the automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency and save labor. (K.I.)

  18. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques as more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  19. PROCESSING OF DIGITAL IMAGES OF INDUSTRIAL OBJECT SURFACES DURING NON-DESTRUCTIVE TESTING

    Directory of Open Access Journals (Sweden)

    A. A. Hundzin

    2016-01-01

Full Text Available The paper presents modern approaches to the processing of images obtained with industrial equipment. It describes pixel modification in small neighborhoods, uniform image processing with changes of brightness level, possibilities for combining several images, and threshold image processing. In processing a series of images of a metal structure containing micro-cracks and under strain, the difference between two such images has been determined. The difference is a contour that specifies where the images differ; analysis of this contour makes it possible to determine the initial direction of crack propagation in the metal. A binarization threshold has been determined for processing images that contain regions of medium intensity, which disappear under simple binarization and merge with the background due to the rather small drop between the edges. For this purpose, an algorithm of balanced threshold histogram clipping has been selected; it is based on the following approach: the two fractions of the histogram are "weighed", and if one of the fractions "outweighs" the other, the outermost column of that fraction is removed and the procedure is repeated. When the threshold value is rather high, contour breaks (disappearance of informative pixels) may occur, and when the threshold value is low, noise (non-informative pixels) may appear. The paper shows an implementation of an algorithm for locating contact pads in an image of a semiconductor crystal. Algorithms for the morphological processing of production prototype images have also been obtained; they permit the detection of defects on semiconductor surfaces and carry out filtering and threshold binarization based on the balanced threshold histogram clipping algorithm. The developed approaches can be used to highlight contours on the surface.
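
The balanced clipping described above translates almost line-for-line into code; a minimal sketch of the standard balanced histogram thresholding algorithm that the text appears to describe, assuming an 8-bit image.

    import numpy as np

    def balanced_histogram_threshold(image):
        """Balanced histogram thresholding: 'weigh' the two sides of the histogram,
        repeatedly remove the outermost bin from the heavier side, and return the
        meeting point as the binarization threshold."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        i_s, i_e = 0, 255                       # current start / end bins
        i_m = (i_s + i_e) // 2                  # midpoint separating the two sides
        w_l = hist[i_s:i_m + 1].sum()           # weight of the left side
        w_r = hist[i_m + 1:i_e + 1].sum()       # weight of the right side
        while i_s <= i_e:
            if w_r > w_l:                       # right side outweighs
                w_r -= hist[i_e]
                i_e -= 1
                if (i_s + i_e) // 2 < i_m:      # midpoint shifts left
                    w_r += hist[i_m]
                    w_l -= hist[i_m]
                    i_m -= 1
            else:                               # left side outweighs (or ties)
                w_l -= hist[i_s]
                i_s += 1
                if (i_s + i_e) // 2 >= i_m:     # midpoint shifts right
                    w_l += hist[i_m + 1]
                    w_r -= hist[i_m + 1]
                    i_m += 1
        return i_m

    # binary = image > balanced_histogram_threshold(image)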

  20. Anniversary Paper: Image processing and manipulation through the pages of Medical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Armato, Samuel G. III; Ginneken, Bram van [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States); Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, Room Q0S.459, 3584 CX Utrecht (Netherlands)

    2008-10-15

The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.