WorldWideScience

Sample records for image processing analysis

  1. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  2. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  3. An image processing analysis of skin textures

    CERN Document Server

    Sparavigna, A

    2008-01-01

    Colour and coarseness of skin are visually different. When image processing is involved in skin analysis, it is important to quantitatively evaluate such differences using texture features. In this paper, we discuss texture analysis and measurements based on a statistical approach to pattern recognition. Grain size and anisotropy are evaluated with proper diagrams. The possibility of determining the presence of pattern defects is also discussed.

  4. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  5. Analysis of physical processes via imaging vectors

    Science.gov (United States)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are in one way or another random. The foremost theoretical foundation is that of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model's proposition about the future does not change in the event of expansion and/or strong information progression relative to preceding time. Modeling physical fields basically involves processes changing in time, i.e. non-stationary processes. In this case, the application of the Laplace transformation introduces unjustified descriptive complications, while transition to other representations results in explicit simplification. The method of imaging vectors renders constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis makes possible a rapid transition of the mathematical model and further acceleration of the analysis. It should be noted that the mathematical description permits operator representation. Conversely, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
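
    As a worked restatement of the Markov property described in this abstract (a standard formulation, not taken from the paper itself), for a discrete-time process with states x_0, x_1, ...:

        P(X_{n+1} = x \mid X_n = x_n, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) = P(X_{n+1} = x \mid X_n = x_n)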

  6. Image Processing and Analysis for DTMRI

    Directory of Open Access Journals (Sweden)

    Kondapalli Srinivasa Vara Prasad

    2012-01-01

    This paper describes image processing techniques for Diffusion Tensor Magnetic Resonance Imaging. In Diffusion Tensor MRI, a tensor describing local water diffusion is acquired for each voxel. The geometric nature of the diffusion tensors can quantitatively characterize the local structure in tissues such as bone, muscles, and white matter of the brain. The close relationship between local image structure and apparent diffusion makes this image modality very interesting for medical image analysis. We present a decomposition of the diffusion tensor based on its symmetry properties, resulting in useful measures describing the geometry of the diffusion ellipsoid. A simple anisotropy measure follows naturally from this analysis. We describe how the geometry, or shape, of the tensor can be visualized using a coloring scheme based on the derived shape measures. We show how filtering of the tensor data of a human brain can provide a description of macrostructural diffusion which can be used for measures of fiber-tract organization. We also describe how tracking of white matter tracts can be implemented using the introduced methods. These methods offer unique tools for the in vivo demonstration of neural connectivity in healthy and diseased brain tissue.
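
    The tensor-shape decomposition sketched in this abstract can be illustrated with the widely used linear/planar/spherical measures of Westin et al.; a minimal Python sketch, assuming the standard normalization by the trace (the paper's exact measures may differ):

        import numpy as np

        def tensor_shape_measures(D):
            """Linear, planar and spherical shape measures of a 3x3 diffusion tensor."""
            evals = np.linalg.eigvalsh(D)[::-1]   # sort eigenvalues l1 >= l2 >= l3
            l1, l2, l3 = evals
            trace = evals.sum()
            c_linear = (l1 - l2) / trace          # prolate ("cigar") diffusion
            c_planar = 2.0 * (l2 - l3) / trace    # oblate ("disk") diffusion
            c_spherical = 3.0 * l3 / trace        # isotropic diffusion
            return c_linear, c_planar, c_spherical  # the three measures sum to 1

        # Example: a strongly anisotropic tensor typical of white matter (values in mm^2/s)
        D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])
        print(tensor_shape_measures(D))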

  7. Image processing

    NARCIS (Netherlands)

    Heijden, van der F.; Spreeuwers, L.J.; Blanken, H.M.; Vries de, A.P.; Blok, H.E.; Feng, L

    2007-01-01

    The field of image processing addresses the handling and analysis of images for many purposes, using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by industrial...

  8. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward...)

  9. Image processing and analysis with graphs theory and practice

    CERN Document Server

    Lézoray, Olivier

    2012-01-01

    Covering the theoretical aspects of image processing and analysis through the use of graphs in the representation and analysis of objects, Image Processing and Analysis with Graphs: Theory and Practice also demonstrates how these concepts are indispensable for the design of cutting-edge solutions for real-world applications. It explores new applications in computational photography, image and video processing, computer graphics, recognition, and medical and biomedical imaging. With the explosive growth in image production, in everything from digital photographs to medical scans, there has been a drast...

  10. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  11. Imaging Heat and Mass Transfer Processes Visualization and Analysis

    CERN Document Server

    Panigrahi, Pradipta Kumar

    2013-01-01

    Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three-dimensional. The interpretation and analysis of the recorded images are discussed in the text.

  12. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  13. Theoretical analysis of radiographic images by nonstationary Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, K.; Uchida, S. (Gifu Univ. (Japan)); Yamada, I.

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.

  14. Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes

    Science.gov (United States)

    Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
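
    The ensemble averages and autocorrelations mentioned here follow, for a filtered-Poisson (shot-noise) model, from Campbell's theorem; a standard statement for a nonstationary rate λ(t) and detector impulse response h(t) is given below (the paper's screen-film expressions will differ in detail):

        X(t) = \sum_i h(t - t_i), \quad \{t_i\} \text{ a Poisson process with rate } \lambda(t)

        \mathbb{E}[X(t)] = \int \lambda(s)\, h(t - s)\, \mathrm{d}s, \qquad
        \operatorname{Cov}[X(t_1), X(t_2)] = \int \lambda(s)\, h(t_1 - s)\, h(t_2 - s)\, \mathrm{d}s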

  15. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory, which take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
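
    A minimal scikit-image sketch of the acquisition-to-morphology pipeline described above; the file name, threshold polarity and size filter are illustrative assumptions, not the chapter's settings:

        import numpy as np
        from skimage import io, filters, measure, morphology

        # Hypothetical input file; any grayscale micrograph of activated sludge works.
        img = io.imread("sludge_floc.png", as_gray=True)

        smoothed = filters.gaussian(img, sigma=2)       # preprocessing: denoise
        thresh = filters.threshold_otsu(smoothed)       # global segmentation
        binary = smoothed < thresh                      # assume flocs darker than background
        binary = morphology.remove_small_objects(binary, min_size=50)

        labels = measure.label(binary)
        for region in measure.regionprops(labels):
            # Morphological descriptors of the kind correlated with SVI/TSSol in the text
            print(region.label, region.area, region.perimeter, region.eccentricity)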

  16. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high-level vision problems. We focus on the problem of tracking a variable number of moving objects through...

  17. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  18. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. But insight into the 3D geometry of the microstructure of materials, and measurement of its characteristics, are becoming more and more a prerequisite for choosing and designing advanced materials according to desired product properties. This first book on the processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis...

  19. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis (Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading); Computer Imaging Systems (Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading). Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis (Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...)

  20. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...
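
    The original is an ImageJ macro; the following Python sketch merely illustrates the same first steps (largest-object selection standing in for table removal, and a vertical profile from the upper border toward the image center). The file name and threshold are illustrative assumptions:

        import numpy as np
        from skimage import io, filters, measure

        ct = io.imread("ct_slice.png", as_gray=True)    # hypothetical CT slice

        body = ct > filters.threshold_otsu(ct)          # crude body/background split
        labels = measure.label(body)
        regions = measure.regionprops(labels)
        animal = max(regions, key=lambda r: r.area)     # keep largest object: the animal;
                                                        # smaller objects (e.g. table) dropped
        cy, cx = map(int, animal.centroid)              # after centering, the animal's
                                                        # centroid column ~ image mid-column

        # Vertical profile from the mid point of the upper border to the image center,
        # along which outer/inner tissue borders would be detected and measured.
        profile = ct[0:cy, cx]
        print("profile length (px):", profile.size)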

  1. Image processing and analysis using neural networks for optometry area

    Science.gov (United States)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  2. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets.

  3. Spectral Image Processing and Analysis of the Archimedes Palimpsest

    Science.gov (United States)

    2011-09-01

    Roger L. Easton, Jr., William A. Christens-Barry, Keith T. Knox (Chester F..., contact details truncated; e-mail: easton@cis.rit.edu; web: www.cis.rit.edu/people/faculty/easton). ABSTRACT: The Archimedes Palimpsest is a 10th-century parchment... rendering. SIGNIFICANCE OF THE CODEX: Almost everything known about the work of Archimedes has been gleaned from three codex manuscripts. The first...

  4. DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging.

    Science.gov (United States)

    Yan, Chao-Gan; Wang, Xin-Di; Zuo, Xi-Nian; Zang, Yu-Feng

    2016-07-01

    Brain imaging efforts are being increasingly devoted to decoding the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need for user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, carry a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies.

  5. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  6. Image processing analysis of traditional Gestalt vision experiments

    Science.gov (United States)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of all the parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial-comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; it does not require complex cognitive processes to calculate appearances in familiar Gestalt experiments.
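
    The low-spatial-frequency argument can be illustrated in a few lines of Python; the stimulus geometry below is schematic, not the published displays:

        import numpy as np
        from scipy import ndimage

        # Two physically identical gray targets sit on white and black stripes; after
        # low-pass filtering, their local surrounds differ, mirroring the appearance
        # difference in White's-Effect-style displays.
        img = np.zeros((120, 120))
        for x in range(0, 120, 20):
            img[:, x:x + 10] = 1.0            # alternating white/black vertical bars

        img[50:70, 0:10] = 0.5                # identical gray target on a white bar
        img[50:70, 10:20] = 0.5               # identical gray target on a black bar

        low = ndimage.gaussian_filter(img, sigma=8)   # keep only low frequencies
        print("target-on-white surround:", low[60, 5])
        print("target-on-black surround:", low[60, 15])  # differs despite identical targets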

  7. Unraveling cell processes: interference imaging interwoven with data analysis

    DEFF Research Database (Denmark)

    Brazhe, Nadezda; Brazhe, Alexey; Pavlov, A N

    2006-01-01

    The paper presents results on the application of interference microscopy and wavelet analysis for cell visualization and studies of cell dynamics. We demonstrate that interference imaging of erythrocytes can reveal reorganization of the cytoskeleton and inhomogeneity in the distribution of hemoglobin... properties differ from cell type to cell type and depend on the cellular compartment. Our results suggest that low-frequency variations (0.1-0.6 Hz) result from plasma membrane processes and that higher-frequency variations (20-26 Hz) are related to the movement of vesicles. Using double-wavelet analysis, we study the modulation of the 1 Hz rhythm in neurons and reveal its changes under depolarization and hyperpolarization of the plasma membrane. We conclude that interference microscopy combined with wavelet analysis is a useful technique for non-invasive cell studies, cell visualization, and investigation...

  8. Post-processing for statistical image analysis in light microscopy.

    Science.gov (United States)

    Cardullo, Richard A; Hinchcliffe, Edward H

    2013-01-01

    Image processing serves a number of important functions, including noise reduction, contrast enhancement, and feature extraction. Whatever the final goal, an understanding of the nature of image acquisition and digitization, and of the subsequent mathematical manipulations of the digitized image, is essential. Here we discuss the basic mathematical and statistical processes routinely used by microscopists to produce high-quality digital images and to extract key features of interest using a variety of extraction and thresholding tools.
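
    A minimal sketch of the chain named above (noise reduction, contrast enhancement, thresholding) using scikit-image; the sample image and parameters are illustrative:

        import numpy as np
        from skimage import data, exposure, filters

        img = data.camera() / 255.0                          # sample image stands in for a micrograph

        smooth = filters.gaussian(img, sigma=1.0)            # noise reduction
        enhanced = exposure.equalize_adapthist(smooth)       # local contrast enhancement (CLAHE)
        mask = enhanced > filters.threshold_otsu(enhanced)   # feature extraction by thresholding
        print("foreground fraction:", mask.mean())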

  9. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  10. Human movement analysis with image processing in real time

    Science.gov (United States)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport science and biomechanics associate them with the performance of the sportsman or of the patient. A trainer or doctor who knows the motion properties can therefore correct the gesture of the subject to obtain a better performance. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most studies are now based on image processing. Often the systems work at the T.V. standard (50 frames per second), which permits the study of only very slow gestures. A human operator analyzing the digitized sequence of the film manually makes for a very expensive, especially long and imprecise operation. On these grounds, many human movement analysis systems were implemented. They consist of: markers, which are fixed to the anatomically interesting points on the subject in motion; and image compression, which is the art of coding picture data. Generally the compression is limited to the calculation of centroid coordinates for each marker. These systems differ from one another in image acquisition and marker detection.
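
    The "compression to centroid coordinates" step can be sketched as follows; the synthetic frame and threshold are illustrative assumptions:

        import numpy as np
        from skimage import measure

        # Reduce each bright marker blob in a frame to a single (y, x) point,
        # as described for the marker-based systems.
        frame = np.zeros((100, 100))
        frame[20:24, 30:34] = 1.0               # two synthetic reflective markers
        frame[60:64, 70:74] = 1.0

        labels = measure.label(frame > 0.5)
        centroids = [r.centroid for r in measure.regionprops(labels)]
        print(centroids)                        # kinematics are computed from these points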

  11. Automated microstructural analysis of titanium alloys using digital image processing

    Science.gov (United States)

    Campbell, A.; Murray, P.; Yakushina, E.; Marshall, S.; Ion, W.

    2017-02-01

    Titanium is a material that exhibits many desirable properties, including a very high strength-to-weight ratio and corrosion resistance. However, the specific properties of any component depend upon the microstructure of the material, which varies with the manufacturing process. This means it is often necessary to analyse the microstructure when designing new processes or performing quality assurance on manufactured parts. For Ti6Al4V, grain size analysis is typically performed manually by expert material scientists, as the complicated microstructure of the material means that, to the authors' knowledge, no existing software reliably identifies the grain boundaries. This manual process is time consuming and offers low repeatability due to human error and subjectivity. In this paper, we propose a new, automated method to segment microstructural images of a Ti6Al4V alloy into its constituent grains and produce measurements. The results of applying this technique are evaluated by comparing the measurements obtained by different analysis methods. By using measurements from a complete manual segmentation as a benchmark, we explore the reliability of the current manual estimations of grain size and contrast this with the improvements offered by our approach.

  12. Image Processing

    Science.gov (United States)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Flight Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  13. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.

    Science.gov (United States)

    Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

    2008-04-01

    We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem, since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent earliest signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83 ± 4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%).
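
    A sketch of the classification step, Fisher's linear discriminant on colour features, using scikit-learn; the feature vectors below are synthetic placeholders, not the paper's data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        # Per-candidate-region RGB features: exudates tend to be bright and yellowish.
        X_exudate = rng.normal(loc=[0.9, 0.8, 0.3], scale=0.05, size=(100, 3))
        X_background = rng.normal(loc=[0.6, 0.3, 0.2], scale=0.05, size=(100, 3))
        X = np.vstack([X_exudate, X_background])
        y = np.array([1] * 100 + [0] * 100)

        lda = LinearDiscriminantAnalysis()
        lda.fit(X, y)
        print(lda.predict([[0.88, 0.79, 0.31]]))  # -> [1], classified as exudate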

  14. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the identification and quantification of the accumulation reaction without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  15. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  16. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-01-12

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  17. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included human airways from the oral cavity up to the seventh generation of branching. Deposited fibers were rinsed from the model and placed on nitrocellulose filters after the delivery. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.

  18. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included human airways from the oral cavity up to the seventh generation of branching. Deposited fibers were rinsed from the model and placed on nitrocellulose filters after the delivery. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.

  19. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
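
    The bias/dark/flat steps listed in the contents reduce, in their simplest textbook form, to the arithmetic below; synthetic arrays stand in for FITS frames, and real pipelines add exposure scaling, clipping and overscan handling:

        import numpy as np

        raw = np.random.default_rng(1).poisson(1200.0, (64, 64)).astype(float)
        bias = np.full((64, 64), 300.0)   # master bias frame
        dark = np.full((64, 64), 10.0)    # master dark, already exposure-matched
        flat = np.full((64, 64), 1.0)     # master flat, normalized to unit mean

        calibrated = (raw - bias - dark) / flat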

  20. The interpretation of X-ray computed microtomography images of rocks as an application of volume image processing and analysis

    OpenAIRE

    Kaczmarczyk, J.; Dohnalik, M; Zalewska, J; Cnudde, Veerle

    2010-01-01

    X-ray computed microtomography (CMT) is a non-destructive method of investigating the internal structure of examined objects. During the reconstruction of CMT measurement data, large volume images are generated. Therefore, image processing and analysis are very important steps in CMT data interpretation. The first step in analyzing rocks is image segmentation. Differences in density are shown on the reconstructed image as differences in the gray level of voxels, so the proper threshold...

  1. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

    CERN Document Server

    Mighell, Kenneth John

    2010-01-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analy...

  2. Processing and statistical analysis of soil-root images

    Science.gov (United States)

    Razavi, Bahar S.; Hoang, Duyen; Kuzyakov, Yakov

    2016-04-01

    The importance of hotspots such as the rhizosphere, the small soil volume that surrounds and is influenced by plant roots, calls for spatially explicit methods to visualize the distribution of microbial activities in this active site (Kuzyakov and Blagodatskaya, 2015). The zymography technique has previously been adapted to visualize the spatial dynamics of enzyme activities in the rhizosphere (Spohn and Kuzyakov, 2014). Following further development of soil zymography, to obtain a higher resolution of enzyme activities, we aimed to 1) quantify the images, and 2) determine whether the pattern (e.g. distribution of hotspots in space) is clumped (aggregated) or regular (dispersed). To this end, we incubated soil-filled rhizoboxes with maize Zea mays L. and without maize (control box) for two weeks. In situ soil zymography was applied to visualize the enzymatic activity of β-glucosidase and phosphatase at the soil-root interface. The spatial resolution of fluorescent images was improved by direct application of a substrate-saturated membrane to the soil-root system. Furthermore, we applied spatial point pattern analysis to determine whether the pattern is clumped or regular. Our results demonstrated that the distribution of hotspots in the rhizosphere is clumped (aggregated), compared to the control box without a plant, which showed a regular (dispersed) pattern. These patterns were similar in all three replicates and for both enzymes. We conclude that improved zymography is a promising in situ technique to identify, analyze, visualize and quantify the spatial distribution of enzyme activities in the rhizosphere. Moreover, such different patterns should be considered in assessments and modeling of rhizosphere extension and the corresponding effects on soil properties and functions. Key words: rhizosphere, spatial point pattern, enzyme activity, zymography, maize.
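
    One standard test for the clumped-versus-dispersed question posed above is the Clark-Evans aggregation index; the abstract does not name its exact statistic, and edge corrections are omitted here:

        import numpy as np
        from scipy.spatial import cKDTree

        def clark_evans(points, area):
            """Clark-Evans index R for a 2-D point pattern.

            R < 1 indicates a clumped (aggregated) pattern, R ~ 1 random,
            R > 1 regular (dispersed)."""
            pts = np.asarray(points, float)
            d, _ = cKDTree(pts).query(pts, k=2)   # k=2: first neighbour is the point itself
            observed = d[:, 1].mean()             # mean nearest-neighbour distance
            expected = 0.5 / np.sqrt(len(pts) / area)
            return observed / expected

        rng = np.random.default_rng(0)
        hotspots = rng.uniform(0, 10, size=(200, 2))  # synthetic hotspot coordinates, 10x10 box
        print(clark_evans(hotspots, area=100.0))      # ~1 for a random pattern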

  3. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval.

    Science.gov (United States)

    Névéol, Aurélie; Deserno, Thomas M; Darmoni, Stéfan J; Güld, Mark Oliver; Aronson, Alan R

    2008-09-18

    One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the "gold standard" codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system.

  4. The methodology of wavelet analysis as a tool for cytology preparations image processing

    Directory of Open Access Journals (Sweden)

    Vyacheslav V. Lyashenko

    2016-09-01

    Conclusion: We consider the possibility and feasibility of applying wavelet analysis for processing images of cytology preparations. This improves the quality of the analysis of cytology preparation images and thereby allows a proper diagnosis. [Cukurova Med J 2016; 41(3): 453-463]

  5. COMPARATIVE ANALYSIS OF SATELLITE IMAGE PRE-PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    T. Sree Sharmila

    2013-01-01

    Satellite images are corrupted by noise during their acquisition and transmission. Removing the noise from an image by attenuating the high-frequency image components removes some important details as well. In order to retain the useful information and improve the visual appearance, effective denoising and resolution enhancement techniques are required. In this research, the Hybrid Directional Lifting (HDL) technique is proposed to retain the important details of the image and improve the visual appearance. A Discrete Wavelet Transform (DWT) based interpolation technique is developed for enhancing the resolution of the denoised image. The performance of the proposed techniques is tested on Land Remote-Sensing Satellite (LANDSAT) images, using the quantitative performance measure of Peak Signal to Noise Ratio (PSNR) and computation time to show the significance of the proposed techniques. The PSNR of the HDL technique increases by 1.02 dB compared to the standard denoising technique, and that of the DWT-based interpolation technique increases by 3.94 dB. The experimental results reveal that the newly developed image denoising and resolution enhancement techniques improve the image's visual quality with rich textures.
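
    The PSNR figure of merit used above is easily computed; a minimal sketch with synthetic data:

        import numpy as np

        def psnr(reference, test, peak=255.0):
            """Peak Signal to Noise Ratio in dB."""
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        a = np.random.default_rng(0).integers(0, 256, (64, 64))
        noisy = np.clip(a + np.random.default_rng(1).normal(0, 5, a.shape), 0, 255)
        print(psnr(a, noisy))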

  6. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    Science.gov (United States)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of the morphological variability and state of health of the horse navicular bone. The assumption was that the classification could be based on information contained in two-dimensional digital images of the navicular bone together with information on horse health. The first step in the research was to define the classes of analyzed bones, and then to use computer image analysis methods to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of hooves, grade of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper is an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can provide an introduction to the study of a non-invasive way to assess the condition of the horse navicular bone.

  7. Quantitative analysis of geomorphic processes using satellite image data at different scales

    Science.gov (United States)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates; therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial resolution of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered, or even suspected, in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is less than 200 m in x-y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration, in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.

  8. Image Processing Software

    Science.gov (United States)

    Bosio, M. A.

    1990-11-01

    ABSTRACT: A brief description of astronomical image-processing software is presented. This software was developed on a Digital MicroVAX II computer system. Keywords: DATA ANALYSIS - IMAGE PROCESSING

  9. CT perfusion image processing: analysis of liver tumors

    OpenAIRE

    D’Antò, Michela

    2013-01-01

    Perfusion CT imaging of the liver has the potential to improve evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still some difficulties in the accurate quantification of perfusion parameters, due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies ab...

  10. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging who wish to update their skills and understanding with the latest techniques in image analysis. The book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained, they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e...

  11. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    Science.gov (United States)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of Galileo optical communications from an Earth-based Transmitter (GOPEX) signal by processing multiple signal receptions extracted from the camera images is described. The model decomposes a multi-signal reception camera image into a set of images so that the location of the pixel being illuminated is known a priori and the laser can illuminate only one pixel at each reception instance. Numerical results show that if effects on the pointing error due to atmospheric refraction can be controlled to between 20 and 30 microrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 microrad when the spacecraft is 30 million km away from Earth. Furthermore, increasing the number of receptions for processing beyond 5 will not produce a significant detection probability advantage.

  12. Irregularities and scaling in signal and image processing: multifractal analysis

    Science.gov (United States)

    Abry, Patrice; Jaffard, Stéphane; Wendt, Herwig

    2015-03-01

    B. Mandelbrot gave a new birth to the notions of scale invariance, self-similarity and non-integer dimensions, gathering them as the founding corner-stones used to build up fractal geometry. The first purpose of the present contribution is to review and relate together these key notions, explore their interplay and show that they are different facets of a single intuition. Second, we will explain how these notions lead to the derivation of the mathematical tools underlying multifractal analysis. Third, we will reformulate these theoretical tools into a wavelet framework, hence enabling their better theoretical understanding as well as their efficient practical implementation. B. Mandelbrot used his concept of fractal geometry to analyze real-world applications of very different natures. As a tribute to his work, applications of various origins, and where multifractal analysis proved fruitful, are revisited to illustrate the theoretical developments proposed here.
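
    A compact statement of the wavelet-based multifractal formalism discussed above, in one common wavelet-leader version (normalizations vary across papers): with L_X(j,k) the wavelet leaders of X at scale 2^j, the structure functions obey a power law whose exponents ζ(q) yield the multifractal spectrum D(h) by a Legendre transform:

        S(q, j) = \frac{1}{n_j} \sum_{k=1}^{n_j} |L_X(j,k)|^q \simeq C_q\, 2^{j \zeta(q)}, \qquad
        D(h) = \inf_{q \neq 0}\left( 1 + q h - \zeta(q) \right)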

  13. Image-Processing Program

    Science.gov (United States)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  14. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for future health care. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academics. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  15. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly Parallel Image-Analysis Algorithms

    Science.gov (United States)

    Mighell, Kenneth John

    2010-10-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called crblaster, which does cosmic-ray rejection of CCD images using the embarrassingly parallel l.a.cosmic algorithm. crblaster is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. crblaster uses a two-dimensional image partitioning algorithm that partitions an input image into N rectangular subimages of nearly equal area; the subimages include sufficient additional pixels along common image partition edges such that the need for communication between computer processes is eliminated. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The crblaster source code is freely available at the official application Web site at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800 × 800 pixel Hubble Space Telescope WFPC2 image takes 44 s with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8 GHz quad-core Intel Xeon processors. crblaster is 7.4 times faster when processing the same image on a single core on the same machine. Processing the same image with crblaster simultaneously on all eight cores of the same machine takes 0.875 s—which is a speedup factor of 50.3 times faster than the
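
    The overlapping-tile partitioning described above does not depend on the C/MPI implementation and can be sketched in a few lines. The following Python sketch (function name and halo width are illustrative, not taken from crblaster) splits an image into a grid of nearly equal rectangles, each padded with extra pixels along shared edges so that a neighbourhood operation such as cosmic-ray rejection needs no interprocess communication:

        import numpy as np

        def partition_with_halo(image, n_rows, n_cols, halo):
            # Split image into n_rows x n_cols tiles of nearly equal area,
            # each grown by `halo` pixels along shared edges (ghost pixels).
            h, w = image.shape
            row_edges = np.linspace(0, h, n_rows + 1).astype(int)
            col_edges = np.linspace(0, w, n_cols + 1).astype(int)
            tiles = []
            for i in range(n_rows):
                for j in range(n_cols):
                    r0 = max(row_edges[i] - halo, 0)
                    r1 = min(row_edges[i + 1] + halo, h)
                    c0 = max(col_edges[j] - halo, 0)
                    c1 = min(col_edges[j + 1] + halo, w)
                    tiles.append(image[r0:r1, c0:c1].copy())
            return tiles

        # Example: an 800 x 800 frame split for 8 workers (4 x 2 grid)
        frame = np.zeros((800, 800), dtype=np.float32)
        tiles = partition_with_halo(frame, 4, 2, halo=5)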

  16. Performance of an image analysis processing system for hen tracking in an environmental preference chamber.

    Science.gov (United States)

    Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S

    2014-10-01

    Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed.
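
    The ellipse-fitting detection step can be illustrated with OpenCV; the thresholding strategy and minimum blob size below are assumptions for the sketch, not parameters from the paper:

        import cv2

        def hen_ellipse(frame_gray, min_area=500):
            # Threshold the top-view frame (assumes a light hen on a darker
            # floor), take the largest blob, and fit an ellipse to it.
            _, mask = cv2.threshold(frame_gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            blobs = [c for c in contours
                     if cv2.contourArea(c) > min_area and len(c) >= 5]
            if not blobs:
                return None  # compartment judged empty
            centre, axes, angle = cv2.fitEllipse(max(blobs, key=cv2.contourArea))
            return centre, axes, angle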

  17. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method.

    Science.gov (United States)

    Lu, Zhaolin; Hu, Xiaojuan; Lu, Yao

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometres to millimetres, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, the singulated-arrangement and ultrasonic-dispersion methods were used to separate into individual particles the powders that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and solves the size inconsistencies of sieving analysis.
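
    The measurement step can be sketched with scikit-image; the Otsu threshold, the assumption that particles are brighter than the background, and the minimum object size are illustrative choices rather than the paper's settings:

        import numpy as np
        from skimage import filters, measure, morphology

        def particle_morphology(gray):
            # Segment particles and return size (equivalent diameter)
            # and shape (elongation) descriptors per particle.
            binary = gray > filters.threshold_otsu(gray)
            binary = morphology.remove_small_objects(binary, min_size=20)
            labels = measure.label(binary)
            props = measure.regionprops(labels)
            sizes = np.array([p.equivalent_diameter for p in props])
            elongation = np.array([p.major_axis_length /
                                   max(p.minor_axis_length, 1e-6)
                                   for p in props])
            return sizes, elongation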

  18. Toolkits and Software for Developing Biomedical Image Processing and Analysis Applications

    Science.gov (United States)

    Wolf, Ivo

    Solutions in biomedical image processing and analysis usually consist of much more than a single method. Typically, a whole pipeline of algorithms is necessary, combined with visualization components to display and verify the results, as well as possibilities to interact with the data. Therefore, successful research in biomedical image processing and analysis requires a solid base to start from. This is the case regardless of whether the goal is the development of a new method (e.g., for segmentation) or the solution of a specific task (e.g., computer-assisted planning of surgery).

  19. An advanced software suite for the processing and analysis of silicon luminescence images

    Science.gov (United States)

    Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.

    2017-06-01

    Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly in the field of photovoltaics, where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in the extensive literature, yet these techniques are often used only qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom-built, Matlab-based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.
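
    One of the listed capabilities, point-spread-function deconvolution, can be illustrated with the standard Richardson-Lucy routine; this is a generic sketch rather than the suite's own implementation, and it assumes the PSF has already been measured:

        from skimage import restoration

        def deconvolve_luminescence(image, psf, iterations=30):
            # Richardson-Lucy deconvolution of a luminescence image with a
            # measured point spread function; input scaled to [0, 1].
            img = image.astype(float) / image.max()
            return restoration.richardson_lucy(img, psf, iterations)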

  20. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At the moment, color sorting is one of the most promising methods of mineral raw material enrichment. This method is based on the registration of color differences between images of the analyzed objects. As is generally known, the difficulty of delimiting close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can be related to a wrong choice of color model and to incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the need to reconfigure the image processing features when the type of analyzed minerals changes, because the optical properties of mineral samples vary from one mineral deposit to another. Searching for suitable values of the image processing features is therefore a non-trivial task, and this task does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria of mineral sample separation. It is assumed that the reconfiguration of image processing features should be carried out by machine learning, but in practice it is done by adjusting the operating parameters until they are satisfactory for one specific enrichment task. This approach usually leads to a machine vision system that is unable to rapidly estimate the concentration rate of the analyzed mineral ore using the color sorting method. This paper presents the results of research aimed at addressing the mentioned shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis of low-contrast minerals using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed; this algorithm automatically determines the criteria of mineral sample separation based on an analysis of representative mineral samples. Experimental studies of the proposed algorithm

  1. WHIPPET: a collaborative software environment for medical image processing and analysis

    Science.gov (United States)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task request can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.
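
    The glue-script idea reduces to chaining external tools on files, with a format conversion bridging incompatible input/output formats. In the Python sketch below, every command name is a placeholder, not the actual CLI of any of the packages named above:

        import subprocess

        def run_stage(cmd):
            # Run one external pipeline stage; abort the pipeline on failure.
            subprocess.run(cmd, check=True)

        # Hypothetical three-stage pipeline: convert, segment, report.
        run_stage(["convert_tool", "input.dcm", "work.nii"])
        run_stage(["segment_tool", "work.nii", "seg.nii"])
        run_stage(["report_tool", "seg.nii", "report.txt"])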

  2. Discrete Fourier analysis and wavelets applications to signal and image processing

    CERN Document Server

    Broughton, S Allen

    2008-01-01

    A thorough guide to the classical and contemporary mathematical methods of modern signal and image processing. Discrete Fourier Analysis and Wavelets presents a thorough introduction to the mathematical foundations of signal and image processing. Key concepts and applications are addressed in a thought-provoking manner and are implemented using vector, matrix, and linear algebra methods. With a balanced focus on mathematical theory and computational techniques, this self-contained book equips readers with the essential knowledge needed to transition smoothly from mathematical models to practic
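
    A typical exercise from this material is a discrete-Fourier-transform low-pass filter: transform the image, keep only a central block of the shifted spectrum, and invert. A minimal NumPy sketch (the retained fraction is an arbitrary choice):

        import numpy as np

        def fft_lowpass(image, keep_fraction=0.1):
            # Zero all but a central fraction of the centred spectrum,
            # then invert the transform.
            F = np.fft.fftshift(np.fft.fft2(image))
            h, w = F.shape
            kh = int(h * keep_fraction / 2)
            kw = int(w * keep_fraction / 2)
            mask = np.zeros_like(F, dtype=bool)
            mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = True
            return np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, F, 0))))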

  3. Experimental design and instability analysis of coaxial electrospray process for microencapsulation of drugs and imaging agents.

    Science.gov (United States)

    Si, Ting; Zhang, Leilei; Li, Guangbin; Roberts, Cynthia J; Yin, Xiezhen; Xu, Ronald

    2013-07-01

    Recent developments in multimodal imaging and image-guided therapy require multilayered microparticles that encapsulate several imaging and therapeutic agents in the same carrier. However, commonly used microencapsulation processes have multiple limitations, such as low encapsulation efficiency and loss of bioactivity for the encapsulated biological cargos. To overcome these limitations, we have carried out both experimental and theoretical studies on coaxial electrospray of multilayered microparticles. On the experimental side, an improved coaxial electrospray setup has been developed. A customized coaxial needle assembly combined with two ring electrodes has been used to enhance the stability of the cone and widen the process parameter range of the stable cone-jet mode. With this assembly, we have obtained poly(lactide-co-glycolide) microparticles with fine morphology and uniform size distribution. On the theoretical side, an instability analysis of the coaxial electrified jet has been performed based on the experimental parameters. The effects of process parameters on the formation of different unstable modes have been studied. The reported experimental and theoretical research represents a significant step toward quantitative control and optimization of the coaxial electrospray process for the microencapsulation of multiple drugs and imaging agents in multimodal imaging and image-guided therapy.

  4. Image processing and classification procedures for analysis of sub-decimeter imagery acquired with an unmanned aircraft over arid rangelands

    Science.gov (United States)

    Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...

  5. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  6. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  7. A learning tool for optical and microwave satellite image processing and analysis

    Science.gov (United States)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool which contains a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named the Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the Learning Tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrix, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], part of whose functionality is present in our system. The learning tool also contains other modules besides the executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this Learning Tool self-contained. Users can download results after performing experiments.
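
    As a flavour of the optical experiments, a vegetation index such as NDVI reduces to a band ratio. A minimal sketch, assuming co-registered near-infrared and red bands supplied as arrays:

        import numpy as np

        def ndvi(nir, red):
            # Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / np.maximum(nir + red, 1e-9)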

  8. MiToBo - A Toolbox for Image Processing and Analysis

    Directory of Open Access Journals (Sweden)

    Birgit Möller

    2016-04-01

    Full Text Available MiToBo is a toolbox and Java library for solving basic as well as advanced image processing and analysis tasks. It features a rich collection of fundamental, intermediate and high-level image processing operators and algorithms, as well as a couple of sophisticated tools for specific biological and biomedical applications. These tools include operators for elucidating cellular morphology and locomotion, as well as operators for the characterization of certain intracellular particles and structures. MiToBo builds upon and integrates into the widely used image analysis software packages ImageJ and Fiji [11, 10], and all of its operators can easily be run in ImageJ and Fiji via a generic operator runner plugin. Alternatively, MiToBo operators can be run directly from the command line, and using its functionality as a library for developing one's own applications is also supported. Thanks to the Alida library [8], which forms the base of MiToBo, all operators share unified APIs fostering reusability, and graphical as well as command-line user interfaces for the operators are generated automatically. MiToBo is available from its website http://www.informatik.uni-halle.de/mitobo, on Github, and via an Apache Archiva Maven repository server, and it can easily be activated in Fiji via its own update site.

  9. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindne

  10. Parallel processing system for rapid analysis of speckle-photography and particle-image-velocimetry data.

    Science.gov (United States)

    Huntley, J M; Goldrein, H T; Benckert, L R

    1993-06-10

    An automated system has been constructed to process double-exposure speckle-photography and particle-image-velocimetry images. A 3 × 3 array of laser beams probes the photograph, forming nine fringe patterns in parallel; these are then analyzed sequentially by a digital computer using a two-dimensional Fourier-transform method. Results are presented showing that the random errors in the displacements measured by such a system approach the expected speckle-noise-limited performance, with a total analysis time per displacement vector of 160 ms.
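
    In the two-dimensional Fourier-transform method, each fringe pattern yields a dominant non-DC spectral peak whose position encodes the fringe spacing and orientation, and hence the local displacement. A minimal sketch of the peak-location step (calibration constants omitted):

        import numpy as np

        def fringe_frequency(fringes):
            # Locate the dominant non-DC peak of a fringe pattern in the
            # 2-D Fourier domain; returns signed frequencies in cycles/image.
            F = np.abs(np.fft.fftshift(np.fft.fft2(fringes - fringes.mean())))
            h, w = F.shape
            F[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3] = 0  # suppress residual DC
            py, px = np.unravel_index(np.argmax(F), F.shape)
            return px - w // 2, py - h // 2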

  11. Image analysis and mathematical modelling for the supervision of the dough fermentation process

    Science.gov (United States)

    Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd

    2016-10-01

    The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Besides the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is strongly influenced during this production stage. However, until now no on-line measurement system has been available that can supervise this important process step. In this investigation, the potential of an image analysis system that enables the determination of the volume of fermenting dough pieces is evaluated. The camera moves around the fermenting pieces and collects images of the objects from different angles (360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, which rests on the Bernoulli equation, the carbon dioxide production rate of the yeast cells, and the diffusion processes of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.

  12. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    Science.gov (United States)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of the dynamic responses of the bridge was based on correlating patches in time-lagged scenes in the video images and utilising a zero-mean normalised cross-correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series of dynamic displacement responses of the bridge were analysed to obtain the frequency-domain response. Frequencies obtained from the video analysis were checked against accelerometer data obtained from the bridge while carrying out the same set of experiments used for video-image-based recognition.
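
    The ZNCC patch correlation used here corresponds to OpenCV's TM_CCOEFF_NORMED template-matching mode. A minimal tracking sketch, assuming grayscale frames and a reference patch cut from the first frame; the vertical component of the returned positions gives the displacement time series for the frequency analysis:

        import cv2
        import numpy as np

        def track_patch(frames, template):
            # Locate the patch in every frame by zero-mean normalised
            # cross-correlation and return the (x, y) match positions.
            positions = []
            for frame in frames:
                score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, max_loc = cv2.minMaxLoc(score)
                positions.append(max_loc)
            return np.asarray(positions)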

  13. Microscopic method in processed animal proteins identification in feed: applications of image analysis

    Directory of Open Access Journals (Sweden)

    Savoini G

    2004-01-01

    Full Text Available Processed animal protein (PAP) detection and identification in feedstuffs can be difficult when distinguishing among land animals, i.e. poultry and mammals. Thus, the aim of this study was to evaluate the potential application of image analysis to PAP identification. For this purpose, four reference samples containing poultry meals and four reference samples containing mammalian meat and bone meals were used. Each sample was analyzed using the microscopic method (98/88/EC). Bone fragments are characterized by similar morphological features (colours, shape, lacunae shape, lacunae distribution, etc.) that make it difficult to distinguish between poultry and mammals. Using a digital camera and image analysis software, a total of 30 bone-fragment lacunae images at ×400 magnification were obtained. For each image, 29 geometric parameters related to the lacunae and 3 geometric parameters related to the canaliculae of the lacunae were measured with the image analysis software, yielding 960 observations. Of the 32 descriptors used, two, the area of the lacunae and their perimeter, were able to explain 96.15% of the total variability of the data, even though their contributions differed (83.97% vs. 12.18%, respectively). Through these two descriptors it was possible to distinguish between mammalian and poultry lacunae, except in two cases (6.6%), in which poultry lacunae were wrongly classified as mammalian. The latter can be related to the higher variability in lacunae area recorded for mammals compared to poultry. On the basis of the present study, it can be concluded that image analysis represents a promising tool for PAP identification that may provide accurate and reliable results in feedstuff characterisation, analysis and control.
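
    The variance decomposition reported above is what a principal component analysis of the descriptor matrix provides. A minimal sketch, with rows as lacunae and columns as the 32 geometric descriptors:

        import numpy as np

        def explained_variance(X):
            # PCA via SVD of the mean-centred descriptor matrix;
            # returns the fraction of total variance per component.
            Xc = X - X.mean(axis=0)
            s = np.linalg.svd(Xc, compute_uv=False)
            return s ** 2 / np.sum(s ** 2)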

  14. Heuristic Analysis Model of Nitrided Layers' Formation Consisting of the Image Processing and Analysis and Elements of Artificial Intelligence.

    Science.gov (United States)

    Wójcicki, Tomasz; Nowicki, Michał

    2016-04-01

    The article presents a selected area of research and development concerning methods of material analysis based on automatic image recognition of the investigated metallographic sections. The objectives of the material analyses for gas nitriding technology are described. The methods of preparing nitrided layers, the steps of the process, and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using digital image processing methods in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed analysis model of nitrided-layer formation, covering image processing and analysis techniques as well as selected methods of artificial intelligence, is presented. The model is divided into stages, which are formalized in order to better reproduce their actions. A validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed.

  15. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    Science.gov (United States)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.

  16. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
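
    The scalability analysis rests on Amdahl's law, S(n) = 1 / ((1 - p) + p/n), where p is the parallelisable fraction of the work and n the number of cores. A one-line check (the value of p below is illustrative, not taken from the paper):

        def amdahl_speedup(p, n):
            # Amdahl's law: speedup on n cores with parallel fraction p.
            return 1.0 / ((1.0 - p) + p / n)

        print(amdahl_speedup(0.95, 12))  # ~7.7x; p must approach 1 for near-linear scaling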

  17. Eye Redness Image Processing Techniques

    Science.gov (United States)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf

    2017-09-01

    The use of photographs for the assessment of ocular conditions has been suggested as a way to further standardize clinical procedures. The selection of the photographs to be used as scale reference images was subjective. Numerous methods have been proposed to assign eye redness scores by computational means. Image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines image processing and the implementation of image segmentation in eye redness assessment.

  18. Progression Analysis and Stage Discovery in Continuous Physiological Processes Using Image Computing

    Directory of Open Access Journals (Sweden)

    Ferrucci Luigi

    2010-01-01

    Full Text Available We propose an image computing-based method for quantitative analysis of continuous physiological processes that can be sensed by medical imaging and demonstrate its application to the analysis of morphological alterations of the bone structure, which correlate with the progression of osteoarthritis (OA). The purpose of the analysis is to quantitatively estimate OA progression in a fashion that can assist in understanding the pathophysiology of the disease. Ultimately, the texture analysis will be able to provide an alternative OA scoring method, which can potentially reflect the progression of the disease in a more direct fashion compared to the existing clinically utilized classification schemes based on radiology. This method can be useful not just for studying the nature of OA, but also for developing and testing the effect of drugs and treatments. While in this paper we demonstrate the application of the method to osteoarthritis, its generality makes it suitable for the analysis of other progressive clinical conditions that can be diagnosed and prognosed by using medical imaging.

  19. An integrated calcium imaging processing toolbox for the analysis of neuronal population dynamics.

    Directory of Open Access Journals (Sweden)

    Sebastián A Romano

    2017-06-01

    Full Text Available The development of new imaging and optogenetics techniques to study the dynamics of large neuronal circuits is generating datasets of unprecedented volume and complexity, demanding the development of appropriate analysis tools. We present a comprehensive computational workflow for the analysis of neuronal population calcium dynamics. The toolbox includes newly developed algorithms and interactive tools for image pre-processing and segmentation, estimation of significant single-neuron single-trial signals, mapping event-related neuronal responses, detection of activity-correlated neuronal clusters, exploration of population dynamics, and analysis of clusters' features against surrogate control datasets. The modules are integrated in a modular and versatile processing pipeline, adaptable to different needs. The clustering module is capable of detecting flexible, dynamically activated neuronal assemblies, consistent with the distributed population coding of the brain. We demonstrate the suitability of the toolbox for a variety of calcium imaging datasets. The toolbox open-source code, a step-by-step tutorial and a case study dataset are available at https://github.com/zebrain-lab/Toolbox-Romano-et-al.

  20. An integrated calcium imaging processing toolbox for the analysis of neuronal population dynamics.

    Science.gov (United States)

    Romano, Sebastián A; Pérez-Schuster, Verónica; Jouary, Adrien; Boulanger-Weill, Jonathan; Candeo, Alessia; Pietri, Thomas; Sumbre, Germán

    2017-06-01

    The development of new imaging and optogenetics techniques to study the dynamics of large neuronal circuits is generating datasets of unprecedented volume and complexity, demanding the development of appropriate analysis tools. We present a comprehensive computational workflow for the analysis of neuronal population calcium dynamics. The toolbox includes newly developed algorithms and interactive tools for image pre-processing and segmentation, estimation of significant single-neuron single-trial signals, mapping event-related neuronal responses, detection of activity-correlated neuronal clusters, exploration of population dynamics, and analysis of clusters' features against surrogate control datasets. The modules are integrated in a modular and versatile processing pipeline, adaptable to different needs. The clustering module is capable of detecting flexible, dynamically activated neuronal assemblies, consistent with the distributed population coding of the brain. We demonstrate the suitability of the toolbox for a variety of calcium imaging datasets. The toolbox open-source code, a step-by-step tutorial and a case study dataset are available at https://github.com/zebrain-lab/Toolbox-Romano-et-al.

  1. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  2. Study of ionospheric anomalies due to impact of typhoon using Principal Component Analysis and image processing

    Indian Academy of Sciences (India)

    Jyh-Woei Lin

    2012-08-01

    Principal Component Analysis (PCA) and image processing are used to determine Total Electron Content (TEC) anomalies in the F-layer of the ionosphere relating to Typhoon Nakri for 29 May, 2008 (UTC). PCA and image processing are applied to the global ionospheric map (GIM) with transforms conducted for the time period 12:00–14:00 UT on 29 May, 2008 when the wind was most intense. Results show that at a height of approximately 150–200 km the TEC anomaly is highly localized; however, it becomes more intense and widespread with height. Potential causes of these results are discussed with emphasis given to acoustic gravity waves caused by wind force.
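
    PCA of a time stack of TEC maps can be sketched via the singular value decomposition of the mean-removed data matrix; the eigenvalues rank the variance carried by each spatial pattern. This is a generic sketch, not the paper's exact transform:

        import numpy as np

        def tec_pca(tec_maps):
            # tec_maps: array of shape (time, lat, lon).
            t, h, w = tec_maps.shape
            X = tec_maps.reshape(t, h * w)
            X = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            eigenvalues = s ** 2 / (t - 1)
            spatial_patterns = Vt.reshape(-1, h, w)
            return eigenvalues, spatial_patterns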

  3. Flight Performance Analysis of an Image Processing Algorithm for Integrated Sense-and-Avoid Systems

    Directory of Open Access Journals (Sweden)

    Lidia Forlenza

    2012-01-01

    Full Text Available This paper is focused on the development and flight performance analysis of an image-processing technique aimed at detecting flying obstacles in airborne panchromatic images. It was developed within the framework of a research project which aims at realizing a prototypical obstacle detection and identification system characterized by a hierarchical multisensor configuration. This configuration comprises a radar, which is the main sensor, and four electro-optical cameras. The cameras are used as auxiliary sensors to the radar, in order to improve intruder-aircraft position measurement in terms of accuracy and data rate. The paper thoroughly describes the selection and customization of the developed image-processing techniques in order to guarantee the best results in terms of detection range, missed-detection rate, and false-alarm rate. Performance is evaluated on the basis of a large amount of images gathered during flight tests with an intruder aircraft. The improvement in terms of accuracy and data rate, compared with radar-only tracking, is quantitatively demonstrated.

  4. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured world-class regional research and development. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries

  5. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    Energy Technology Data Exchange (ETDEWEB)

    Toth, P., E-mail: toth.pal@uni-miskolc.hu [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States); Farrer, J.K. [Department of Physics and Astronomy, Brigham Young University, N283 ESC, Provo, UT 84602 (United States); Palotas, A.B. [Department of Combustion Technology and Thermal Energy, University of Miskolc, H3515, Miskolc-Egyetemvaros (Hungary); Lighty, J.S.; Eddings, E.G. [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States)

    2013-06-15

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles.

  6. Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium

    Science.gov (United States)

    Karwoski, Ronald A.; Bartholmai, Brian; Zavaletta, Vanessa A.; Holmes, David; Robb, Richard A.

    2008-03-01

    The goal of the Lung Tissue Research Consortium (LTRC) is to improve the management of diffuse lung diseases through a better understanding of the biology of Chronic Obstructive Pulmonary Disease (COPD) and fibrotic interstitial lung disease (ILD), including Idiopathic Pulmonary Fibrosis (IPF). Participants are subjected to a battery of tests including tissue biopsies, physiologic testing, clinical history reporting, and CT scanning of the chest. The LTRC is a repository from which investigators can request tissue specimens and test results, as well as semi-quantitative radiology reports, pathology reports, and automated quantitative image analysis results from the CT scan data, performed by the LTRC core laboratories. The LTRC Radiology Core Laboratory (RCL), in conjunction with the Biomedical Imaging Resource (BIR), has developed novel processing methods for comprehensive characterization of pulmonary processes on volumetric high-resolution CT scans to quantify how these diseases manifest in radiographic images. Specifically, the RCL has implemented a semi-automated method for segmenting the anatomical regions of the lungs and airways. In these anatomic regions, automated quantification of pathologic features of disease, including emphysema volumes and tissue classification, is performed using both threshold techniques and advanced texture measures to determine the extent and location of emphysema, ground-glass opacities, "honeycombing" (HC), "irregular linear" or "reticular" pulmonary infiltrates, and normal lung. Wall thickness measurements of the trachea and its branches to the 3rd and limited 4th order are also computed. The methods for processing, segmentation and quantification are described. The results are reviewed and verified by an expert radiologist following processing and stored in the public LTRC database for use by pulmonary researchers. To date, over 1200 CT scans have been processed by the RCL and the LTRC project is on target for recruitment of the
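
    Of the quantifications listed, the threshold technique for emphysema is the simplest to illustrate: a density mask counts lung voxels below a Hounsfield-unit cutoff. The -950 HU value below is a commonly used cutoff assumed for the sketch, not a value quoted from the LTRC pipeline:

        import numpy as np

        def emphysema_index(hu_volume, lung_mask, threshold=-950):
            # Percentage of segmented lung voxels below the HU threshold.
            lung = hu_volume[lung_mask]
            return 100.0 * np.count_nonzero(lung < threshold) / lung.size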

  7. Quantitative spray analysis of diesel fuel and its emulsions using digital image processing

    Directory of Open Access Journals (Sweden)

    Faik Ahmad Muneer El-Deen

    2015-01-01

    Full Text Available In the present work, an experimental investigation of the spray atomization of different liquids has been carried out. An air-assist atomizer operating at low injection pressures (4 and 6 bar) has been used to generate sprays of diesel fuel and of 5, 10, and 15% water-emulsified diesel, respectively. A Photron-SA4 high-speed camera has been used for spray imaging at 2000 fps. Twenty time intervals (from 5 to 100 ms, with a 5 ms time difference) were selected for analysis and comparison. Spray macroscopic characteristics (spray penetration, dispersion, cone angle, axial and dispersion velocities) have been extracted by a proposed technique based on image processing in Matlab, in which the maximum and minimum (horizontal and vertical) boundaries of the spray are detected, and from which the macroscopic spray characteristics are evaluated. The maximum error of this technique is 1.5% for the diesel spray and slightly higher for its emulsions.
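
    The boundary-based extraction can be sketched as follows; the original work used Matlab, so this Python version, with an assumed downward-travelling spray and an assumed pixel calibration, is illustrative only:

        import numpy as np

        def spray_metrics(binary_spray, nozzle_row=0, px_per_mm=10.0):
            # Penetration: farthest spray pixel from the nozzle row.
            # Cone angle: crude estimate from maximum width vs. axial reach.
            rows, cols = np.nonzero(binary_spray)
            if rows.size == 0:
                return 0.0, 0.0
            reach = rows.max() - nozzle_row
            penetration_mm = reach / px_per_mm
            half_width = (cols.max() - cols.min()) / 2.0
            cone_angle_deg = 2.0 * np.degrees(np.arctan2(half_width, max(reach, 1)))
            return penetration_mm, cone_angle_deg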

  8. Analysis Of Usefulness Of Satellite Image Processing Methods For Investigations Of Cultural Heritage Resources

    Science.gov (United States)

    Osińska-Skotak, Katarzyna; Zapłata, Rafał

    2015-12-01

    The paper presents an analysis of the usefulness of WorldView-2 satellite image processing for enhancing information concerning cultural heritage objects. WorldView-2 images are characterised by very high spatial resolution and high spectral resolution; that is why they create new possibilities for many applications, including investigations of the cultural heritage. The vicinity of Iłża has been selected as the test site for the presented investigations. The presented results are the outcome of research performed within the framework of the scientific project "Utilisation of laser scanning and remote sensing in protection, investigations and inventory of the cultural heritage. Development of non-invasive, digital methods of documenting and recognising the architectural and archaeological heritage", part of "The National Programme for the Development of Humanities" of the Minister of Science and Higher Education for the period 2012-2015.

  9. Neural image analysis in the process of quality assessment: domestic pig oocytes

    Science.gov (United States)

    Boniecki, P.; Przybył, J.; Kuzimska, T.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.

    2014-04-01

    The questions related to quality classification of animal oocytes are explored by numerous scientific and research centres. This research is important, particularly in the context of improving the breeding value of farm animals. The methods leading to the stimulation of normal development of a larger number of fertilised animal oocytes in extracorporeal conditions are of special importance. Growing interest in the techniques of supported reproduction resulted in searching for new, increasingly effective methods for quality assessment of mammalian gametes and embryos. Progress in the production of in vitro animal embryos in fact depends on proper classification of obtained oocytes. The aim of this paper was the development of an original method for quality assessment of oocytes, performed on the basis of their graphical presentation in the form of microscopic digital images. The classification process was implemented on the basis of the information coded in the form of microphotographic pictures of the oocytes of domestic pig, using the modern methods of neural image analysis.

  10. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    Science.gov (United States)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  11. Spatial analysis of ambient gamma dose equivalent rate data by means of digital image processing techniques.

    Science.gov (United States)

    Szabó, Katalin Zsuzsanna; Jordan, Gyozo; Petrik, Attila; Horváth, Ákos; Szabó, Csaba

    2017-01-01

    A detailed ambient gamma dose equivalent rate mapping based on field measurements at ground level and at 1 m height was carried out at 142 sites in an 80 × 90 km area of Pest County, Hungary. A detailed digital image processing analysis was carried out to identify and characterise spatial features such as outlying points, anomalous zones and linear edges in a smoothed TIN-interpolated surface. The applied method proceeds from the simple shaded relief model and digital cross-sections to the more complex gradient magnitude and gradient direction maps, the 2nd-derivative profile curvature map, the relief map and the lineament density map. Each map is analysed for statistical characteristics, and histogram-based image segmentation is used to delineate areas homogeneous with respect to the parameter values in these maps. Assessment of spatial anisotropy is implemented by 2D autocorrelogram and directional variogram analyses. The identified spatial features are related to underlying geological and tectonic conditions using GIS technology. Results show that detailed digital image processing is efficient in revealing the pattern present in field-measured ambient gamma dose equivalent rates, and that the patterns are related to regional-scale tectonic zones and surface sedimentary lithological conditions in the study area.
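
    The gradient magnitude and gradient direction maps at the heart of the edge analysis follow directly from the interpolated surface; a minimal sketch:

        import numpy as np

        def gradient_maps(surface):
            # First derivatives of the dose-rate surface along rows/columns.
            gy, gx = np.gradient(surface.astype(float))
            magnitude = np.hypot(gx, gy)
            direction = np.degrees(np.arctan2(gy, gx))  # -180..180 degrees
            return magnitude, direction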

  12. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    Science.gov (United States)

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy, low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release, and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The number and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task, and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, the number of platelets per string, and the distance between platelets. The software reduces analysis time and, more importantly, removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.

  13. Image processing mini manual

    Science.gov (United States)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  14. Image Processing Software

    Science.gov (United States)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  15. On-Line GIS Analysis and Image Processing for Geoportal Kielce/poland Development

    Science.gov (United States)

    Hejmanowska, B.; Głowienka, E.; Florek-Paszkowski, R.

    2016-06-01

    GIS databases are widely available on the Internet, but mainly for visualization with limited functionality; only very simple queries are possible, i.e. attribute queries, coordinate readout, line and area measurements, or pathfinding. Slightly more complex analyses (e.g. buffering or intersection) are rarely offered. This paper presents a concept for developing Geoportal functionality in the field of GIS analysis. Multi-Criteria Evaluation (MCE) is planned to be implemented in the web application. An OGC service is used for data acquisition from the server and for visualization of the results. Advanced GIS analysis is planned in PostGIS and Python. In the paper, an example of an MCE analysis based on Geoportal Kielce is presented. Another field where the Geoportal can be developed is the processing of newly available satellite images that are free of charge (Sentinel-2, Landsat 8, ASTER, WV-2). We are now witnessing a revolution in access to satellite imagery without charge. This should result in an increase of interest in the use of these data in various fields by a larger number of users, not necessarily specialists in remote sensing. Therefore, it seems reasonable to expand the functionality of Internet tools for data processing by non-specialists, by automating data collection and preparing predefined analyses.
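
    A common form of MCE is a weighted linear combination of normalised criterion rasters. The NumPy sketch below shows the kind of analysis such a web application could expose; the normalisation and weights are illustrative assumptions:

        import numpy as np

        def weighted_overlay(criteria, weights):
            # criteria: array of shape (k, rows, cols); weights: length k.
            c = np.asarray(criteria, dtype=float)
            mins = c.min(axis=(1, 2), keepdims=True)
            maxs = c.max(axis=(1, 2), keepdims=True)
            c = (c - mins) / np.maximum(maxs - mins, 1e-12)  # min-max normalise
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()                                   # weights sum to 1
            return np.tensordot(w, c, axes=1)                 # suitability surface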

  16. Observing hydrological processes: recent advancements in surface flow monitoring through image analysis

    Science.gov (United States)

    Tauro, Flavia; Grimaldi, Salvatore

    2017-04-01

    Recently, several efforts have been devoted to the design and development of innovative, and often unintended, approaches for the acquisition of hydrological data. Among such pioneering techniques, this presentation reports recent advancements towards the establishment of a novel noninvasive and potentially continuous methodology based on the acquisition and analysis of images for spatially distributed observations of the kinematics of surface waters. The approach aims at enabling rapid, affordable, and accurate surface flow monitoring of natural streams. Flow monitoring is an integral part of hydrological sciences and is essential for disaster risk reduction and the comprehension of natural phenomena. However, water processes are inherently complex to observe: they are characterized by multiscale and highly heterogeneous phenomena which have traditionally demanded sophisticated and costly measurement techniques. Challenges in the implementation of such techniques have also resulted in a lack of hydrological data during extreme events, in difficult-to-access environments, and at high temporal resolution. By combining low-cost yet high-resolution images and several velocimetry algorithms, noninvasive flow monitoring has been successfully conducted at highly heterogeneous scales, spanning from rills to highly turbulent streams and medium-scale rivers, with minimal supervision by external users. Noninvasive image data acquisition has also afforded observations in high flow conditions. The latest developments towards continuous flow monitoring at the catchment scale have entailed the development of a remote gauge-cam station on the Tiber River and the integration of flow monitoring through image analysis with unmanned aerial systems (UASs) technology. The gauge-cam station and the UAS platform both afford noninvasive image acquisition and calibration through an innovative laser-based setup. Compared to traditional point-based instrumentation, images allow for generating surface
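    Image-based surface velocimetry of this kind generally reduces to tracking the displacement of texture patches between consecutive frames by cross-correlation. A minimal sketch (Python/SciPy; the window and search sizes are arbitrary assumptions, not the authors' algorithms):

        import numpy as np
        from scipy.signal import fftconvolve

        def patch_displacement(frame_a, frame_b, y, x, win=32, search=8):
            """Shift of a win-sized patch between two frames, found by
            normalised cross-correlation over a +/- search neighbourhood
            (requires y, x >= search and the patch inside both frames)."""
            a = frame_a[y:y + win, x:x + win].astype(float)
            b = frame_b[y - search:y + win + search,
                        x - search:x + win + search].astype(float)
            a -= a.mean()
            b -= b.mean()
            corr = fftconvolve(b, a[::-1, ::-1], mode='valid')  # correlation map
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            return dy - search, dx - search  # pixel shift; scale by m/px and 1/dt for velocity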

  17. An Analysis of OpenACC Programming Model: Image Processing Algorithms as a Case Study

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2014-06-01

    Full Text Available Graphics processing units and similar accelerators have been intensively used in general-purpose computations for several years. In the last decade, GPU architecture and organization changed dramatically to support an ever-increasing demand for computing power. Along with changes in hardware, novel programming models have been proposed, such as NVIDIA's Compute Unified Device Architecture (CUDA) and the Khronos Group's Open Computing Language (OpenCL). Although numerous commercial and scientific applications have been developed using these two models, they still pose a significant challenge for less experienced users. There are users from various scientific and engineering communities who would like to speed up their applications without needing to deeply understand a low-level programming model and the underlying hardware. In 2011, the OpenACC programming model was launched. Much like OpenMP for multicore processors, OpenACC is a high-level, directive-based programming model for manycore processors such as GPUs. This paper presents an analysis of the OpenACC programming model and its applicability in typical domains like image processing. Three simple image processing algorithms were implemented for execution on the GPU with OpenACC. Their performance was compared with that of their sequential counterparts, and the results are briefly discussed.

  18. FIELD GROUND TRUTHING DATA COLLECTOR – A MOBILE TOOLKIT FOR IMAGE ANALYSIS AND PROCESSING

    Directory of Open Access Journals (Sweden)

    X. Meng

    2012-07-01

    Full Text Available Field Ground Truthing Data Collector is one of the four key components of the NASA-funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit provides comprehensive functions: 1) field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and the health conditions of ecosystems and environments in the vicinity of the flight field; 2) server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists and technologists, participating in webinars, or engaging in discussions with other e-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructure, data models, coupling methods between distributed geospatial data processing and the field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.

  19. Image Analysis on Detachment Process of Dust Cake on Ceramic Candle Filter

    Institute of Scientific and Technical Information of China (English)

    姬忠礼; 焦海青; 陈鸿海

    2005-01-01

    Based on the analysis of high-speed video images, the detachment behavior of dust cake from the ceramic candle filter surface during the pulse cleaning process is investigated. The influences of the dust cake loading, the reservoir pressure, and the filtration velocity on the cleaning effectiveness are analyzed. Experimental results show that there exists an optimum dust cake thickness for the pulse-cleaning process: for a thin dust cake, patchy cleaning occurs and the cleaning efficiency is low; if the dust cake is too thick, the pressure drop across the dust cake becomes higher and a higher reservoir pressure may be needed. There also exists an optimum reservoir pressure for a given filtration condition.

  20. Singular Spectrum Analysis: A Note on Data Processing for Fourier Transform Hyperspectral Imagers.

    Science.gov (United States)

    Rafert, J Bruce; Zabalza, Jaime; Marshall, Stephen; Ren, Jinchang

    2016-09-01

    Hyperspectral remote sensing is experiencing a dazzling proliferation of new sensors, platforms, systems, and applications with the introduction of novel, low-cost, low-weight sensors. Curiously, relatively little development is now occurring in the use of Fourier transform (FT) systems, which have the potential to operate at extremely high throughput without use of a slit or reductions in both spatial and spectral resolution that thin film based mosaic sensors introduce. This study introduces a new physics-based analytical framework called singular spectrum analysis (SSA) to process raw hyperspectral imagery collected with FT imagers that addresses some of the data processing issues associated with the use of the inverse FT. Synthetic interferogram data are analyzed using SSA, which adaptively decomposes the original synthetic interferogram into several independent components associated with the signal, photon and system noise, and the field illumination pattern.
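    The core of SSA is embedding the signal into a trajectory matrix, decomposing it by SVD, and reconstructing one elementary series per component by diagonal averaging. A minimal sketch (Python/NumPy; the window length L and the number of components retained are assumptions, not the paper's settings):

        import numpy as np

        def ssa_decompose(x, L):
            """Basic singular spectrum analysis of a 1D signal (e.g. one
            interferogram scan line): returns an array of elementary
            reconstructed series, one per singular value."""
            N = len(x)
            K = N - L + 1
            X = np.column_stack([x[i:i + L] for i in range(K)])  # L x K trajectory matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            comps = []
            for i in range(len(s)):
                Xi = s[i] * np.outer(U[:, i], Vt[i])             # rank-1 elementary matrix
                # diagonal averaging (Hankelisation) back to a series of length N
                comp = np.array([Xi[::-1].diagonal(k).mean() for k in range(-L + 1, K)])
                comps.append(comp)
            return np.array(comps)

        # signal/noise separation: keep the leading components, e.g.
        # series = ssa_decompose(interferogram, L=40); signal = series[:3].sum(axis=0)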

  1. Analysis of coatings appearance and durability testing induced surface defects using image capture/processing/analysis

    Directory of Open Access Journals (Sweden)

    Lee, F.

    2003-12-01

    Full Text Available There are no established and accepted techniques available for accurate characterization of appearance changes brought about by scratch and mar damage. Scratch and mar resistance is related to the ability of a coating to resist deformation. The appearance change is brought about by surface roughening, which in turn leads to a reduction in gloss and reflectivity. This paper focuses on the measurement of coating appearance by image analysis and gloss measurement.


  2. Image processing analysis of vortex dynamics of lobed jets from three-dimensional diffusers

    Energy Technology Data Exchange (ETDEWEB)

    Nastase, Ilinca [Technical University of Civil Engineering in Bucharest, Building Services Department, 66 Avenue Pache Protopopescu, 020396, Bucharest (Romania); Meslem, Amina; El Hassan, Mouhammad, E-mail: inastase@instal.utcb.ro, E-mail: ameslem@univ-lr.fr [LEPTIAB, University of La Rochelle, Pole Sciences et Technologie, avenue Michel Crepeau, 17042 La Rochelle (France)

    2011-12-01

    The passive control of jet flows with the aim of enhancing mixing and entrainment is of wide practical interest. Our purpose here is to develop new air diffusers for heating, ventilating and air conditioning systems by using lobed geometry nozzles, in order to improve users' thermal comfort. Two turbulent six-lobed air jets, issued from a lobed tubular nozzle and an innovative hemispherical lobed nozzle, were studied experimentally. It was shown that the proposed innovative concept of a lobed jet, which can be easily integrated in air diffusion devices, is very efficient regarding induction capability. A vortical dynamics analysis for the two jets is performed using a new method of image processing, namely dynamic mode decomposition (DMD). A validation of this method is also proposed, suggesting that the DMD image processing method succeeds in capturing the most dominant frequencies of the flow dynamics, which in our case are related to the quite special dynamics of the Kelvin-Helmholtz vortices.
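    DMD fits a low-rank linear operator between successive snapshots; its eigenvalues carry the dominant frequencies of the flow. A minimal "exact DMD" sketch (Python/NumPy; the truncation rank r and the snapshot layout are assumptions, not the authors' implementation):

        import numpy as np

        def dmd(snapshots, r):
            """Exact dynamic mode decomposition of a snapshot sequence
            (one flow snapshot per column), truncated to rank r."""
            X, Y = snapshots[:, :-1], snapshots[:, 1:]
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
            A_tilde = U.conj().T @ Y @ V / s      # low-rank operator mapping X -> Y
            eigvals, W = np.linalg.eig(A_tilde)
            modes = Y @ V / s @ W                 # DMD modes in the full state space
            return eigvals, modes

        # mode frequencies follow from the eigenvalues and the frame interval dt:
        # freqs = np.angle(eigvals) / (2 * np.pi * dt)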

  3. Digital geometry in image processing

    CERN Document Server

    Mukhopadhyay, Jayanta

    2013-01-01

    Exploring theories and applications developed during the last 30 years, Digital Geometry in Image Processing presents a mathematical treatment of the properties of digital metric spaces and their relevance in analyzing shapes in two and three dimensions. Unlike similar books, this one connects the two areas of image processing and digital geometry, highlighting important results of digital geometry that are currently used in image analysis and processing. The book discusses different digital geometries in multi-dimensional integral coordinate spaces. It also describes interesting properties of

  4. Image processing for optical mapping.

    Science.gov (United States)

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  5. Semi-automated image processing system for micro- to macro-scale analysis of immunohistopathology: application to ischemic brain tissue.

    Science.gov (United States)

    Wu, Chunyan; Zhao, Weizhao; Lin, Baowan; Ginsberg, Myron D

    2005-04-01

    Immunochemical staining techniques are commonly used to assess neuronal, astrocytic and microglial alterations in experimental neuroscience research, and in particular, are applied to tissues from animals subjected to ischemic stroke. Immunoreactivity of brain sections can be measured from digitized immunohistology slides so that quantitative assessment can be carried out by computer-assisted analysis. Conventional methods of analyzing immunohistology are based on image classification techniques applied to a specific anatomic location at high magnification. Such micro-scale localized image analysis limits one for further correlative studies with other imaging modalities on whole brain sections, which are of particular interest in experimental stroke research. This report presents a semi-automated image analysis method that performs convolution-based image classification on micro-scale images, extracts numerical data representing positive immunoreactivity from the processed micro-scale images and creates a corresponding quantitative macro-scale image. The present method utilizes several image-processing techniques to cope with variances in intensity distribution, as well as artifacts caused by light scattering or heterogeneity of antigen expression, which are commonly encountered in immunohistology. Micro-scale images are composed by a tiling function in a mosaic manner. Image classification is accomplished by the K-means clustering method at the relatively low-magnification micro-scale level in order to increase computation efficiency. The quantitative macro-scale image is suitable for correlative analysis with other imaging modalities. This method was applied to different immunostaining antibodies, such as endothelial barrier antigen (EBA), lectin, and glial fibrillary acidic protein (GFAP), on histology slides from animals subjected to middle cerebral artery occlusion by the intraluminal suture method. Reliability tests show that the results obtained from
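    The tiled, micro-scale K-means classification step described above might be sketched as follows (Python with scikit-learn; the tile size, cluster count and intensity-ordering of labels are assumptions, not the authors' implementation):

        import numpy as np
        from sklearn.cluster import KMeans

        def classify_tiles(image, tile=256, k=4):
            """Cluster pixel intensities tile by tile (mosaic fashion);
            clustering small tiles keeps computation time bounded."""
            out = np.zeros(image.shape, dtype=np.uint8)
            for y in range(0, image.shape[0], tile):
                for x in range(0, image.shape[1], tile):
                    block = image[y:y + tile, x:x + tile]
                    km = KMeans(n_clusters=k, n_init=10).fit(
                        block.reshape(-1, 1).astype(float))
                    # reorder labels by cluster-centre intensity so that
                    # classes are comparable from tile to tile
                    order = np.argsort(km.cluster_centers_.ravel()).argsort()
                    out[y:y + tile, x:x + tile] = order[km.labels_].reshape(block.shape)
            return out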

  6. Acoustic image-processing software

    Science.gov (United States)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the standard formats, PORTAL has a module "front end" that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and framegrabbers, gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.

  7. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative measures. The proposed framework aims to standardize the image processing and to compute quantitative measures useful for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, as are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to the integrated study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
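    For the microelectrode-detection step, a circular Hough transform sketch with scikit-image (the radius range and peak count are illustrative assumptions, not the paper's parameters):

        import numpy as np
        from skimage import feature, transform

        def find_electrodes(gray, radii=np.arange(12, 25)):
            """Detect circular microelectrodes in a transmitted-light image
            and return (row, col, radius) per detected circle."""
            edges = feature.canny(gray, sigma=2)             # binary edge map
            hough = transform.hough_circle(edges, radii)
            _, cx, cy, r = transform.hough_circle_peaks(hough, radii,
                                                        total_num_peaks=60)
            return np.column_stack([cy, cx, r])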

  8. Parallel processing and image analysis in the eyes of mantis shrimps.

    Science.gov (United States)

    Cronin, T W; Marshall, J

    2001-04-01

    The compound eyes of mantis shrimps, a group of tropical marine crustaceans, incorporate principles of serial and parallel processing of visual information that may be applicable to artificial imaging systems. Their eyes include numerous specializations for analysis of the spectral and polarizational properties of light, and include more photoreceptor classes for analysis of ultraviolet light, color, and polarization than occur in any other known visual system. This is possible because receptors in different regions of the eye are anatomically diverse and incorporate unusual structural features, such as spectral filters, not seen in other compound eyes. Unlike eyes of most other animals, eyes of mantis shrimps must move to acquire some types of visual information and to integrate color and polarization with spatial vision. Information leaving the retina appears to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels. Many of these unusual features of mantis shrimp vision may inspire new sensor designs for machine vision.

  9. ECT Image Analysis Methods for Shear Zone Measurements during Silo Discharging Process

    Institute of Scientific and Technical Information of China (English)

    Krzysztof Grudzien; Zbigniew Chaniecki; Andrzej Romanowski; Maciej Niedostatkiewicz; Dominik Sankowski

    2012-01-01

    The paper covers electrical capacitance tomography (ECT) data analysis of shear zones formed during the silo discharging process, owing to the aptitude of ECT for detecting slight changes of material concentration. On the basis of ECT visualisations, wall-adjacent shear zone profiles are analysed for different wall roughness parameters. The analysis of changes in material concentration, based on ECT images, enables calculation of the characteristic parameters of shear zones: size and material concentration inside the shear zone during the dynamic process of silo discharging. In order to verify the methodology, a series of experiments on gravitational flow of bulk solids under various conditions was conducted, with different initial granular material packing densities and silo wall roughness. The investigation shows that increasing the container wall roughness is an effective method for reducing dynamic effects during material discharge, since these effects result from resonance between the hopper construction and the trembling material. Such effects can damage industrial equipment in practical applications and need further investigation.

  10. Post-Disaster Image Processing for Damage Analysis Using GENESI-DR, WPS and Grid Computing

    Directory of Open Access Journals (Sweden)

    Marco Pappalardo

    2011-06-01

    Full Text Available The goal of the two year Ground European Network for Earth Science Interoperations-Digital Repositories (GENESI-DR project was to build an open and seamless access service to Earth science digital repositories for European and world-wide science users. In order to showcase GENESI-DR, one of the developed technology demonstrators focused on fast search, discovery, and access to remotely sensed imagery in the context of post-disaster building damage assessment. This paper describes the scenario and implementation details of the technology demonstrator, which was developed to support post-disaster damage assessment analyst activities. Once a disaster alert has been issued, response time is critical to providing relevant damage information to analysts and/or stakeholders. The presented technology demonstrator validates the GENESI-DR project data search, discovery and security infrastructure and integrates the rapid urban area mapping and the near real-time orthorectification web processing services to support a post-disaster damage needs assessment analysis scenario. It also demonstrates how the GENESI-DR SOA can be linked to web processing services that access grid computing resources for fast image processing and use secure communication to ensure confidentiality of information.

  11. The image processing handbook

    CERN Document Server

    Russ, John C

    2006-01-01

    Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? ·       A new chapter on the human visual process that explains which visual cues elicit a response from the vie

  12. A Practical Approach to Quantitative Processing and Analysis of Small Biological Structures by Fluorescent Imaging

    Science.gov (United States)

    Noller, Crystal M.; Boulina, Maria; McNamara, George; Szeto, Angela; McCabe, Philip M.

    2016-01-01

    Standards in quantitative fluorescent imaging are vaguely recognized and receive insufficient discussion. A common best practice is to acquire images at the Nyquist rate, where the highest signal frequency is assumed to be the highest obtainable resolution of the imaging system. However, this particular standard is set to ensure that all obtainable information is being collected. The objective of the current study was to demonstrate that for quantification purposes, these correctly set acquisition rates can be redundant; instead, the linear size of the objects of interest can be used to calculate sufficient information density in the image. We describe optimized image acquisition parameters and unbiased methods for processing and quantification of medium-size cellular structures. Sections of rabbit aortas were immunohistochemically stained to identify and quantify sympathetic varicosities, >2 μm in diameter. Images were processed to reduce background noise and segment objects using free, open-access software. Calculations of the optimal sampling rate for the experiment were based on the size of the objects of interest. The effect of differing sampling rates and processing techniques on object quantification was demonstrated. Oversampling led to a substantial increase in file size, whereas undersampling hindered reliable quantification. Quantification of raw and incorrectly processed images generated false structures, misrepresenting the underlying data. The current study emphasizes the importance of defining image-acquisition parameters based on the structure(s) of interest. The proposed postacquisition processing steps effectively removed background and noise, allowed for reliable quantification, and eliminated user bias. This customizable, reliable method for background subtraction and structure quantification provides a reproducible tool for researchers across biologic disciplines. PMID:27182204
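    The object-size-based sampling argument amounts to a one-line calculation: choose the pixel size from the smallest object to be quantified rather than from the optical resolution limit. A sketch (the 4-pixels-across criterion is an illustrative assumption, not the paper's exact rule):

        def sufficient_pixel_size(object_size_um, pixels_across=4):
            """Pixel size (um) that still quantifies objects of a given
            linear size, derived from the object size rather than from
            Nyquist sampling of the optics."""
            return object_size_um / pixels_across

        # varicosities > 2 um across: ~0.5 um pixels suffice for counting
        # and sizing, even though Nyquist sampling of the optics would
        # demand finer pixels and far larger files
        print(sufficient_pixel_size(2.0))   # -> 0.5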

  13. Image processing occupancy sensor

    Science.gov (United States)

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  14. Scilab and SIP for Image Processing

    CERN Document Server

    Fabbri, Ricardo; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of image processing and analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic directions to install Scilab and SIP are given, along with a mini-tutorial on Scilab. Three practical examples of image analysis are presented, in increasing degrees of complexity, showing how advanced image analysis techniques seem uncomplicated in this environment.

  15. Image processing for electron microscopy single-particle analysis using XMIPP.

    Science.gov (United States)

    Scheres, Sjors H W; Núñez-Ramírez, Rafael; Sorzano, Carlos O S; Carazo, José María; Marabini, Roberto

    2008-01-01

    We describe a collection of standardized image processing protocols for electron microscopy single-particle analysis using the XMIPP software package. These protocols allow performing the entire processing workflow starting from digitized micrographs up to the final refinement and evaluation of 3D models. A particular emphasis has been placed on the treatment of structurally heterogeneous data through maximum-likelihood refinements and self-organizing maps as well as the generation of initial 3D models for such data sets through random conical tilt reconstruction methods. All protocols presented have been implemented as stand-alone, executable python scripts, for which a dedicated graphical user interface has been developed. Thereby, they may provide novice users with a convenient tool to quickly obtain useful results with minimum efforts in learning about the details of this comprehensive package. Examples of applications are presented for a negative stain random conical tilt data set on the hexameric helicase G40P and for a structurally heterogeneous data set on 70S Escherichia coli ribosomes embedded in vitrified ice.

  16. Assessment of hydrocephalus in children based on digital image processing and analysis

    Directory of Open Access Journals (Sweden)

    Fabijańska Anna

    2014-06-01

    Full Text Available Hydrocephalus is a pathological condition of the central nervous system which often affects neonates and young children. It manifests itself as an abnormal accumulation of cerebrospinal fluid within the ventricular system of the brain, with its subsequent progression. One of the most important diagnostic methods for identifying hydrocephalus is Computer Tomography (CT): the enlarged ventricular system is clearly visible on CT scans. However, assessment of the disease's progress usually relies on the radiologist's judgment and manual measurements, which are subjective, cumbersome and of limited accuracy. Therefore, this paper addresses the problem of semi-automatic assessment of hydrocephalus using image processing and analysis algorithms. In particular, automated determination of popular indices of disease progress is considered. Algorithms for the detection, semi-automatic segmentation and numerical description of the lesion are proposed. Specifically, the disease progress is determined using shape analysis algorithms. Numerical results provided by the introduced methods are presented and compared with those calculated manually by a radiologist and a trained operator. The comparison proves the correctness of the introduced approach.

  17. Quantum image processing?

    Science.gov (United States)

    Mastriani, Mario

    2017-01-01

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as quantum image processing. The most serious problem is the recovery of the outcomes after quantum measurement, which this work demonstrates to be equivalent to a noise measurement, and which is not considered in the literature on the subject. This is due to several factors: (1) a classical algorithm that uses Dirac's notation and is then coded in MATLAB does not constitute a quantum algorithm; (2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; (3) the literature does not mention how to implement these proposed internal representations in a practical way (in the laboratory); (4) given that quantum image processing works with generic qubits, it logically requires measurements along all axes of the Bloch sphere; and (5) others. In return, the technique known as quantum Boolean image processing is mentioned, which works exclusively with computational basis states (CBS). This methodology allows us to avoid the problem of quantum measurement, which alters the measured results except in the case of CBS. What has been said so far extends to quantum algorithms outside image processing as well.

  18. Performance analysis of massively parallel embedded hardware architectures for retinal image processing

    OpenAIRE

    Osorio Roberto; Nieto Alejandro; Brea Victor; Vilariño David

    2011-01-01

    This paper examines the implementation of a retinal vessel tree extraction technique on different hardware platforms and architectures. Retinal vessel tree extraction is a representative application from the domain of medical image processing. The low signal-to-noise ratio of the images leads to a large number of low-level tasks in order to meet the accuracy requirements. In some applications, this might compromise computing speed. This paper is focused on the assessment...

  19. Image post-processing in dental practice.

    Science.gov (United States)

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-10-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be practiced in the dental office with a computer and image processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations, to help them understand the role that the technology can play in their practices.

  20. Local study of defects during sintering of UO2: image processing and quantitative analysis tools

    OpenAIRE

    Eric Girard; Jean-Marc Chaix; François Valdivieso; Patrice Goeuriot; Jacques Lechelle

    2008-01-01

    This paper describes the image analysis tools developed and used to quantify the local effects of heterogeneities during sintering of ceramic materials used in nuclear fuels. Specific materials, containing a controlled dispersion of well-defined heterogeneities (dense or porous aggregates) in the ceramic matrix, have been prepared and sintered. In order to characterize the materials in the vicinity of these likely isolated heterogeneities, large SEM images are first acquired around hetero...

  1. Analysis of grid performance using an optical flow algorithm for medical image processing

    Science.gov (United States)

    Moreno, Ramon A.; Cunha, Rita de Cássio Porfírio; Gutierrez, Marco A.

    2014-03-01

    The development of bigger and faster computers has not yet provided the computing power required for medical image processing today. This is the result of several factors, including: i) the increasing number of qualified medical image users requiring sophisticated tools; ii) the demand for more performance and quality of results; iii) researchers addressing problems that were previously considered extremely difficult; and iv) medical images being produced at higher resolution and in larger numbers. These factors lead to the need to explore computing techniques that can boost the computational power of healthcare institutions while maintaining a relatively low cost. Parallel computing is one approach that can help solve this problem. Parallel computing can be achieved using multi-core processors, multiple processors, Graphical Processing Units (GPU), clusters or grids. In order to gain the maximum benefit from parallel computing, it is necessary to write specific programs for each environment or to divide the data into smaller subsets. In this article we evaluate the performance of two parallel computing tools when dealing with a medical image processing application. We compared the performance of the EELA-2 (E-science grid facility for Europe and Latin America) grid infrastructure with a small cluster (3 nodes x 8 cores = 24 cores) and a regular PC (Intel i3, 2 cores). As expected, the grid had better performance for a large number of processes, the cluster for a small to medium number of processes, and the PC for few processes.
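    As a point of reference for the kind of optical flow workload benchmarked here, dense Farneback flow between two frames via OpenCV (the file names and parameter values below are placeholders, not the study's configuration):

        import cv2
        import numpy as np

        # two consecutive grayscale frames (file names are placeholders)
        prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
        next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

        # dense per-pixel displacement field, shape (H, W, 2); positional
        # args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        speed = np.hypot(flow[..., 0], flow[..., 1])   # pixels per frame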

  2. Image processing of 2D crystal images.

    Science.gov (United States)

    Arheit, Marcel; Castaño-Díez, Daniel; Thierry, Raphaël; Gipson, Bryant R; Zeng, Xiangyan; Stahlberg, Henning

    2013-01-01

    Electron crystallography of membrane proteins uses cryo-transmission electron microscopy to image frozen-hydrated 2D crystals. The processing of recorded images exploits the periodic arrangement of the structures in the images to extract the amplitudes and phases of diffraction spots in Fourier space. However, image imperfections require a crystal unbending procedure to be applied to the image before evaluation in Fourier space. We here describe the process of 2D crystal image unbending, using the 2dx software system.
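    The Fourier-space evaluation step, before any unbending, can be illustrated by reading amplitudes and phases at the strongest diffraction peaks (Python/NumPy sketch; the peak count is an assumption and no lattice indexing is performed):

        import numpy as np

        def diffraction_peaks(image, n_peaks=20):
            """Amplitude and phase at the strongest Fourier peaks of a 2D
            crystal image (assumes a well-ordered lattice)."""
            F = np.fft.fftshift(np.fft.fft2(image))
            amp = np.abs(F)
            cy, cx = amp.shape[0] // 2, amp.shape[1] // 2
            amp[cy, cx] = 0                                   # suppress the DC term
            idx = np.argsort(amp, axis=None)[::-1][:n_peaks]  # strongest spots
            ys, xs = np.unravel_index(idx, amp.shape)
            return [(y, x, amp[y, x], np.angle(F[y, x])) for y, x in zip(ys, xs)]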

  3. Spatial Distribution Analysis of Soil Properties in Varzaneh Region of Isfahan Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    F. Mahmoodi

    2016-02-01

    annual evaporation rate is 3265 mm. In this study, image processing techniques including band combinations, Principal Component Analysis (PC1, PC2 and PC3) and classification were applied to a TM image to map different soil properties. In order to prepare the satellite image, geometric correction was performed. A 1:25,000 map (UTM zone 39) was used as a base to georegister the Landsat image. 40 Ground Control Points (GCPs) were selected throughout the map and image; road intersections and other man-made features were appropriate targets for this purpose. The raw image was transformed to the georectified image using a first-order polynomial and then resampled using the nearest neighbour method to preserve radiometry. The final Root Mean Square (RMS) error for the selected points was 0.3 pixels. To establish relationships between image and field data, stratified random sampling was used to collect 53 soil samples at GPS (Global Positioning System) points. Continuous maps of soil properties were derived using simple and multiple linear regression models, averaging the 9 image pixels around each sampling site. Image spectral indices were used as independent variables and the field-based data as dependent variables. Results and Discussion: The results of multiple regression analysis showed that the strongest relationship was between sandy soil and TM bands 1, 2, 3, 4 and 5, explaining up to 83% of the variation in this component. The weakest relationship was found between CaCO3 and TM bands 3, 5 and 7. In some cases multiple regression was not an appropriate predictive model of soil properties; therefore, the TM and PC bands that had the highest relationship with field data (confidence level 99%, based on simple regression) were classified by the maximum likelihood algorithm. According to the error matrix, the overall accuracy of the classified maps was between 85 and 93%, for the chlorine (Cl) and silt components respectively. Conclusions: The results indicated that
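    The band-to-property relationship described above is a standard multiple linear regression; a sketch with scikit-learn (the arrays below are random stand-ins for the 53 field samples, not the study's data):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # hypothetical training data: mean reflectance of the 9 pixels
        # around each sampling site (columns = TM bands) and the measured
        # soil property at that site
        bands = np.random.rand(53, 5)          # stand-in for TM bands 1-5
        sand_pct = np.random.rand(53) * 100    # stand-in for measured sand %

        model = LinearRegression().fit(bands, sand_pct)
        print("R^2:", model.score(bands, sand_pct))

        # applying the model to every pixel yields a continuous property map:
        # sand_map = model.predict(image.reshape(-1, 5)).reshape(h, w)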

  4. Organoleptic damage classification of potatoes with the use of image analysis in production process

    Science.gov (United States)

    Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.

    2014-04-01

    In the agro-food sector, food safety is required, so farms are inspected against production quality standards in all sectors of production. Farms must meet the requirements dictated by the legal regulations in force in the European Union. Currently, manufacturers seek to make their food products unbeatable, which gives them the chance to establish their own brand on the market. In addition, they use technologies that can increase the scale of production, and in the manufacturing process they tend to maintain a high level of product quality. Potatoes may be included in this group of agricultural products. The potato has become one of the major and most popular edible plants: globally, 60% of potatoes are used for consumption, and in Poland 40%. This is primarily due to their consumer and nutritional qualities. Potatoes are easy to digest; a medium-sized potato larger than 60 mm in diameter contains only about 60 calories and very little fat, and is a source of many vitamins such as vitamin C, vitamin B1, vitamin B2 and vitamin E [1]. The parameters of consumer quality, called organoleptic (sensory) properties, are evaluated by means of the sensory organs using the point method. The most important are flavor, flesh color, and darkening of the tuber flesh when raw and after cooking. In the production process it is important to prepare potatoes adequately and accurately for use and sale. The quality of potatoes is evaluated on the basis of organoleptic quality standards, and there is therefore a need to automate this process using appropriate tools: image analysis and classification models based on artificial neural networks that help assess the quality of potatoes [2, 3, 4].

  5. Image Processing for Teaching.

    Science.gov (United States)

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  6. Automated image analysis with the potential for process quality control applications in stem cell maintenance and differentiation.

    Science.gov (United States)

    Smith, David; Glen, Katie; Thomas, Robert

    2016-01-01

    The translation of laboratory processes into scaled production systems suitable for manufacture is a significant challenge for cell-based therapies; in particular, there is a lack of analytical methods that are informative and efficient for process control. Here the potential of image analysis as one part of the solution to this issue is explored, using pluripotent stem cell colonies as a valuable and challenging exemplar. The Cell-IQ live cell imaging platform was used to build image libraries of morphological culture attributes such as colony "edge," "core periphery" or "core" cells. Conventional biomarkers, such as Oct3/4, Nanog, and Sox-2, were shown to correspond to specific morphologies using immunostaining and flow cytometry techniques. Quantitative monitoring of these morphological attributes in-process using the reference image libraries showed rapid sensitivity to changes induced by different media exchange regimes or the addition of the mesoderm-lineage-inducing cytokine BMP4. The relationship between imaging sample size and precision was defined for each morphological attribute to show that this sensitivity could be achieved with a relatively small imaging sample. Further, the morphological state of single colonies could be correlated with individual colony outcomes; smaller colonies were identified as optimal for homogeneous early mesoderm differentiation, while larger colonies maintained a morphologically pluripotent core. Finally, we show the potential of the same image libraries to assess cell number in culture with accuracy comparable to sacrificial digestion and counting. The data support a potentially powerful role for quantitative image analysis in the setting of in-process specifications, and also for screening the effects of process actions during development, which is highly complementary to current analysis in optimization and manufacture.

  7. A three-dimensional multivariate image processing technique for the analysis of FTIR spectroscopic images of multiple tissue sections

    Directory of Open Access Journals (Sweden)

    Evans Corey J

    2006-10-01

    Full Text Available Background: Three-dimensional (3D) multivariate Fourier Transform Infrared (FTIR) image maps of tissue sections are presented. A villoglandular adenocarcinoma from a cervical biopsy with a number of interesting anatomical features was used as a model system to demonstrate the efficacy of the technique. Methods: Four FTIR images of adjacent tissue sections, recorded using a focal plane array detector, were stitched together using a MATLAB® routine and placed in a single data matrix for multivariate analysis using Cytospec™. Unsupervised Hierarchical Cluster Analysis (UHCA) was performed simultaneously on all 4 sections and 4 clusters plotted. The four UHCA maps were then stacked together and interpolated with a box function using SCIRun software. Results: The resulting 3D images can be rotated in three dimensions, sliced and made semi-transparent to view the internal structure of the tissue block. A number of anatomical and histopathological features including connective tissue, red blood cells, inflammatory exudate and glandular cells could be identified in the cluster maps and correlated with Hematoxylin & Eosin stained sections. The mean extracted spectra from individual clusters provide macromolecular information on tissue components. Conclusion: 3D multivariate imaging provides a new avenue to study the shape and penetration of important anatomical and histopathological features based on the underlying macromolecular chemistry and therefore has clear potential in biology and medicine.
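    UHCA on stitched sections reduces to hierarchical clustering of the pooled pixel spectra. A sketch (Python/SciPy; four clusters follow the paper, while Ward linkage is only a common choice for FTIR UHCA, not necessarily Cytospec's setting):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def uhca(spectra, n_clusters=4):
            """Hierarchical clustering of FTIR pixel spectra (rows = pixels,
            columns = wavenumbers); practical only on a sub-sample, since
            linkage on full images needs too much memory."""
            Z = linkage(spectra, method='ward')
            return fcluster(Z, t=n_clusters, criterion='maxclust')

        # pooling the stitched sections before clustering gives all sections
        # one consistent set of cluster labels:
        # labels = uhca(np.vstack([sec1, sec2, sec3, sec4]))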

  8. Electron Microscopy and Image Processing: Essential Tools for Structural Analysis of Macromolecules.

    Science.gov (United States)

    Belnap, David M

    2015-11-02

    Macromolecular electron microscopy typically depicts the structures of macromolecular complexes ranging from ∼200 kDa to hundreds of MDa. The amount of specimen required, a few micrograms, is typically 100 to 1000 times less than needed for X-ray crystallography or nuclear magnetic resonance spectroscopy. Micrographs of frozen-hydrated (cryogenic) specimens portray native structures, but the original images are noisy. Computational averaging reduces noise, and three-dimensional reconstructions are calculated by combining different views of free-standing particles ("single-particle analysis"). Electron crystallography is used to characterize two-dimensional arrays of membrane proteins and very small three-dimensional crystals. Under favorable circumstances, near-atomic resolutions are achieved. For structures at somewhat lower resolution, pseudo-atomic models are obtained by fitting high-resolution components into the density. Time-resolved experiments describe dynamic processes. Electron tomography allows reconstruction of pleiomorphic complexes and subcellular structures and modeling of macromolecules in their cellular context. Significant information is also obtained from metal-coated and dehydrated specimens. Copyright © 2015 John Wiley & Sons, Inc.

  9. A standard data set for performance analysis of advanced IR image processing techniques

    NARCIS (Netherlands)

    Weiss, A.R.; Adomeit, U.; Chevalier, P.; Landeau, S.; Bijl, P.; Champagnat, F.; Dijk, J.; Göhler, B.; Landini, S.; Reynolds, J.P.; Smith, L.N.

    2012-01-01

    Modern IR cameras are increasingly equipped with built-in advanced (often non-linear) image and signal processing algorithms (like fusion, super-resolution, dynamic range compression etc.) which can tremendously influence performance characteristics. Traditional approaches to range performance model

  10. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    Science.gov (United States)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, benefiting from image processing that does not suffer obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) it visualizes the disposition of the depth by movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index, in the inter-radicular region and the region outside it. Experimental results on two cases of 3-D dental CT images, and a comparison of the results with the clinical examination results and experts' measurements for the corresponding patients, confirmed that the proposed system gives satisfactory results, including a resorption measurement (probing) error of 0.1 to 0.6 mm and fairly intuitive presentation of measurement and calculation results.

  11. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  12. Effect of feed processing on size of (washed) faeces particles from pigs measured by image analysis

    DEFF Research Database (Denmark)

    Nørgaard, Peder; Kornfelt, Louise Foged; Hansen, Christian Fink

    2005-01-01

    of particles from the sieving fractions were scanned and the length and width of individual particles were identified using image analysis software. The overall mean, mode and median were estimated from a composite function. The dietary physical characteristics significantly affected the proportion of faecal...

  13. Statistical Image Processing.

    Science.gov (United States)

    1982-11-16

    Keywords: spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  14. A comparative analysis of pre-processing techniques in colour retinal images

    Energy Technology Data Exchange (ETDEWEB)

    Salvatelli, A [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Bizai, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Barbosa, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Drozdowicz, B [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Delrieux, C [Electric and Computing Engineering Department, Universidad Nacional del Sur, Alem 1253, BahIa Blanca, (Partially funded by SECyT-UNS) (Argentina)], E-mail: claudio@acm.org

    2007-11-15

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are key for any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images in which these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means and the local Hurst (self-correlation) coefficient for unsupervised segmentation of abnormal blood vessels. The results over a standard set of DR images are more than promising.
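    A minimal illustration of non-uniform illumination compensation, dividing by a smoothed background estimate (Python/SciPy; the Gaussian background model and sigma are our assumptions, whereas the paper itself uses morphology and homomorphic filtering):

        import numpy as np
        from scipy import ndimage as ndi

        def correct_illumination(gray, sigma=50):
            """Flatten non-uniform illumination by dividing out a heavily
            smoothed background estimate (a grey-scale opening with a large
            structuring element is a common morphological alternative)."""
            background = ndi.gaussian_filter(gray.astype(float), sigma)
            flat = gray / (background + 1e-6)
            flat -= flat.min()
            return flat / (flat.max() + 1e-12)   # rescaled to 0..1 for display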

  15. Image processing techniques for acoustic images

    Science.gov (United States)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering and median filtering.

  16. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  17. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    Science.gov (United States)

    Tameem, Hussain Z.; Sinha, Usha S.

    2011-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520

  18. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis.

    Science.gov (United States)

    Tameem, Hussain Z; Sinha, Usha S

    2007-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features.

  19. Image processing analysis of nuclear track parameters for CR-39 detector irradiated by thermal neutron

    Science.gov (United States)

    Al-Jobouri, Hussain A.; Rajab, Mustafa Y.

    2016-03-01

    A CR-39 detector covered with a boric acid (H3BO3) pellet was irradiated by thermal neutrons from a (241Am - 9Be) source with an activity of 12 Ci and a neutron flux of 10⁵ n·cm⁻²·s⁻¹. The detector irradiation times TD were 4 h, 8 h, 16 h and 24 h. The chemical etching solution was 6.25 N sodium hydroxide (NaOH), with a 45 min etching time at 60 °C. Images of the CR-39 detector after chemical etching were taken with a digital camera connected to an optical microscope, and MATLAB version 7.0 was used for image processing. Analysis of the image processing outputs revealed the following relationships: (a) the irradiation time TD has a linear relationship with the following nuclear track parameters: i) total track number NT; ii) maximum track number MRD (relative to track diameter DT) in the response region from 2.5 µm to 4 µm; iii) maximum track number MD (independent of track diameter DT). (b) The irradiation time TD has a logarithmic relationship with the maximum track number MA (independent of track area AT). The image processing technique, principally the track diameter DT, can be taken into account for the classification of α-particle emitters, in addition to contributing to the preparation of nano-filters and nano-membranes in nanotechnology fields.
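
    The counting and sizing steps described above are straightforward to reproduce with open-source tools. The sketch below uses scikit-image rather than the authors' MATLAB 7.0 pipeline: it thresholds an etched-detector micrograph, labels the track pits and extracts their equivalent diameters. The file name and the microns-per-pixel calibration are assumptions.

```python
import numpy as np
from skimage import io, filters, measure, morphology

UM_PER_PX = 0.5  # assumed calibration (microns per pixel)

img = io.imread("cr39_etched.png", as_gray=True)   # hypothetical micrograph

# Etched tracks appear as dark pits: threshold and keep the dark regions.
mask = img < filters.threshold_otsu(img)
mask = morphology.remove_small_objects(mask, min_size=20)  # drop noise specks

labels = measure.label(mask)
props = measure.regionprops(labels)

total_tracks = len(props)                                   # total track number NT
diameters = [p.equivalent_diameter * UM_PER_PX for p in props]

# Track-diameter distribution, e.g. to locate the response region near 2.5-4 um.
counts, edges = np.histogram(diameters, bins=np.arange(0.0, 20.0, 0.5))
```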

  20. Retinomorphic image processing.

    Science.gov (United States)

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some aspects of visual signal processing at the retinal level, while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of the contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher-order derivatives of Gaussian expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals at the pre-processing level, for arriving at better information on light and shade in the edge map. The proposed model also explains many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models or by non-isotropic multi-scale models. The proposed model is easy to implement both in the analog and digital domain. A scheme for implementation in the analog domain generates a new silicon retina

  1. 2-D nonlinear IIR-filters for image processing - An exploratory analysis

    Science.gov (United States)

    Bauer, P. H.; Sartori, M.

    1991-01-01

    A new nonlinear IIR filter structure is introduced and its deterministic properties are analyzed. It is shown to be better suited for image processing applications than its linear shift-invariant counterpart. The new structure is obtained from causality inversion of a 2D quarter-plane causal linear filter with respect to the two directions of propagation. It is demonstrated that, by using this design, a nonlinear 2D lowpass filter can be constructed which is capable of effectively suppressing Gaussian or impulse noise without destroying important image information.

  2. Analysis of mismatched heterointerfaces by combined HREM image processing and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Moebus, G.; Inkson, B.J. [Univ. of Sheffield, Dept. of Engineering Materials, Sheffield (United Kingdom); Levay, A. [Eoetvoes Univ., Dept. of Solid State Physics, Budapest (Hungary); Hytch, M.J. [Centre d' Etudes de Chimie Metallurgique, CNRS, Vitry-sur-Seine (France); Trampert, A. [Paul-Drude-Inst., Berlin (Germany); Wagner, T. [Max-Planck-Inst. fuer Metallforschung, Stuttgart (Germany)

    2003-04-01

    Lattice-mismatched heterointerfaces are classified by a simple five-parameter configuration space which allows the following properties to be quantified: gradual partial coherence, distribution and localisation of misfit dislocations, anisotropy of strain fields, and elastic dissimilarity of the lattices. HREM images are digitally processed into one-dimensional strain and Fourier-spectrum profiles along the interface at selected distances from the interface. The interpretation of these profiles as a Fourier expansion of displacement waves is justified through a link to continuum modelling approaches presented earlier. Limitations and microscope conditions for this simple direct image interpretation approach are listed and discussed. (orig.)

  4. Microfluidic electrochemical device and process for chemical imaging and electrochemical analysis at the electrode-liquid interface in-situ

    Science.gov (United States)

    Yu, Xiao-Ying; Liu, Bingwen; Yang, Li; Zhu, Zihua; Marshall, Matthew J.

    2016-03-01

    A microfluidic electrochemical device and process are detailed that provide chemical imaging and electrochemical analysis under vacuum at the surface of the electrode-sample or electrode-liquid interface in-situ. The electrochemical device allows investigation of various surface layers including diffuse layers at selected depths populated with, e.g., adsorbed molecules in which chemical transformation in electrolyte solutions occurs.

  5. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    , it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data...... of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis......, the application of Gabor expansions to image representation is considered in Sect. 6....

  6. Digital image processing applied to analysis of geophysical and geochemical data for southern Missouri

    Science.gov (United States)

    Guinness, E. A.; Arvidson, R. E.; Leff, C. E.; Edwards, M. H.; Bindschadler, D. L.

    1983-01-01

    Digital image-processing techniques have been used to analyze a variety of geophysical and geochemical map data covering southern Missouri, a region with important basement and strata-bound mineral deposits. Gravity and magnetic anomaly patterns, which have been reformatted to image displays, indicate a deep crustal structure cutting northwest-southeast through southern Missouri. In addition, geologic map data, topography, and Landsat multispectral scanner images have been used as base maps for the digital overlay of aerial gamma-ray and stream sediment chemical data for the 1 x 2-deg Rolla quadrangle. Results indicate enrichment of a variety of elements within the clay-rich alluvium covering many of the interfluvial plains, as well as a complicated pattern of enrichment for the sedimentary units close to the Precambrian rhyolites and granites of the St. Francois Mountains.

  7. Surface defect detection in tiling Industries using digital image processing methods: analysis and evaluation.

    Science.gov (United States)

    Karimi, Mohammad H; Asemani, Davud

    2014-05-01

    Ceramic and tile industries should indispensably include a grading stage to quantify the quality of products. Human inspection is often used for grading purposes, but an automatic grading system is essential to enhance the quality control and marketing of the products. Since there are generally six different types of defects, originating from various stages of tile manufacturing lines, with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks, while others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for identifying large defects such as spots, whereas techniques such as wavelet processing provide an acceptable response for detecting small defects such as pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. The evaluation parameters are also discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated.

  8. Image Processing, Computer Vision, and Deep Learning: new approaches to the analysis and physics interpretation of LHC events

    Science.gov (United States)

    Schwartzman, A.; Kagan, M.; Mackey, L.; Nachman, B.; De Oliveira, L.

    2016-10-01

    This review introduces recent developments in the application of image processing, computer vision, and deep neural networks to the analysis and interpretation of particle collision events at the Large Hadron Collider (LHC). The link between LHC data analysis and computer vision techniques relies on the concept of jet-images, building on the notion of a particle physics detector as a digital camera and the particles it measures as images. We show that state-of-the-art image classification techniques based on deep neural network architectures significantly improve the identification of highly boosted electroweak particles with respect to existing methods. Furthermore, we introduce new methods to visualize and interpret the high-level features learned by deep neural networks that provide discrimination beyond physics-derived variables, adding a new capability to understand physics and to design more powerful classification methods at the LHC.

  9. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
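
    As a concrete illustration of the min-sum difference equations mentioned above, the following sketch computes a discrete (city-block) distance transform with the classic two-pass chamfer recursion, one causal sweep and one anti-causal sweep. It is a textbook version of the idea, not code from the paper.

```python
import numpy as np

def chamfer_distance(fg):
    """City-block distance to the nearest foreground (True) pixel."""
    INF = 10**9
    h, w = fg.shape
    d = np.where(fg, 0, INF).astype(np.int64)
    # Causal (forward) min-sum sweep.
    for i in range(h):
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Anti-causal (backward) min-sum sweep.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```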

  10. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    Science.gov (United States)

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman imaging processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without
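
    The exact Z-LSR formulation is given in the paper; the fragment below is only a loose sketch of the two ingredients named in the abstract, per-pixel z-score normalization followed by a least-squares regression against a reference spectrum, with the residual mapped as image contrast. The array shapes and the choice of reference are assumptions.

```python
import numpy as np

# Stand-in hyperspectral Raman cube: (rows, cols, wavenumber channels).
cube = np.random.rand(64, 64, 256)
spectra = cube.reshape(-1, cube.shape[-1])

# Z-score normalization of every pixel spectrum.
z = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

ref = z.mean(axis=0)                          # assumed reference spectrum
slope = z @ ref / (ref @ ref)                 # least-squares fit per pixel
residual = np.linalg.norm(z - np.outer(slope, ref), axis=1)

contrast = residual.reshape(cube.shape[:2])   # contrast from spectral differences
```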

  11. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). Image

  12. An image processing framework for automated analysis of swimming behavior in tadpoles with vestibular alterations

    Science.gov (United States)

    Zarei, Kasra; Fritzsch, Bernd; Buchholz, James H. J.

    2017-03-01

    Microgravity, as experienced during prolonged space flight, presents a problem for space exploration. Animal models, specifically tadpoles with altered connections of the vestibular ear, allow the examination of the effects of microgravity and can be quantitatively monitored through tadpole swimming behavior. We describe an image analysis framework for performing automated quantification of tadpole swimming behavior. Speckle-reducing anisotropic diffusion is used to smooth tadpole image signals by diffusing noise while retaining edges. A narrow-band level set approach is used for sharp tracking of the tadpole body; using the level set method for interface tracking provides the inherent advantage of level-set-based image segmentation (active contouring). Active contour segmentation is followed by two-dimensional skeletonization, which allows the automated quantification of tadpole deflection angles and, subsequently, tadpole escape (or C-start) response times. The image analysis methodology was evaluated by comparing the automated quantifications of deflection angles to manual assessments (obtained using a standard grading scheme), and produced a high correlation (r2 = 0.99), indicating the high reliability and accuracy of the proposed method. The methods presented form an important element of objective quantification of the escape response of the tadpole vestibular system to mechanical and biochemical manipulations, and can ultimately contribute to a better understanding of the effects of altered gravity perception on humans.
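
    A rough open-source stand-in for this pipeline is sketched below. scikit-image does not ship speckle-reducing anisotropic diffusion or a narrow-band level set, so a bilateral filter and the morphological Chan-Vese active contour are substituted; the frame file name and iteration count are illustrative.

```python
from skimage import io, img_as_float
from skimage.restoration import denoise_bilateral
from skimage.segmentation import morphological_chan_vese
from skimage.morphology import skeletonize

frame = img_as_float(io.imread("tadpole_frame.png", as_gray=True))  # hypothetical

smooth = denoise_bilateral(frame)             # edge-preserving noise reduction
body = morphological_chan_vese(smooth, 50)    # active-contour segmentation, 50 iterations
skeleton = skeletonize(body.astype(bool))     # 2-D skeleton for deflection angles
```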

  13. IMAGE ENHANCEMENT USING IMAGE FUSION AND IMAGE PROCESSING TECHNIQUES

    OpenAIRE

    Arjun Nelikanti

    2015-01-01

    The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of such techniques is greatly influenced by the imaging modality, the task at hand and the viewing conditions. This paper will provide a combination of two concepts, image fusion by DWT and digital image processing techniques. The e...

  14. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". It is used for robotics, image enhancement, image analysis and digital television.
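
    The usual way to realize such compensation digitally is to flag pixels that deviate strongly from their local median and replace only those. A sketch of that idea follows (not the specific NASA algorithm), with the window size and threshold as tunable assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hotspots(img, window=5, thresh=50):
    med = median_filter(img, size=window)            # local median of each neighborhood
    hot = np.abs(img.astype(float) - med) > thresh   # defective (hotspot) pixels
    out = img.copy()
    out[hot] = med[hot]                              # replace flagged pixels only
    return out
```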

  15. Image processing analysis of nuclear track parameters for CR-39 detector irradiated by thermal neutron

    Energy Technology Data Exchange (ETDEWEB)

    Al-Jobouri, Hussain A., E-mail: hahmed54@gmail.com; Rajab, Mustafa Y., E-mail: mostafaheete@gmail.com [Department of Physics, College of Science, AL-Nahrain University, Baghdad (Iraq)

    2016-03-25

    A CR-39 detector covered with a boric acid (H{sub 3}BO{sub 3}) pellet was irradiated by thermal neutrons from a ({sup 241}Am - {sup 9}Be) source with an activity of 12 Ci and a neutron flux of 10{sup 5} n. cm{sup −2}. s{sup −1}. The detector irradiation times T{sub D} were 4 h, 8 h, 16 h and 24 h. The chemical etching solution was 6.25 N sodium hydroxide (NaOH), with a 45 min etching time at 60 °C. Images of the CR-39 detector after chemical etching were taken with a digital camera connected to an optical microscope, and MATLAB version 7.0 was used for image processing. Analysis of the image processing outputs revealed the following relationships: (a) the irradiation time T{sub D} has a linear relationship with the following nuclear track parameters: i) total track number N{sub T}; ii) maximum track number MRD (relative to track diameter D{sub T}) in the response region from 2.5 µm to 4 µm; iii) maximum track number M{sub D} (independent of track diameter D{sub T}). (b) The irradiation time T{sub D} has a logarithmic relationship with the maximum track number M{sub A} (independent of track area A{sub T}). The image processing technique, principally the track diameter D{sub T}, can be taken into account for the classification of α-particle emitters, in addition to contributing to the preparation of nano-filters and nano-membranes in nanotechnology fields.

  16. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
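
    A minimal taste of the library's API, using one of its bundled sample images:

```python
from skimage import data, filters, measure

coins = data.coins()                          # built-in grayscale sample image
edges = filters.sobel(coins)                  # gradient-magnitude edge map
mask = coins > filters.threshold_otsu(coins)  # global Otsu threshold
labels = measure.label(mask)                  # connected-component labeling
print(labels.max(), "connected regions")
```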

  17. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  18. Analysis of Image Processing Related Artifacts in MRI

    Institute of Scientific and Technical Information of China (English)

    王昭波; 李文华; 王立忠; 曹庆选

    2013-01-01

    Objective: To eliminate image-processing-related artifacts in MRI and improve MRI image quality. Methods: MRI images from 5632 examinations performed in our hospital over the most recent year were reviewed; cases showing image-processing-related artifacts were classified and the artifacts analyzed. Results: According to their causes and characteristics, MRI image-processing-related artifacts can be divided into five categories: wrap-around (aliasing) artifacts, chemical shift artifacts, truncation artifacts, partial volume effects, and ghosting. Conclusions: Correctly recognizing the characteristics of these image-processing-related artifacts and applying the corresponding correction methods is important for improving MRI quality and raising diagnostic accuracy.

  19. Performance analysis of massively parallel embedded hardware architectures for retinal image processing

    Directory of Open Access Journals (Sweden)

    Osorio Roberto

    2011-01-01

    Full Text Available Abstract This paper examines the implementation of a retinal vessel tree extraction technique on different hardware platforms and architectures. Retinal vessel tree extraction is a representative application of those found in the domain of medical image processing. The low signal-to-noise ratio of the images leads to a large number of low-level tasks in order to meet the accuracy requirements; in some applications, this might compromise computing speed. This paper is focused on the assessment of the performance of a retinal vessel tree extraction method on different hardware platforms. In particular, the retinal vessel tree extraction method is mapped onto a massively parallel SIMD (MP-SIMD) chip, a massively parallel processor array (MPPA) and a field-programmable gate array (FPGA).

  20. DIGITAL IMAGES PROCESSING IN RADIOGRAPHY

    OpenAIRE

    Pilař, Martin

    2010-01-01

    This thesis is focused primarily on digital image processing and the algorithms of modern imaging modalities. An algorithm is a method or set of instructions for solving a problem; in image processing, an algorithm describes the process from data acquisition to the resulting image displayed on the monitor. Therefore, in the first part of the thesis a brief overview of the principles of the imaging modalities used in radiodiagnostics is given. The collected data have to be analyzed and modelled in a certain way. The...

  1. Simulation and analysis of natural rain in a wind tunnel via digital image processing techniques

    Science.gov (United States)

    Aaron, K. M.; Hernan, M.; Parikh, P.; Sarohia, V.; Gharib, M.

    1986-01-01

    It is desired to simulate natural rain in a wind tunnel in order to investigate its influence on the aerodynamic characteristics of aircraft. Rain simulation nozzles have been developed and tested at JPL. Pulsed laser sheet illumination is used to photograph the droplets in the moving airstream. Digital image processing techniques are applied to these photographs for calculation of rain statistics to evaluate the performance of the nozzles. It is found that fixed hypodermic type nozzles inject too much water to simulate natural rain conditions. A modification uses two aerodynamic spinners to flex a tube in a pseudo-random fashion to distribute the water over a larger area.

  2. 3D digital image processing for biofilm quantification from confocal laser scanning microscopy: Multidimensional statistical analysis of biofilm modeling

    Science.gov (United States)

    Zielinski, Jerzy S.

    The dramatic increase in the number and volume of digital images produced in medical diagnostics, and the escalating demand for rapid access to these relevant medical data, along with the need for interpretation and retrieval, have become of paramount importance to a modern healthcare system. Therefore, there is an ever-growing need for processed, interpreted and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop automated methods for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two- and three-dimensional space. The methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer images, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were represented as non-stationary sequences and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analyses used a 2D-ARMA model and applied it to detect breast cancer in X-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction. Specifically, in biomedical imaging we have four key problems

  3. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    Science.gov (United States)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

    In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data were usually acquired with expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control points, check points distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in case of emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  4. UAV PHOTOGRAMMETRY WITH OBLIQUE IMAGES: FIRST ANALYSIS ON DATA ACQUISITION AND PROCESSING

    Directory of Open Access Journals (Sweden)

    I. Aicardi

    2016-06-01

    Full Text Available In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data were usually acquired with expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control points, check points distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in case of emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  5. Smart Image Enhancement Process

    Science.gov (United States)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
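
    The decision cascade reads naturally as code. The sketch below mirrors the control flow only; the actual contrast, lightness and sharpness measures are defined in the patent, so every helper here is a crude stand-in under stated assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Stand-in measures and operators (the patent defines the real ones).
def contrast(im):  return im.std()
def lightness(im): return im.mean()
def is_turbid(im): return contrast(im) < 0.08 and lightness(im) > 0.6   # assumed test
def poor(im):      return contrast(im) < 0.15 or not 0.3 < lightness(im) < 0.7
def enhance(im):   return (im - im.min()) / (np.ptp(im) + 1e-9)          # contrast stretch
def is_sharp(im):  return np.abs(np.gradient(im)[0]).mean() > 0.01
def sharpen(im):   return np.clip(2 * im - gaussian_filter(im, 2), 0, 1) # unsharp mask

def smart_enhance(im):
    if is_turbid(im):
        im = enhance(im)                       # first enhanced image
    elif poor(im):
        im = enhance(im)                       # second enhanced image
        if poor(im):
            im = enhance(im)                   # third enhanced image
    if not is_sharp(im):
        im = sharpen(im)                       # final sharpening stage
    return im
```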

  6. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful for analyzing bioimages. Although this paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration in tackling such a difficult target.

  7. A Low-Cost System Based on Image Analysis for Monitoring the Crystal Growth Process

    Directory of Open Access Journals (Sweden)

    Fabrício Venâncio

    2017-05-01

    Full Text Available Many techniques are used to monitor one or more of the phenomena involved in the crystallization process. One of the challenges in crystal growth monitoring is finding techniques that allow direct interpretation of the data. The present study used a low-cost system, composed of a commercial webcam and a simple white LED (Light Emitting Diode) illuminator, to follow the calcium carbonate crystal growth process. The experiments were also followed with focused beam reflectance measurement (FBRM), a common technique for obtaining information about the formation and growth of crystals. The images obtained in real time were processed in the red, green, and blue (RGB) colour system. The results showed a qualitative response of the system to crystal formation and growth, as an observed decrease in the signal accompanied the growth process. Control of the crystal growth was managed by increasing the viscosity of the test solution with the addition of monoethylene glycol (MEG) at 30% and 70% on a mass-to-mass basis, providing different profiles of the average RGB curves. The decrease in the average RGB value became slower as the concentration of MEG was increased; this reflected a lag in the growth process that was confirmed by the FBRM.
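
    The monitoring loop itself is only a few lines. The sketch below averages the R, G and B channels of successive webcam frames, the quantity whose decrease tracks crystal growth in the study; the file names and frame count are assumptions.

```python
import numpy as np
from skimage import io

means = []
for t in range(100):                                 # hypothetical frame sequence
    frame = io.imread(f"frame_{t:04d}.png")          # webcam image, assumed RGB(A)
    rgb = frame[..., :3].reshape(-1, 3)              # drop any alpha channel
    means.append(rgb.mean(axis=0))                   # per-channel mean

means = np.array(means)          # columns: R, G, B averages over time
rgb_avg = means.mean(axis=1)     # overall RGB average; falls as crystals grow
```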

  8. Fractal analysis of granular ore media based on computed tomography image processing

    Institute of Scientific and Technical Information of China (English)

    WU Ai-xiang; YANG Bao-hua; ZHOU Xu

    2008-01-01

    The cross-sectional images of nine groups of ore samples were obtained by an X-ray computed tomography (CT) scanner. Based on CT image analysis, the fractal dimensions of the solid matrix, pore space and matrix/pore interface of each sample were measured using the box-counting method. The correlation of the three fractal dimensions with particle size, porosity, and seepage coefficient was investigated. The results show that, for all images of these samples, the matrix phase has the highest dimension, followed by the pore phase, and the dimension of the matrix-pore interface has the smallest value; the dimensions of the matrix phase and matrix-pore interface are negatively and linearly correlated with porosity, while the dimension of the pore phase relates positively and linearly with porosity; and the fractal dimension of the matrix-pore interface relates negatively and linearly with the seepage coefficient. A larger fractal dimension of the matrix/pore interface indicates more irregular, complicated channels for solution flow, resulting in low permeability.
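
    The box-counting method used above has a compact implementation: cover a binary phase with boxes of decreasing size, count the occupied boxes, and take the slope of log N versus log(1/s). The following sketch is a standard version of that procedure, not the authors' code.

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Fractal dimension of the True phase of a 2-D binary image."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes touching the phase
    # N(s) ~ s^-D, so D is the slope of log N against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```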

  9. Computerized Processing and Analysis of CT Images for Developing a New Criterion in COPD Diagnosis

    CERN Document Server

    Hosseini, Mohammad-Parsa; Akhlaghpoor, Shahram

    2016-01-01

    Background: Chronic obstructive pulmonary disease (COPD) is one of the most prevalent and dangerous pulmonary diseases in the world; it is forecast to become the third-deadliest disease in the future. Therefore, developing non-invasive methods for diagnosing the disease would be helpful for physicians and patients. Methods: Based on clinical investigations and spirometry tests, ten adult patients with COPD (6 male and 4 female) with a mean age of 49.8 years were enrolled as the case group. In addition, ten age- and sex-matched healthy, non-COPD individuals (6 male and 4 female) with a mean age of 45.4 years were recruited as controls. Lung CT-scan images of the subjects were processed and analyzed by computer to find a relationship. Findings: The variation in the elasticity of the lung parenchyma was obtained with digital image processing. The normalized average of this pattern was found to be 21.6% in patients and 40.7% in controls. In addition, normalized mean value of Hounsfield unit variations in square 10...

  10. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  11. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Science.gov (United States)

    Collette, R.; King, J.; Buesch, C.; Keiser, D. D.; Williams, W.; Miller, B. D.; Schulthess, J.

    2016-07-01

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.

  12. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  13. No-reference analysis of decoded MPEG images for PSNR estimation and post-processing

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Li, Huiying; Andersen, Jakob Dahl

    2011-01-01

    stream. Solutions are presented for MPEG-2 video. A method to estimate the quantization parameters of DCT coded images and MPEG I-frames at the macro-block level is presented. The results of this analysis is used for deblocking and deringing artifact reduction and no-reference PSNR estimation without...... code stream access. An adaptive deringing method using texture classification is presented. On the test set, the quantization parameters in MPEG-2 I-frames are estimated with an overall accuracy of 99.9% and the PSNR is estimated with an overall average error of 0.3dB. The deringing and deblocking...... algorithms yield improvements of 0.3dB on the MPEG-2 decoded test sequences....

  14. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of the experimental results and analysis carried out on medical images of diabetes-related conditions. The experimental investigations have been carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. The book is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  15. Analysis of Spark Plug Gap on Flame Development using Schlieren Technique and Image Processing

    Science.gov (United States)

    Hii Shu-Yi, Paul; Khalid, Amir; Mohamad, Anuar; Manshoor, Bukhari; Sapit, Azwan; Zaman, Izzuddin; Hashim, Akasha

    2016-11-01

    The gasoline spark-ignition engine remains one of the main consumers of fuel in the world today. During the combustion process, the spark plug is one of the key components of a gasoline engine. Incompatibility between the spark plug gap width and the fuel used causes backfire and knocking in the combustion engine. Thus, the spark plug gap was studied with a focus on controlling the combustion process to improve the performance of the engine. The main purpose of this research is to investigate the effect of the spark plug air gap on flame development. The parameters studied include the spark plug air gap width (1.0 mm, 1.2 mm, 1.4 mm, 1.6 mm and 1.8 mm), the injection pressure (0.3 MPa, 0.4 MPa, 0.5 MPa and 0.6 MPa) and flame characteristics such as the flame front area and the flame intensity. The flame front area for different spark plug gaps and injection pressures was investigated through the Schlieren photography method, and the Schlieren images were analysed over time. The experimental results proved that increasing the spark plug gap width leads to better flame development in a shorter time, while increasing the chance of misfire.

  16. Use of image-processing tools for texture analysis of high-energy X-ray synchrotron data

    DEFF Research Database (Denmark)

    Fisker, Rune; Poulsen, Henning Friis; Schou, Jørgen

    1998-01-01

    , the background may vary substantially on a local scale as a result of inhomogeneities in the sample environment etc. A set of image-processing tools has been employed to overcome these complications. An automatic procedure for estimating the parameters of the traces (taken as ellipses) is described, based...... on a combination of a circular Hough transform and nonlinear least-squares fitting. Using the estimated ellipses the background is subtracted and the intensity along the Debye-Scherrer cones is integrated by a combined fit of the local diffraction pattern. The corresponding algorithms are presented together...... with the necessary coordinate transform for pole-figure determination. The image-processing tools may be useful for the analysis of noisy or partial powder diffraction data-sets in general, provided flat two-dimensional detectors are used....

  17. NEW HUMAN SEMEN ANALYSIS SYSTEM (CASA USING MICROSCOPIC IMAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    N M Chaudhari

    2016-11-01

    Full Text Available Computer-assisted semen analysis (CASA) helps the pathologist or fertility specialist to evaluate human semen. Detailed analysis of spermatozoa, including morphology and motility, is very important in the process of intrauterine insemination (IUI) or in-vitro fertilization (IVF) for infertile couples. The main objectives of this new semen analysis system are to provide a low-cost solution to the pathologist and gynecologist for routine raw semen analysis, to find the concentration of the semen with dynamic background removal, and to classify the spermatozoa type (grade) according to motility and structural abnormality as per the WHO criteria. In this paper a new computer-assisted semen analysis system is proposed in which a hybrid approach is used to identify moving objects, and a scan-line algorithm is applied to confirm which objects have tails, so that the actual number of spermatozoa can be counted. For background removal, a dynamic background generation algorithm is proposed to create a background for the background subtraction stage. A standard data set was created at 40× and 100× magnification from different raw semen samples. To test the efficiency of the proposed algorithm, the same frames were applied to the existing algorithm. Another module of the system is focused on finding the motility and type classification of individual spermatozoa.
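
    The dynamic background generation stage can be approximated with a running-average model: the background is updated slowly, so that motile spermatozoa stand out in the frame difference. A sketch under that assumption follows (the paper's exact algorithm may differ).

```python
import numpy as np

def moving_object_masks(frames, alpha=0.05, thresh=25):
    """frames: iterable of grayscale images as uint8 arrays."""
    frames = iter(frames)
    bg = next(frames).astype(float)            # initial background estimate
    masks = []
    for f in frames:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)  # foreground = moving objects
        bg = (1 - alpha) * bg + alpha * f      # slow background update
    return masks
```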

  18. Amplitude image processing by diffractive optics.

    Science.gov (United States)

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  19. Medical Image Analysis Facility

    Science.gov (United States)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to the diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  20. Image Analysis in CT Angiography

    NARCIS (Netherlands)

    Manniesing, R.

    2006-01-01

    In this thesis we develop and validate novel image processing techniques for the analysis of vascular structures in medical images. First a new type of filter is proposed which is capable of enhancing vascular structures while suppressing noise in the remainder of the image. This filter is based on

  1. Analysis on Image Processing of Human Hip Joints during Lifting Using MAT Lab and ANSYS

    Directory of Open Access Journals (Sweden)

    N. Sundaram

    2013-08-01

    Full Text Available Painful human joints exhibit abnormal motion during movement, and vice versa. Many patients suffer from joint pain, affecting joints such as the hip, knee, foot, shoulder, elbow, and wrist. Patients suffering from joint disorders visit a therapist, who must correlate all of these information sources regarding joint problems. Roughly one third of all jobs in industry involve manual material handling (MMH), which poses risks to many workers and causes back pain, joint pain and other problems in the knee, wrist, shoulder and other joints. A finite element model is used to study the stress in human joints. Image processing techniques using soft computing tools such as MATLAB and ANSYS are employed. A biomedical model has been used to optimize the lifting posture for minimum effort; this model is also used to predict the lifting load for every individual. This study can be extended to the loading of muscles.

  2. Singularity Analysis: a powerful image processing tool in remote sensing of the oceans

    Science.gov (United States)

    Turiel, A.; Umbert, M.; Hoareau, N.; Ballabrera-Poy, J.; Portabella, M.

    2012-04-01

    The study of fully developed turbulence has given rise to new methods for describing real data of scalars submitted to the action of a turbulent flow. The application of this family of methodologies (known as the Microcanonical Multifractal Formalism, MMF) to remote sensing ocean maps opens new ways to exploit those data for oceanographic purposes. The main technique in MMF is Singularity Analysis (SA). By means of SA, a singularity exponent is assigned to each point of a given image. The singularity exponent of a given point is a dimensionless measure of the regularity or irregularity of the scalar at that point. Singularity exponents arrange themselves in singularity lines, which accurately track the flow streamlines from any scalar, as we have verified with remote sensing and simulated data. Applications of SA include the quality assessment of different products, the estimation of surface velocities, the development of fusion techniques for different types of scalars, comparison with measures of ocean mixing, and improvements in assimilation schemes.
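
    A toy version of SA can convey the idea: measure how a gradient-based quantity scales with window size around each pixel and take the per-pixel log-log slope as a proxy exponent. This is a simplified stand-in for the full MMF machinery, with all parameters illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def singularity_proxy(field, radii=(1, 2, 4, 8)):
    gx, gy = np.gradient(field.astype(float))
    g = np.hypot(gx, gy) + 1e-12                  # gradient-modulus density
    sizes = np.array([2 * r + 1 for r in radii])
    # Measure of each (2r+1)-box = box mean times box area.
    logs = np.stack([np.log(uniform_filter(g, int(s)) * s**2) for s in sizes])
    # Per-pixel least-squares slope of log(measure) vs log(size).
    x = np.log(sizes) - np.log(sizes).mean()
    slope = (x[:, None, None] * (logs - logs.mean(axis=0))).sum(axis=0) / (x**2).sum()
    return slope            # lower values flag sharper, more singular structure
```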

  3. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  4. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, and military reconnaissance... A number of techniques have been suggested for the restoration of degraded images, such as the inverse filter, Wiener filter and constrained least-squares filter. The primary objective of scene analysis is to deduce from a single two-dimensional image...

  5. Processing of MRI images weighted in TOF for blood vessels analysis: 3-D reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, J.; Cordova F, T. [Universidad de Guanajuato, Campus Leon, Departamento de Ingenieria Fisica, Loma del Bosque No. 103, Lomas del Campestre, 37150 Leon, Guanajuato (Mexico); Cruz A, I., E-mail: hernandezdj.gto@gmail.com [CONACYT, Centro de Investigacion en Matematicas, A. C., Jalisco s/n, Col. Valenciana, 36000 Guanajuato, Gto. (Mexico)

    2015-10-15

    This paper presents a novel approach based on intensity differences for the identification of vascular structures in medical images from time-of-flight (TOF) MRI studies. The method works on the hypothesis that the high intensities belonging to the vascular system in TOF images can be segmented by thresholding the histogram. Enhancement of the vascular structures is performed using a vesselness filter, after which a decision based on fuzzy thresholding minimizes the error in the selection of vascular structures. A brief introduction is given to vascular system disorders and to how imaging has aided their diagnosis; the physical history of the different imaging modalities and the evolution of digital images with computers are also summarized. Segmentation and 3-D reconstruction were performed on time-of-flight images, which are typically used in the medical diagnosis of cerebrovascular diseases. The proposed method shows less error in the segmentation and reconstruction of volumes related to the vascular system, and produces clearer images with less noise compared with edge detection methods. (Author)
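
    The enhancement and thresholding steps map naturally onto scikit-image: a Frangi vesselness filter followed by a global threshold. The sketch below simplifies the paper's fuzzy-thresholding decision to an Otsu cut, and the input file name is an assumption.

```python
from skimage import io, img_as_float
from skimage.filters import frangi, threshold_otsu

slice2d = img_as_float(io.imread("tof_slice.png", as_gray=True))  # hypothetical TOF slice

vessels = frangi(slice2d, black_ridges=False)   # enhance bright tubular structures
mask = vessels > threshold_otsu(vessels)        # binary vascular mask
```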

  6. Image Processing Research

    Science.gov (United States)

    1975-09-30

    linear. c). The prediction is to be based on a selected small number of past estimates. This will impose a desired limited memory requirement for the... observational errors can lead to oscillatory estimates. Since c is generally quite smooth, it is reasonable to impose some smoothing constraints on... In figure 4, continuous Gaussian noise was added to an image; median filtering resulted in a slight visual improvement. For image enhancement applications

  7. Optical and digital image processing

    CERN Document Server

    Cristobal, Gabriel; Thienpont, Hugo

    2011-01-01

    In recent years, Moore's law has fostered the steady growth of the field of digital image processing, though the computational complexity remains a problem for most of the digital image processing applications. In parallel, the research domain of optical image processing has matured, potentially bypassing the problems digital approaches were suffering and bringing new applications. The advancement of technology calls for applications and knowledge at the intersection of both areas but there is a clear knowledge gap between the digital signal processing and the optical processing communities. T

  8. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  9. Color Medical Image Analysis

    CERN Document Server

    Schaefer, Gerald

    2013-01-01

    Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as x-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.

  10. Trends In Microcomputer Image Processing

    Science.gov (United States)

    Strum, William E.

    1988-05-01

    We have seen, in the last four years, the microcomputer become the platform of choice for many image processing applications. By 1991, Frost and Sullivan forecasts that 75% of all image processing will be carried out on microcomputers. Many factors have contributed to this trend and will be discussed in the following paper.

  11. Faulting evidence of isostatic uplift in the Rincon Mountains metamorphic core complex: An image processing analysis

    Science.gov (United States)

    Rodriguez-Guerra, Edna Patricia

    This study focuses on the application of remote sensing techniques and digital image analysis to the characterization of tectonic features of the Rincon Mountains metamorphic core complex. Data included Landsat Thematic Mapper (TM) images, digital elevation models (DEM), and digital orthophoto quarter quadrangles (DOQQ). The main findings of this study are two nearly orthogonal systems of structures that have not previously been reported in the Rincon Mountains. The first system, a penetrative faulting system of the footwall rocks, trends N10--30°W; similar structures have been identified in other metamorphic core complexes. The second system trends N60--70°E and has only been alluded to indirectly in the literature on metamorphic core complexes. The structures pervade mylonites in Tanque Verde Mountain, Mica Mountain, and the Rincon Peak area. As measured on the imagery, spacing between the N10--30°W lineaments ranges from ~0.5 to 2 km, and from 0.25 to 1 km for the N60--70°E system. Field inspection reveals that the structures of the N10--30°W trending system are high-angle normal faults dipping mainly to the west. One of the main faults, named here the Cabeza de Vaca fault, has a polished, planar, striated and grooved surface with slickenlines indicating pure normal dip-slip movement (N10°W, 83°SW; slickensides rake 85°SW). The Cabeza de Vaca fault is the eastern boundary of a 2 km-wide graben, with displacement as great as 400 meters. The N10--30°W faults are syn- to post-mylonitic, high-angle normal faults that formed during isostatic uplift of the Rincon core complex in mid-Tertiary time. This interpretation is based on previous work reporting similar fault patterns in other metamorphic core complexes. Faults trending N20--30°W shape the east flank of Mica Mountain. These faults, on the back-dipping mylonitic zone, dip east and may represent late-stage antithetic shear zones. The Cabeza de Vaca fault and the back-dipping antithetic faults accommodate as much as 65% of the extension due to...

  12. SWNT Imaging Using Multispectral Image Processing

    Science.gov (United States)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
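
    The record used OpenCV's C++ libraries; the same channel decomposition in OpenCV's Python binding looks as follows (the file names and the commented unmixing weights are placeholders, not values from the paper):

    ```python
    import cv2

    frame = cv2.imread("frame.png")   # hypothetical camcorder frame
    b, g, r = cv2.split(frame)        # three Bayer-derived pseudo-colour channels
    # Given per-channel spectral calibration weights wr, wg, wb (measured
    # beforehand), a narrow-band SWNT signal could be unmixed as
    # wr*r + wg*g + wb*b - background.
    cv2.imwrite("red_channel.png", r)
    ```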

  13. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  14. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...
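
    As a taste of the erosion/dilation topic listed above, a minimal cellular-automaton-style erosion step in Python (the survival rule and the zero boundary condition are assumptions chosen for illustration, not taken from the book):

    ```python
    import numpy as np

    def ca_erode(grid):
        """One synchronous CA step: a cell survives only if it and all four
        von Neumann neighbours are set (binary erosion, cross element)."""
        g = np.pad(grid, 1)                                # zero boundary
        return (g[1:-1, 1:-1] & g[:-2, 1:-1] & g[2:, 1:-1]
                & g[1:-1, :-2] & g[1:-1, 2:])

    grid = np.ones((5, 5), dtype=np.uint8)
    print(ca_erode(grid))   # only the interior 3x3 block survives
    ```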

  15. A cloud solution for medical image processing

    Directory of Open Access Journals (Sweden)

    Ali Mirarab,

    2014-07-01

    Full Text Available The rapid growth in the use of Electronic Health Records across the globe, along with the rich mix of multimedia held within an EHR and the increasing level of detail due to advances in diagnostic medical imaging, means increasing amounts of data can be stored for each patient. Also, the lack of image processing and analysis tools for handling large image datasets has compromised researchers' and practitioners' outcomes. Migrating medical imaging applications and data to the Cloud can allow healthcare organizations to realize significant cost savings relating to hardware, software, buildings, power and staff, in addition to greater scalability, higher performance and resilience. This paper reviews medical image processing and its challenges, and describes cloud computing and its benefits for medical image processing. The paper also introduces tools and methods for processing medical images using the cloud. Finally, a method is provided for medical image processing based on the Eucalyptus cloud infrastructure with the image processing software "ImageJ", using an improved genetic algorithm for the allocation and distribution of resources. Based on the conducted simulations and experimental results, the proposed method brings high scalability, simplicity, flexibility and full customizability, in addition to a 40% cost reduction and a twofold increase in speed.

  16. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, H. De; Kawakatsu, T.

    2000-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  17. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, H; Kawakatsu, T; Landau, DP; Lewis, SP; Schuttler, HB

    2001-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi
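
    The morphological (Minkowski-functional) characterization used in these two records can be approximated with scikit-image (version 0.18 or later); the sketch below returns the three 2-D functionals: area, boundary length and Euler characteristic. The estimators are scikit-image's, not the authors'.

    ```python
    from skimage.measure import euler_number, perimeter

    def minkowski_2d(binary):
        """Three 2-D Minkowski functionals of a binary pattern."""
        area = int(binary.sum())                    # covered area (pixel count)
        boundary = perimeter(binary)                # estimated boundary length
        chi = euler_number(binary, connectivity=2)  # components minus holes
        return area, boundary, chi
    ```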

  18. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    Energy Technology Data Exchange (ETDEWEB)

    Cavalcanti, Marcelo de Gusmao Paraiso [Sao Paulo Univ., SP (Brazil). Faculdade de Odontologia. Dept. de Radiologia; Antunes, Jose Leopoldo Ferreira [Sao Paulo Univ., SP (Brazil). Faculdade de Odontologia. Dept. de Odontologia Social

    2002-09-01

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  19. Basic image analysis and manipulation in ImageJ.

    Science.gov (United States)

    Hartig, Sean M

    2013-01-01

    Image analysis methods have been developed to provide quantitative assessment of microscopy data. In this unit, basic aspects of image analysis are outlined, including software installation, data import, image processing functions, and analytical tools that can be used to extract information from microscopy data using ImageJ. Step-by-step protocols for analyzing objects in a fluorescence image and extracting information from two-color tissue images collected by bright-field microscopy are included.
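
    The object-analysis steps outlined in this unit are performed in ImageJ itself; for readers who prefer a scriptable equivalent, a scikit-image sketch of the same threshold/label/measure pipeline (the function name and min_size value are illustrative assumptions):

    ```python
    from skimage import filters, measure, morphology

    def count_objects(fluor):
        """Segment bright objects in a fluorescence image and report areas."""
        mask = fluor > filters.threshold_otsu(fluor)       # global threshold
        mask = morphology.remove_small_objects(mask, min_size=20)
        labels = measure.label(mask)                       # connected components
        props = measure.regionprops(labels)
        return len(props), [p.area for p in props]
    ```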

  20. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data on the example of using PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or more often on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software dividing all steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of an orthomosaic for both images obtained by a metric Vexcel camera and a block of images acquired by a non-metric UAV system.

  1. Image based performance analysis of thermal imagers

    Science.gov (United States)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming/expensive task, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences might be presented to every unit under test. For turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.

  2. Hyperspectral image analysis. A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Amigo, José Manuel, E-mail: jmar@food.ku.dk [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Babamoradi, Hamid [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Elcoroaristizabal, Saioa [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Chemical and Environmental Engineering Department, School of Engineering, University of the Basque Country, Alameda de Urquijo s/n, E-48013 Bilbao (Spain)

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be covered, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of hyperspectral image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step-by-step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
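
    The PLS-DA step of the tutorial's plastics example can be sketched in Python as PLS regression on one-hot class labels followed by an argmax decision; scikit-learn stands in for the tutorial's chemometrics toolchain, and the function name and n_components are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def plsda_fit_predict(X_train, y_train, X_test, n_components=10):
        """PLS-DA: X is the unfolded hypercube (pixels x wavelengths)."""
        classes = np.unique(y_train)
        Y = (y_train[:, None] == classes[None, :]).astype(float)  # one-hot
        pls = PLSRegression(n_components=n_components).fit(X_train, Y)
        return classes[np.argmax(pls.predict(X_test), axis=1)]
    ```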

  3. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING. Signals and Biomedical Signal Processing: Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview...

  4. Verification of nonlinear dynamic structural test results by combined image processing and acoustic analysis

    Science.gov (United States)

    Tene, Yair; Tene, Noam; Tene, G.

    1993-08-01

    An interactive data fusion methodology of video, audio, and nonlinear structural dynamic analysis for potential application in forensic engineering is presented. The methodology was developed and successfully demonstrated in the analysis of a heavy transportable bridge collapse during preparation for testing. Multiple bridge element failures were identified after the collapse, including fracture, cracks and rupture of high-performance structural materials. A videotape recording from a hand-held camcorder was the only source of information about the collapse sequence. The interactive data fusion methodology resulted in extracting relevant information from the videotape and from dynamic nonlinear structural analysis, leading to a full account of the sequence of events during the bridge collapse.

  5. Contribution of image analysis to the definition of explosibility of fine particles resulting from waste recycling process

    Science.gov (United States)

    Gente, V.; La Marca, F.

    2007-09-01

    In waste recycling processes, the development of comminution technologies is one of the main actions to improve the quality of recycled products. This involves a rise in fine particle production, which could have some effect on the explosibility properties of materials. This paper reports the results of experiments done to examine the explosibility of the fine particles resulting from waste recycling processes. Tests have been conducted on the products derived from milling processes operated under different operating conditions. In particular, the comminution tests have been executed varying the milling temperature by means of refrigerant agents. The materials used in the explosibility tests were different typologies of plastics coming from waste products (PET, ABS and PP), characterized by a size lower than 1 mm. The results of the explosibility tests, carried out by means of a Hartmann apparatus, have been compared with data derived from an image analysis procedure aimed at measuring the morphological characteristics of the particles. For each typology of material, the propensity to explode appears to be correlated not only to particle size, but also to morphological properties linked to the operating conditions of the milling process.

  6. Use of neural image analysis methods in the process to determine the dry matter content in the compost

    Science.gov (United States)

    Wojcieszak, D.; Przybył, J.; Lewicki, A.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Witaszek, K.

    2015-07-01

    The aim of this research was to investigate the possibility of using computer image analysis methods and artificial neural networks to assess the amount of dry matter in the tested compost samples. The research led to the conclusion that neural image analysis may be a useful tool for determining the quantity of dry matter in compost. The generated neural model may be the beginning of research into the use of neural image analysis to assess the content of dry matter and other constituents of compost. The presented RBF 19:19-2-1:1 model, characterized by a test error of 0.092189, may be the most efficient.

  7. COMPARATIVE ANALYSIS OF KIRLIANOGRAFIIA IMAGES GLOW OF BIOLOGICAL TISSUES WITH BIOCHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    L. A. Pisotska

    2015-12-01

    the investigated samples. For the Kirlianographic studies an experimental device, RIVERS 1, developed by the Ukrainian Scientific Research Institute of Mechanical Engineering Technologies (Dnepropetrovsk), was used. For mathematical processing of the results the Matlab program was used. A growing shortage of ATP causes the disruption and termination of ion exchange and increases reactive oxygen generation, while lipid peroxidation destroys cell membranes. The process of self-digestion (autolysis) of tendon tissue, as shown by the results of the experiments, exhibited cyclical changes in metabolism: enzyme activity (ALT), carbohydrate (LDH), nucleotides, total protein and micronutrients.

  8. COMPUTER IMAGE PROCESSING OF MICROSTRUCTURES OF GRAY CAST IRON AS A TOOL FOR QUANTITATIVE ANALYSIS OF GRAPHITE DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2013-01-01

    Full Text Available Based on gray cast iron microstructures with different lengths of flaky graphite inclusions contained in GOST 3443-87 «Cast iron with various forms of graphite. Methods for determining the structure», the paper shows the possibilities of classifying the microstructures ПГд15, ПГд25, ПГд45, ПГд90, ПГд180, ПГд350, ПГд750 and ПГд1000 based on image processing techniques, which allows a methodology to be developed for the transition from the qualitative scale of microstructures used for the analysis of the graphite phase to a quantitative one.

  9. Large-scale analysis of high-speed atomic force microscopy data sets using adaptive image processing

    Directory of Open Access Journals (Sweden)

    Blake W. Erickson

    2012-11-01

    Full Text Available Modern high-speed atomic force microscopes generate significant quantities of data in a short amount of time. Each image in the sequence has to be processed quickly and accurately in order to obtain a true representation of the sample and its changes over time. This paper presents an automated, adaptive algorithm for the required processing of AFM images. The algorithm adaptively corrects for both common one-dimensional distortions as well as the most common two-dimensional distortions. This method uses an iterative thresholded processing algorithm for rapid and accurate separation of background and surface topography. This separation prevents artificial bias from topographic features and ensures the best possible coherence between the different images in a sequence. This method is equally applicable to all channels of AFM data, and can process images in seconds.
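
    A stripped-down version of the line-wise correction this paper automates: fit a low-order polynomial to the background of each scan line, excluding topography with a crude iterative threshold, and subtract. The mask rule, polynomial order and iteration count below are assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def flatten_lines(img, order=1, n_iter=3):
        """Line-wise AFM flattening with thresholded background selection."""
        out = img.astype(float).copy()
        x = np.arange(out.shape[1])
        for _ in range(n_iter):
            for row in out:                             # each scan line (a view)
                mask = row < row.mean() + row.std()     # crude background mask
                coeffs = np.polyfit(x[mask], row[mask], order)
                row -= np.polyval(coeffs, x)            # subtract background fit
        return out
    ```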

  10. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images and biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  11. Scanning electron microscopy combined with image processing technique: Microstructure and texture analysis of legumes and vegetables for instant meal.

    Science.gov (United States)

    Pieniazek, Facundo; Messina, Valeria

    2016-04-01

    Development and innovation of new technologies are necessary, especially in food quality, because most instrumental techniques for measuring quality properties involve a considerable amount of manual work. Image analysis is a technique that provides objective evaluations from digitalized images and can estimate quality parameters relevant to consumer acceptance. The aim of the present research was to study the effect of freeze drying on the microstructure and texture of legumes and vegetables using scanning electron microscopy at different magnifications combined with image analysis. Cooked and cooked freeze-dried rehydrated legumes and vegetables were analyzed individually by scanning electron microscopy at different magnifications (250, 500, and 1000×). Texture properties were analyzed by a texture analyzer and by image analysis. Significant differences (P < 0.05) were found between image and instrumental texture parameters. A linear trend with a linear correlation was applied for instrumental and image features. Results showed that image features calculated from the grey-level co-occurrence matrix at 1000× had high correlations with instrumental features. In rice, homogeneity and contrast can be applied to evaluate the texture parameters gumminess and adhesiveness; in lentils: contrast, correlation, energy, homogeneity, and entropy for hardness, adhesiveness, gumminess, and chewiness; in potato and carrots: contrast, energy, homogeneity and entropy for adhesiveness, chewiness, hardness, cohesiveness, and resilience. The results revealed that combining scanning electron microscopy with image analysis can be a useful tool to analyze quality parameters in legumes and vegetables.
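
    The grey-level co-occurrence features named above (contrast, homogeneity, energy, correlation) can be computed with scikit-image; a sketch, with quantisation to 64 levels as an assumed preprocessing step (entropy, also used in the study, would have to be derived from the matrix by hand):

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray, levels=64):
        """GLCM texture features for one grey-scale (e.g. SEM) image."""
        q = (gray.astype(float) / gray.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0],
                            levels=levels, symmetric=True, normed=True)
        return {p: float(graycoprops(glcm, p)[0, 0])
                for p in ("contrast", "homogeneity", "energy", "correlation")}
    ```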

  12. Automatic quantitative analysis of microstructure of ductile cast iron using digital image processing

    Directory of Open Access Journals (Sweden)

    Abhijit Malage

    2015-09-01

    Full Text Available Ductile cast iron is also referred to as nodular iron or spheroidal graphite iron. Ductile cast iron contains graphite in the form of discrete nodules in a matrix of ferrite and pearlite. In order to determine the mechanical properties, one needs to determine the volume of phases in the matrix and the nodularity in the microstructure of a metal sample. The manual methods available for this are time consuming, and their accuracy depends on expertise. The paper proposes a novel method for automatic quantitative analysis of the microstructure of ferritic-pearlitic ductile iron which calculates the volume of phases and the nodularity of the sample. This gives results within a very short time (approximately 5 sec) with 98% accuracy for the volume of matrix phases and 90% accuracy for nodule detection and analysis, which are within the range of the standard specified for SG 500/7 and were validated by a metallurgist.

  13. Digital Images Analysis

    OpenAIRE

    2012-01-01

    International audience; A specific field of image processing focuses on the evaluation of image quality and the assessment of image authenticity. A loss of image quality may be due to the various processes through which the image passes. In assessing the authenticity of an image we deal with forgery detection, detection of hidden messages, etc. In this work, we present an overview of these areas; these areas have in common the need to develop theories and techniques to detect changes in the image that it is not detect...

  14. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processi...... to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case....... will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology...

  15. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we place the emphasis on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences.

  16. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  17. Fuzzy image processing in sun sensor

    Science.gov (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  18. Cosmic Infrared Background Fluctuations in Deep Spitzer IRAC Images: Data Processing and Analysis

    CERN Document Server

    Arendt, R G; Moseley, S H; Mather, J

    2009-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer IRAC observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations which would otherwise corrupt our results. The procedures and results for masking bright sources, and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large scale (>~30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power law components. Our measurements of spatial fluctuations ...
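
    The central quantity of such a fluctuation analysis is the azimuthally averaged power spectrum of the (source-masked, mean-subtracted) image; a minimal sketch of that step alone, where the binning scheme and normalisation are simplifications of what the paper actually does:

    ```python
    import numpy as np

    def radial_power_spectrum(img, nbins=30):
        """Azimuthally averaged spatial power spectrum P(q) of a 2-D image."""
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        p2d = (np.abs(f) ** 2 / img.size).ravel()
        ny, nx = img.shape
        y, x = np.indices(img.shape)
        q = np.hypot(x - nx // 2, y - ny // 2).ravel()  # radial frequency (px)
        bins = np.linspace(0.0, q.max() + 1e-9, nbins + 1)
        idx = np.digitize(q, bins)
        return np.array([p2d[idx == i].mean() if np.any(idx == i) else 0.0
                         for i in range(1, nbins + 1)])
    ```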

  19. Image processing using reconfigurable FPGAs

    Science.gov (United States)

    Ferguson, Lee

    1996-10-01

    The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications shows considerable promise to fill the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single-chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high speed. Although DSP chips can process data at high speeds, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provide the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM-based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or the use of reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of...

  20. Morphology of Near- and Semispherical Melted Chips after the Grinding Processes Using Sol-Gel Abrasives Based on SEM-Imaging and Analysis

    Directory of Open Access Journals (Sweden)

    W. Kapłonek

    2016-01-01

    Full Text Available Selected issues related to SEM-imaging and image analysis of spherical melted chips formed during the grinding process are presented and discussed. The general characteristics of this specific group of machining products are given. Chip formation phenomena, as well as their overall morphology, are presented using selected examples of near- and semispherical melted chips occurring singly or concentrated in clusters on the grinding wheel surface after the machining process. Observation of the spherical melted chips and acquisition of their images were carried out for grinding wheel active surfaces with microcrystalline sintered corundum abrasive grains SG™ after the internal cylindrical grinding process of a 100Cr6 steel and Titanium Grade 2® alloy by use of a scanning electron microscope, JEOL JSM-5500LV. Analysis of the obtained SEM micrographs was carried out by Image-Pro® Plus 5.0 software to determine the selected geometrical parameters describing the morphological features of the assessed chips.

  1. Image processing of galaxy photographs

    Science.gov (United States)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  2. Image Processing and its Military Applications

    Directory of Open Access Journals (Sweden)

    V. V.D. Shah

    1987-10-01

    Full Text Available One of the important breakthroughs in image processing is the stand-alone, non-human image understanding system (IUS). The task of understanding images becomes monumental as one tries to define what understanding really is. Both pattern recognition and artificial intelligence are used in addition to traditional signal processing. Scene analysis procedures using edge and texture segmentation can be considered the early stages of the image understanding process; symbolic representation and relationship grammars come at subsequent stages. Thus it is not reasonable to put a man into the loop of signal processing at certain sensors such as remotely piloted vehicles, satellites and spacecraft. Consequently, smart sensors and semi-automatic processes are being developed. Land remote sensing has been another important application of image processing; with the introduction of programmes like Star Wars this particular application has gained special importance from the military's point of view. This paper provides an overview of digital image processing and explores the scope of the technology of remote sensing and IUSs from the military's point of view. An example of the autonomous vehicle project now under progress in the US is described in detail to elucidate the impact of IUSs.

  3. Open framework for management and processing of multi-modality and multidimensional imaging data for analysis and modelling muscular function

    Science.gov (United States)

    García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman

    2014-03-01

    Musculoskeletal disorders (MSD) are becoming a big healthcare economic burden in developed countries with aging populations. Classical methods like biopsy or EMG used in clinical practice for muscle assessment are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for the assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of lower limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and the corresponding image processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.

  4. Image processing system for digital chest X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Cocklin, M.; Gourlay, A.; Jackson, P.; Kaye, G.; Miessler, M. (I.B.M. U.K. Scientific Centre, Winchester (UK)); Kerr, I.; Lams, P. (Radiology Department, Brompton Hospital, London (UK))

    1984-01-01

    This paper investigates the requirements for image processing of digital chest X-ray images. These images are conventionally recorded on film and are characterised by large size, wide dynamic range and high resolution. X-ray detection systems are now becoming available for capturing these images directly in photoelectronic-digital form. The hardware and software facilities required for handling these images are described. These facilities include high resolution digital image displays, programmable video look up tables, image stores for image capture and processing and a full range of software tools for image manipulation. Examples are given of the applications of digital image processing techniques to this class of image.
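
    The programmable video look-up tables mentioned above implement a window/level mapping from the radiograph's wide dynamic range to display grey levels; a sketch of the idea (the bit depths, function name and window limits are illustrative assumptions):

    ```python
    import numpy as np

    def window_lut(lo, hi, bits_in=12):
        """LUT mapping a 12-bit image to 8-bit display grey levels:
        linear between lo and hi, clipped outside the window."""
        codes = np.arange(2 ** bits_in)
        lut = np.clip((codes - lo) / float(hi - lo), 0.0, 1.0) * 255.0
        return lut.astype(np.uint8)

    # usage: display = window_lut(400, 3200)[raw12]  # raw12: integer-typed image
    ```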

  5. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph

    2004-01-01

    The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  6. Fingerprint recognition using image processing

    Science.gov (United States)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Finger Print Recognition is concerned with the difficult task of matching the images of finger print of a person with the finger print present in the database efficiently. Finger print Recognition is used in forensic science which helps in finding the criminals and also used in authentication of a particular person. Since, Finger print is the only thing which is unique among the people and changes from person to person. The present paper describes finger print recognition methods using various edge detection techniques and also how to detect correct finger print using a camera images. The present paper describes the method that does not require a special device but a simple camera can be used for its processes. Hence, the describe technique can also be using in a simple camera mobile phone. The various factors affecting the process will be poor illumination, noise disturbance, viewpoint-dependence, Climate factors, and Imaging conditions. The described factor has to be considered so we have to perform various image enhancement techniques so as to increase the quality and remove noise disturbance of image. The present paper describe the technique of using contour tracking on the finger print image then using edge detection on the contour and after that matching the edges inside the contour.

  7. A brief review of digital image processing

    Science.gov (United States)

    Billingsley, F. C.

    1975-01-01

    The review is presented with particular reference to Skylab S-192 and Landsat MSS imagery. Attention is given to rectification (calibration) processing with emphasis on geometric correction of image distortions. Image enhancement techniques (e.g., the use of high pass digital filters to eliminate gross shading to allow emphasis of the fine detail) are described along with data analysis and system considerations (software philosophy).

  8. Computer image processing: Geologic applications

    Science.gov (United States)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
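
    A minimal sketch of the two techniques this abstract singles out, dark-object subtraction followed by band ratioing (using a low percentile as the "dark object" estimate is an assumption; the report itself uses the darkest scene pixels):

    ```python
    import numpy as np

    def band_ratio(band_a, band_b):
        """Ratio of two co-registered bands after dark-object subtraction."""
        a = band_a.astype(float) - np.percentile(band_a, 0.1)  # path-radiance offset
        b = band_b.astype(float) - np.percentile(band_b, 0.1)
        return a / np.clip(b, 1e-6, None)                      # guard division by ~0
    ```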

  9. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins with...

  10. GIPSY : Groningen Image Processing System

    NARCIS (Netherlands)

    Allen, R. J.; Ekers, R. D.; Terlouw, J. P.; Vogelaar, M. G. R.

    2011-01-01

    GIPSY is an acronym of Groningen Image Processing SYstem. It is a highly interactive software system for the reduction and display of astronomical data. It supports multi-tasking using a versatile user interface, it has an advanced data structure, a powerful script language and good display facilities...

  11. Concept Learning through Image Processing.

    Science.gov (United States)

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  12. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  13. On Processing Hexagonally Sampled Images

    Science.gov (United States)

    2011-07-01

    [Table of ASA coordinate arithmetic: definitions of addition, negation, subtraction and scalar multiplication; the entries were garbled in extraction] • ...coordinate system for addressing a hexagonal grid that provides support for efficient image processing • Efficient ASA methods were shown for gradient...

  14. Method for Assessment of Changes in the Width of Cracks in Cement Composites with Use of Computer Image Processing and Analysis

    Science.gov (United States)

    Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław

    2017-06-01

    Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subjected to initial image processing in the ImageJ development environment and further processing and analysis of results. After registering a series of images of the cracks taken at different times using the SIFT (Scale-Invariant Feature Transform) method, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for the determination of crack width. The distribution and rotation of the intersection lines in a regular layout, the automation of transformations, the management of images and brightness profiles, and the data analysis to determine the width of cracks and their changes over time are performed automatically by custom code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the ability to reduce a sample crack width and achieve its full closure within 28 days of the self-healing process.
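
    The width estimate from one brightness profile can be sketched as follows; the half-depth criterion is an assumed simplification of the article's procedure, while the 6400 dpi resolution fixes the pixel size:

    ```python
    import numpy as np

    def crack_width(profile, pixel_size_mm=25.4 / 6400):
        """Crack width from a brightness profile crossing the crack:
        count pixels darker than half-depth between background and minimum."""
        profile = np.asarray(profile, dtype=float)
        background = np.median(profile)               # local paper/mortar level
        half_depth = (background + profile.min()) / 2.0
        return np.count_nonzero(profile < half_depth) * pixel_size_mm
    ```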

  15. Micromorphometrical analysis of rodent related (SPF) and unrelated (human) gut microbial flora in germfree mice by digital image processing

    NARCIS (Netherlands)

    Veenendaal, D.; Boer, J. de; Waaij, D. van der; Wilkinson, M.H.F.; Meijer, B.C

    Digital image processing (DIP) of bacterial smears is a new method of analysing the composition of the gut microbial flora. This method provides the opportunity to compare and evaluate differences in the complex highly concentrated anaerobic fraction of gut microbial flora, based on

  16. Micromorphometrical analysis of rodent related (SPF) and unrelated (human) gut microbial flora in germfree mice by digital image processing

    NARCIS (Netherlands)

    Veenendaal, D.; Boer, J. de; Waaij, D. van der; Wilkinson, M.H.F.; Meijer, B.C

    1996-01-01

    Digital image processing (DIP) of bacterial smears is a new method of analysing the composition of the gut microbial flora. This method provides the opportunity to compare and evaluate differences in the complex highly concentrated anaerobic fraction of gut microbial flora, based on micromorphologic

  17. Heuristic Analysis Model of Nitrided Layers' Formation Consisting of the Image Processing and Analysis and Elements of Artificial Intelligence

    National Research Council Canada - National Science Library

    Tomasz Wójcicki; Michal Nowicki

    2016-01-01

    .... The objectives of the analyses of the materials for gas nitriding technology are described. The methods of the preparation of nitrided layers, the steps of the process and the construction and operation of devices for gas nitriding are given...

  18. Research on pavement crack recognition methods based on image processing

    Science.gov (United States)

    Cai, Yingchun; Zhang, Yamin

    2011-06-01

    In order to briefly review and analyze pavement crack recognition methods and identify the existing problems in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method and traditional image processing, etc., are discussed, and some effective solutions to those problems are presented.

  19. Quantification of pressure sensitive adhesive, residual ink, and other colored process contaminants using dye and color image analysis

    Science.gov (United States)

    Roy R. Rosenberger; Carl J. Houtman

    2000-01-01

    The USPS Image Analysis (IA) protocol recommends the use of hydrophobic dyes to develop contrast between pressure sensitive adhesive (PSA) particles and cellulosic fibers before using a dirt counter to detect all contaminants that have contrast with the handsheet background. Unless the sample contains no contaminants other than those of interest, two measurement steps...

  20. FITSH -- a software package for image processing

    CERN Document Server

    Pál, András

    2011-01-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (incl. image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. This set of utilities found in the package are built on the top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently us...

  1. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti

    2014-05-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operations in image processing, i.e. image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt & pepper (impulse) noise, additive white Gaussian noise and blurring are the degradations that occur during transmission and capture. Several algorithms exist for denoising the image.

  2. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti Kumari

    2015-11-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operations in image processing, i.e. image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to free the image from noise. Salt & pepper (impulse) noise, additive white Gaussian noise and blurring are the degradations that occur during transmission and capturing. Several algorithms exist for denoising such images.

  3. Flightspeed Integral Image Analysis Toolkit

    Science.gov (United States)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, and it facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
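
    The "integral image" structure the description relies on can be sketched in a few lines: a zero-padded cumulative sum lets any rectangular region be summed with four lookups. A generic Python sketch, not FIIAT's C implementation:

      import numpy as np

      def integral_image(img):
          # Cumulative sum over rows and columns, zero-padded so
          # region queries need no bounds checks.
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
          ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
          return ii

      def rect_sum(ii, r0, c0, r1, c1):
          # Sum of img[r0:r1, c0:c1] in O(1) using four lookups.
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

      img = np.arange(16, dtype=np.int64).reshape(4, 4)
      ii = integral_image(img)
      assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()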

  4. Turbine Blade Image Processing System

    Science.gov (United States)

    Page, Neal S.; Snyder, Wesley E.; Rajala, Sarah A.

    1983-10-01

    A vision system has been developed at North Carolina State University to identify the orientation and three-dimensional location of steam turbine blades that are stacked in an industrial A-frame cart. The system uses a controlled light source for structured illumination and a single camera to extract the information required by the image processing software to calculate the position and orientation of a turbine blade in real time.

  5. Implementation Aspects of Image Processing

    OpenAIRE

    Nordlöv, Per

    2001-01-01

    This Master's Thesis discusses the different trade-offs a programmer needs to consider when constructing image processing systems. First, an overview of the different alternatives available is given, followed by a focus on systems based on general hardware. General, in this case, means mass-market hardware with a low price-performance ratio. The software environment is focused on UNIX, sometimes restricted to Linux, together with C, C++ and ANSI-standardized APIs.

  6. Imaging spectroscopy for scene analysis

    CERN Document Server

    Robles-Kelly, Antonio

    2012-01-01

    This book presents a detailed analysis of spectral imaging, describing how it can be used for the purposes of material identification, object recognition and scene understanding. The opportunities and challenges of combining spatial and spectral information are explored in depth, as are a wide range of applications. Features: discusses spectral image acquisition by hyperspectral cameras, and the process of spectral image formation; examines models of surface reflectance, the recovery of photometric invariants, and the estimation of the illuminant power spectrum from spectral imagery; describes

  7. General logarithmic image processing convolution.

    Science.gov (United States)

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing model (LIP) is a robust mathematical framework, which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations which are computationally expensive in current computers. Therefore, this fast LIP convolution method allows significant speedups to be obtained and is more adequate for real-time processing. In order to support these statements, some experimental results are shown in Section V.
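
    Not the authors' exact formulation, but the standard way to realize a LIP convolution is through the model's isomorphism: map gray tones to ordinary reals, convolve there, and map back. A sketch in Python, assuming gray tones strictly below the bound M:

      import numpy as np
      from scipy.signal import convolve2d

      M = 256.0  # upper bound of the LIP gray-tone range

      def lip_phi(f):
          # Isomorphism mapping LIP gray tones to ordinary reals.
          return -M * np.log(1.0 - f / M)

      def lip_phi_inv(g):
          return M * (1.0 - np.exp(-g / M))

      def lip_convolve(img, kernel):
          # Convolution carried out in the isomorphic domain, then mapped back.
          return lip_phi_inv(convolve2d(lip_phi(img), kernel,
                                        mode="same", boundary="symm"))

      img = np.random.default_rng(1).uniform(0, 255, (32, 32))
      blur = np.ones((3, 3)) / 9.0
      out = lip_convolve(img, blur)

    For a separable kernel, the convolution step in the isomorphic domain can be split into two 1-D passes, which is one source of the speedups the abstract mentions.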

  8. Kinetic Analysis of Dynamic Positron Emission Tomography Data using Open-Source Image Processing and Statistical Inference Tools.

    Science.gov (United States)

    Hawe, David; Hernández Fernández, Francisco R; O'Suilleabháin, Liam; Huang, Jian; Wolsztynski, Eric; O'Sullivan, Finbarr

    2012-05-01

    In dynamic mode, positron emission tomography (PET) can be used to track the evolution of injected radio-labelled molecules in living tissue. This is a powerful diagnostic imaging technique that provides a unique opportunity to probe the status of healthy and pathological tissue by examining how it processes substrates. The spatial aspect of PET is well established in the computational statistics literature. This article focuses on its temporal aspect. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue. In statistical terms, the residue function is essentially a survival function - a familiar life-time data construct. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, volume of distribution and transit time summaries. This review emphasises a nonparametric approach to the estimation of the residue based on a piecewise linear form. Rapid implementation of this by quadratic programming is described. The approach provides a reference for statistical assessment of widely used one- and two-compartmental model forms. We illustrate the method with data from two of the most well-established PET radiotracers, (15)O-H(2)O and (18)F-fluorodeoxyglucose, used for assessment of blood perfusion and glucose metabolism respectively. The presentation illustrates the use of two open-source tools, AMIDE and R, for PET scan manipulation and model inference.
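
    The convolution relationship described above is easy to reproduce numerically. The sketch below simulates a tissue time-course from a toy arterial input and recovers a nonnegative residue by least squares; plain NNLS stands in for the constrained quadratic programming of the paper, and all curves are synthetic:

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import nnls

      dt = 0.1
      t = np.arange(0, 10, dt)
      ca = t * np.exp(-t)                    # toy arterial time-course
      r_true = 0.5 * np.exp(-0.3 * t)        # toy residue (a scaled survival curve)
      ct = np.convolve(ca, r_true)[:len(t)] * dt   # tissue = arterial (x) residue

      # Discrete convolution matrix: ct = A @ r with A[i, j] = ca[i - j] * dt.
      A = toeplitz(ca, np.zeros_like(ca)) * dt
      r_est, _ = nnls(A, ct)                 # nonnegativity-constrained estimate

    Functionals such as flow and mean transit time can then be derived from the estimated residue.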

  9. Kinetic Analysis of Dynamic Positron Emission Tomography Data using Open-Source Image Processing and Statistical Inference Tools

    OpenAIRE

    Hawe, David; Hernández Fernández, Francisco R.; O’Suilleabháin, Liam; Huang, Jian; Wolsztynski, Eric; O’Sullivan, Finbarr

    2012-01-01

    In dynamic mode, positron emission tomography (PET) can be used to track the evolution of injected radio-labelled molecules in living tissue. This is a powerful diagnostic imaging technique that provides a unique opportunity to probe the status of healthy and pathological tissue by examining how it processes substrates. The spatial aspect of PET is well established in the computational statistics literature. This article focuses on its temporal aspect. The interpretation of PET time-course da...

  10. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  11. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, the cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, microarray images of Breast cancer, Myeloid Leukemia and Lymphomas from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively.

  12. Multivariate image analysis in biomedicine.

    Science.gov (United States)

    Nattkemper, Tim W

    2004-10-01

    In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects as well as in clinical studies, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for the application of image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, the state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarks upon future developments in biomedical MVI analysis.

  13. Secondary Ion Mass Spectrometry Imaging of Molecular Distributions in Cultured Neurons and Their Processes: Comparative Analysis of Sample Preparation

    Science.gov (United States)

    Tucker, Kevin R.; Li, Zhen; Rubakhin, Stanislav S.; Sweedler, Jonathan V.

    2012-11-01

    Neurons often exhibit a complex chemical distribution and topography; therefore, sample preparation protocols that preserve structures ranging from relatively large cell somata to small neurites and growth cones are important factors in secondary ion mass spectrometry (SIMS) imaging studies. Here, SIMS was used to investigate the subcellular localization of lipids and lipophilic species in neurons from Aplysia californica. Using individual neurons cultured on silicon wafers, we compared and optimized several SIMS sampling approaches. After an initial step to remove the high salt culturing media, formaldehyde, paraformaldehyde, and glycerol, and various combinations thereof, were tested for their ability to achieve cell stabilization during and after the removal of extracellular media. These treatments improved the preservation of cellular morphology as visualized with SIMS imaging. For analytes >250 Da, coating the cell surface with a 3.2 nm-thick gold layer increased the ion intensity; multiple analytes previously not observed or observed at low abundance were detected, including intact cholesterol and vitamin E molecular ions. However, once a sample was coated, many of the lower molecular mass (<250 Da) signals were attenuated; the best overall results were obtained with cell stabilization using glycerol and 4 % paraformaldehyde. The sample preparation methods described here enhance SIMS imaging of processes of individual cultured neurons over a broad mass range with enhanced image contrast.

  14. Mariner 9-Image processing and products

    Science.gov (United States)

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  15. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, cotton products typically contain many types of foreign fibers which affect their overall quality. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and the gray scale is inverted; then the whole image is divided into several blocks. Next, image pre-decision judges which image blocks contain the target foreign fiber. The image blocks that possibly contain targets are then segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it reconnects targets fractured by blocking, thereby yielding an intact and clear foreign fiber target image.
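
    A compressed sketch of the block-wise pipeline (blocking, pre-decision, Otsu segmentation). The block size and the contrast test used for the pre-decision are illustrative choices, not the paper's calibrated values:

      import numpy as np
      from skimage.filters import threshold_otsu

      def segment_blocks(gray, block=64, min_contrast=15):
          # Split the image into blocks, skip low-contrast ones
          # (pre-decision), and Otsu-threshold the rest.
          h, w = gray.shape
          out = np.zeros_like(gray, dtype=bool)
          for r in range(0, h - h % block, block):
              for c in range(0, w - w % block, block):
                  tile = gray[r:r + block, c:c + block]
                  if tile.max() - tile.min() < min_contrast:  # unlikely to hold a fiber
                      continue
                  out[r:r + block, c:c + block] = tile > threshold_otsu(tile)
          return out

      rng = np.random.default_rng(0)
      gray = rng.integers(0, 40, (256, 256)).astype(np.uint8)  # dim background
      gray[100:110, 30:200] += 120                             # bright "fiber"
      mask = segment_blocks(gray, min_contrast=60)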

  16. Digital image processing for information extraction.

    Science.gov (United States)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  17. Review of Biomedical Image Processing

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2011-11-01

    Full Text Available Abstract This article is a review of the book: 'Biomedical Image Processing', by Thomas M. Deserno, which is published by Springer-Verlag. Salient information that will be useful to decide whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, is presented in the review. This includes information about the book details, a summary, the suitability of the text in course and research work, the framework of the book, its specific content, and conclusions.

  18. Multispectral Image Processing for Plants

    Science.gov (United States)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  19. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  20. FLIPS: Friendly Lisp Image Processing System

    Science.gov (United States)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  1. Fractal methods in image analysis and coding

    OpenAIRE

    Neary, David

    2001-01-01

    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...

  2. Principles and clinical applications of image analysis.

    Science.gov (United States)

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  3. Performance analysis of image processing algorithms for classification of natural vegetation in the mountains of southern California

    Science.gov (United States)

    Yool, S. R.; Star, J. L.; Estes, J. E.; Botkin, D. B.; Eckhardt, D. W.

    1986-01-01

    The earth's forests fix carbon from the atmosphere during photosynthesis. Scientists are concerned that massive forest removals may promote an increase in atmospheric carbon dioxide, with possible global warming and related environmental effects. Space-based remote sensing may enable the production of accurate world forest maps needed to examine this concern objectively. To test the limits of remote sensing for large-area forest mapping, we use Landsat data acquired over a site in the forested mountains of southern California to examine the relative capacities of a variety of popular image processing algorithms to discriminate different forest types. Results indicate that certain algorithms are best suited to forest classification. Differences in performance between the algorithms tested appear related to variations in their sensitivities to spectral variations caused by background reflectance, differential illumination, and spatial pattern by species. Results emphasize the complexity between the land-cover regime, remotely sensed data and the algorithms used to process these data.

  4. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  5. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  6. Wavelet-aided pavement distress image processing

    Science.gov (United States)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2003-11-01

    A wavelet-based pavement distress detection and evaluation method is proposed. This method consists of two main parts, real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by the wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency detail subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, the high-amplitude wavelet coefficient percentage (HAWCP) and the high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (Embedded Zerotrees of Wavelet coding) is developed, which can simultaneously compress the images and reduce the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain the wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
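
    The two detection statistics can be computed directly from a 2-D wavelet decomposition. A sketch with PyWavelets; the wavelet choice, decomposition level, and amplitude threshold are illustrative assumptions:

      import numpy as np
      import pywt

      def hawcp_hfep(img, wavelet="db4", level=2, amp_thresh=30.0):
          # wavedec2 returns [approximation, (H, V, D) per level, ...].
          coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
          approx, details = coeffs[0], coeffs[1:]
          det = np.concatenate([np.abs(band).ravel()
                                for lvl in details for band in lvl])
          hawcp = 100.0 * np.mean(det > amp_thresh)       # high-amplitude share
          det_energy = np.sum(det ** 2)
          hfep = 100.0 * det_energy / (det_energy + np.sum(approx ** 2))
          return hawcp, hfep

      img = np.random.default_rng(0).normal(128, 10, (128, 128))
      print(hawcp_hfep(img))

    In a scheme like the one described above, blocks whose statistics exceed preset thresholds would be flagged as containing distress.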

  7. Web Based Distributed Coastal Image Analysis System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  8. UV image processing to detect diffuse clouds

    Science.gov (United States)

    Armengot, M.; Gómez de Castro, A. I.; López-Santiago, J.; Sánchez-Doreste, N.

    2015-05-01

    Diffuse clouds along the Galaxy are of interest because they are related to star formation, and their physical properties are not well understood. The signal received from most of these structures in UV images is minimal compared to the point sources, and the analysis is hard because the noise is proportionally much higher in these areas. However, digital processing of the images shows that it is possible to enhance and target these clouds. Typically, this kind of treatment is done case by case for specific research areas, and the astrophysicist's work depends on the computer tools and their capabilities for enhancing a particular area based on prior knowledge. Automating this step is the goal of our work, to make the study of these structures in UV images easier. In particular, we have used the GALEX survey images with the aim of learning to automatically detect such clouds, enabling unsupervised detection and graphic enhancement to log them. Our experiments show evidence in the UV images that allows systematic computation and opens the possibility of generalizing the algorithm to find these structures in regions of the universe where they have not been recorded yet.

  9. NASA Hazard Analysis Process

    Science.gov (United States)

    Deckert, George

    2010-01-01

    This viewgraph presentation reviews The NASA Hazard Analysis process. The contents include: 1) Significant Incidents and Close Calls in Human Spaceflight; 2) Subsystem Safety Engineering Through the Project Life Cycle; 3) The Risk Informed Design Process; 4) Types of NASA Hazard Analysis; 5) Preliminary Hazard Analysis (PHA); 6) Hazard Analysis Process; 7) Identify Hazardous Conditions; 8) Consider All Interfaces; 9) Work a Preliminary Hazard List; 10) NASA Generic Hazards List; and 11) Final Thoughts

  10. Image Post-Processing in Dental Practice

    OpenAIRE

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-01-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be practiced in the dental office by ...

  11. Interactive Image Processing demonstrations for the web

    OpenAIRE

    Tella Amo, Marcel

    2011-01-01

    The main goal of this project is to improve the way image processing developers test their algorithms and show them to other people to demonstrate their performance. This diploma thesis aims to provide a framework for developing web applications for ImagePlus, the C++ software development platform of the Image Processing Group of the Technical University of Catalonia (UPC). These web applications are to demonstrate the functionality of the image processing algorithms to any ...

  12. Conflict processing in the rat brain: behavioral analysis and functional µPET imaging using [18F]fluorodeoxyglucose

    Directory of Open Access Journals (Sweden)

    Christine Marx

    2012-02-01

    Full Text Available Conflicts in spatial stimulus-response tasks occur when the task-relevant feature of a stimulus implies a response towards a certain location which does not match the location of stimulus presentation. This conflict leads to increased error rates and longer reaction times, which has been termed the Simon effect. A model of dual-route processing (automatic and intentional) of stimulus features has been proposed, predicting response conflicts if the two routes are incongruent. Although there is evidence that the prefrontal cortex, notably the anterior cingulate cortex, plays a crucial role in conflict processing, the neuronal basis of dual-route architecture is still unknown. In this study, we pursue a novel approach using positron emission tomography (PET) to identify relevant brain areas in a rat model of an auditory Simon task, a neuropsychological interference task, which is commonly used to study conflict processing in humans. For combination with PET we used the metabolic tracer [18F]fluorodeoxyglucose, which accumulates in metabolically active brain cells during the behavioral task. Brain areas involved in conflict processing are supposed to be activated when automatic and intentional route processing lead to different responses (dual route model). Analysis of PET data revealed specific activation patterns for different task settings applicable to the dual route model as established for response conflict processing. The rat motor cortex (M1) may be part of the automatic route or involved in its facilitation, while premotor (M2), prelimbic (PLC) and anterior cingulate cortex (ACC) seemed to be essential for inhibiting the incorrect, automatic response, indicating conflict monitoring functions. Our findings and the remarkable similarities to the pattern of activated regions reported during conflict processing in humans demonstrate that our rodent model opens novel opportunities to investigate the anatomical basis of conflict processing and dual

  13. Conflict Processing in the Rat Brain: Behavioral Analysis and Functional μPET Imaging Using [18F]Fluorodeoxyglucose.

    Science.gov (United States)

    Marx, Christine; Lex, Björn; Calaminus, Carsten; Hauber, Wolfgang; Backes, Heiko; Neumaier, Bernd; Mies, Günter; Graf, Rudolf; Endepols, Heike

    2012-01-01

    Conflicts in spatial stimulus-response tasks occur when the task-relevant feature of a stimulus implies a response toward a certain location which does not match the location of stimulus presentation. This conflict leads to increased error rates and longer reaction times, which has been termed the Simon effect. A model of dual route processing (automatic and intentional) of stimulus features has been proposed, predicting response conflicts if the two routes are incongruent. Although there is evidence that the prefrontal cortex, notably the anterior cingulate cortex (ACC), plays a crucial role in conflict processing, the neuronal basis of dual route architecture is still unknown. In this study, we pursue a novel approach using positron emission tomography (PET) to identify relevant brain areas in a rat model of an auditory Simon task, a neuropsychological interference task, which is commonly used to study conflict processing in humans. For combination with PET we used the metabolic tracer [(18)F]fluorodeoxyglucose, which accumulates in metabolically active brain cells during the behavioral task. Brain areas involved in conflict processing are supposed to be activated when automatic and intentional route processing lead to different responses (dual route model). Analysis of PET data revealed specific activation patterns for different task settings applicable to the dual route model as established for response conflict processing. The rat motor cortex (M1) may be part of the automatic route or involved in its facilitation, while premotor (M2), prelimbic, and ACC seemed to be essential for inhibiting the incorrect, automatic response, indicating conflict monitoring functions. Our findings and the remarkable similarities to the pattern of activated regions reported during conflict processing in humans demonstrate that our rodent model opens novel opportunities to investigate the anatomical basis of conflict processing and dual route architecture.

  14. Sorting Olive Batches for the Milling Process Using Image Processing

    Directory of Open Access Journals (Sweden)

    Daniel Aguilera Puerto

    2015-07-01

    Full Text Available The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial for reaching the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to automatically classify different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results.
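
    A sketch of the histogram-feature plus discriminant-analysis route described above. The images here are random stand-ins for tree-picked and ground-picked olives, so only the shape of the pipeline is meaningful:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def histogram_features(rgb, bins=32):
          # Concatenated per-channel histograms as the sample's feature vector.
          return np.concatenate([
              np.histogram(rgb[..., ch], bins=bins, range=(0, 256),
                           density=True)[0]
              for ch in range(3)])

      rng = np.random.default_rng(0)
      X = np.array([histogram_features(rng.integers(0, 256, (64, 64, 3)))
                    for _ in range(40)])
      y = np.repeat([0, 1], 20)               # 0 = tree olives, 1 = ground olives
      clf = LinearDiscriminantAnalysis().fit(X, y)
      pred = clf.predict(X[:2])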

  15. Determination of the apparent porosity level of refractory concrete during a sintering process using an ultrasonic pulse velocity technique and image analysis

    Directory of Open Access Journals (Sweden)

    LJUBICA M. PAVLOVIĆ

    2010-03-01

    Full Text Available Concrete which undergoes thermal treatment before (pre-cast concrete blocks) and during (concrete embedded in situ) its service life can be applied in plants operating at high temperature and as thermal insulation. Sintering is a process which occurs within a concrete structure in such conditions. Progression of the sintering process can be monitored by the change of the porosity parameters determined with a nondestructive test method - ultrasonic pulse velocity - and a computer program for image analysis. The experiment was performed on samples of corundum and bauxite concrete composites. The apparent porosity of the samples thermally treated at 110, 800, 1000, 1300 and 1500 °C was first investigated with a standard laboratory procedure. Sintering parameters were calculated from creep testing. Loss of strength and material degradation occurred in the concrete when it was subjected to increased temperature and a compressive load. Mechanical properties indicate and monitor changes within the microstructure. The level of surface deterioration after the thermal treatment was determined using the Image Pro Plus program. Mechanical strength was estimated using ultrasonic pulse velocity testing. Nondestructive ultrasonic measurement was used as a qualitative description of the porosity change in specimens resulting from the sintering process. The ultrasonic pulse velocity technique and image analysis proved to be reliable methods for monitoring microstructural change during the thermal treatment and service life of refractory concrete.

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. Combining image-processing and image compression schemes

    Science.gov (United States)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

  18. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  19. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  20. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    Science.gov (United States)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  1. An overview of medical image processing methods

    African Journals Online (AJOL)

    USER

    2010-06-14

    Jun 14, 2010 ... theoretical subjects about methods and algorithms used are explained. In the fourth section, ... image processing techniques such as image segmentation, compression .... A convolution mask like -1 | 0 | 1 could be used in each.
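
    The -1 | 0 | 1 mask quoted in the excerpt is a central-difference edge detector; applied along the rows it responds to vertical edges. A minimal illustration (the toy image is an assumption):

      import numpy as np
      from scipy.ndimage import correlate1d

      img = np.array([[10., 10., 50., 50.],
                      [10., 10., 50., 50.]])
      # correlate1d slides the mask without flipping it, so the response
      # at column x is img[x + 1] - img[x - 1].
      grad_x = correlate1d(img, [-1.0, 0.0, 1.0], axis=1, mode="nearest")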

  2. Fragmentation measurement using image processing

    Directory of Open Access Journals (Sweden)

    Farhang Sereshki

    2016-12-01

    Full Text Available In this research, the existing problems in fragmentation measurement are first reviewed with a view to fast and reliable evaluation. Then, the available methods used for evaluation of blast results are mentioned. The errors produced, especially in recognizing rock fragments in computer-aided methods, and the importance of determining their sizes in image analysis methods, are described. After reviewing previous work, an algorithm is proposed for the automated determination of rock particle boundaries in the Matlab software. This method can determine the particle boundaries automatically in minimal time. The results of the proposed method are compared with those of the Split Desktop and GoldSize software in both automated and manual modes. Comparing the curves extracted from the different methods reveals that the proposed approach is accurately applicable to measuring the size distribution of laboratory samples, while the manual determination of boundaries in the conventional software is very time-consuming, and the results of automated netting of fragments differ greatly from the real values due to errors in the separation of the objects.

  3. Programmable remapper for image processing

    Science.gov (United States)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
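
    The look-up-table remapping at the core of the invention can be sketched in NumPy: a table of source coordinates drives every output pixel. This ignores the patent's collective/interpolative split and its real-time hardware; all names are illustrative:

      import numpy as np

      def remap(img, map_r, map_c, fill=0):
          # Look-up-table remap: output[i, j] = img[map_r[i, j], map_c[i, j]].
          # Out-of-range coordinates fall back to a fill value.
          h, w = img.shape
          valid = (map_r >= 0) & (map_r < h) & (map_c >= 0) & (map_c < w)
          out = np.full(map_r.shape, fill, dtype=img.dtype)
          out[valid] = img[map_r[valid], map_c[valid]]
          return out

      # Example transformation table: horizontal mirror of a 4x4 image.
      img = np.arange(16).reshape(4, 4)
      rows, cols = np.indices((4, 4))
      mirrored = remap(img, rows, 3 - cols)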

  4. Study of Image Processing, Enhancement and Restoration

    Directory of Open Access Journals (Sweden)

    Bhausaheb Shivajirao Shinde

    2011-11-01

    Full Text Available Digital image processing is a means by which the valuable information in observed raw image data can be revealed. A web-based image processing pipeline was created under the ambitious educational program Venus Transit 2004 (VT-2004). The active participants in VT-2004 can apply the basic processing methods to images obtained by their amateur telescopes and/or process an image observed at any observatory involved in the project. The processed result is displayed immediately. In addition, all participants can follow the computation of the Sun-Venus center distance performed at the professional observatory in real time. There is also a possibility to submit an image from their own observations into the database; it will be used for the Earth-Sun distance computation.

  5. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    as a double blinded study. The result of the pre-clinical trial motivated a larger scale clinical trial. Each of the two clinical trials was performed in collaboration with Copenhagen University Hospital, Rigshospitalet, and Copenhagen University, Department of Biostatistics. Evaluations were performed...... by medical doctors and experts in ultrasound, using the developed Image Quality assessment program (IQap). The study concludes that the image quality in terms of spatial resolution, contrast and unwanted artifacts is statistically better using SASB imaging than conventional imaging. The third and final...

  6. Integrated Process Capability Analysis

    Institute of Scientific and Technical Information of China (English)

    Chen, H.T.; Huang, M.L.; Hung, Y.H.; Chen, K.S.

    2002-01-01

    Process Capability Analysis (PCA) is a powerful tool to assess the ability of a process for manufacturing product that meets specifications. The larger process capability index implies the higher process yield, and the larger process capability index also indicates the lower process expected loss. Chen et al. (2001) has applied the indices Cpu, Cpl, and Cpk for evaluating the process capability for a multi-process product with smaller-the-better, larger-the-better, and nominal-the-best spec...

  7. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented.

  8. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d'Optica Aplicada i Processament d'Imatge, Departament d'Optica i Optometria, Universitat Politecnica de Catalunya (Spain)]

    2011-01-01

    Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying a digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique for the compensation of non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.
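
    A common way to implement the compensation step described above is to estimate the slowly varying background with a large-kernel blur and divide it out. A sketch, not the authors' exact technique; the sigma value is an illustrative assumption:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def compensate_illumination(gray, sigma=50.0):
          # Divide by a heavily smoothed copy to flatten slowly varying
          # luminosity, then rescale the result to [0, 1].
          background = gaussian_filter(gray.astype(float), sigma) + 1e-6
          flat = gray / background
          return (flat - flat.min()) / (flat.max() - flat.min() + 1e-12)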

  9. Visualization and processing of images in nano-resolution

    Science.gov (United States)

    Vozenilek, Vit; Pour, Tomas

    2017-02-01

    The paper aims to apply image processing methods widely used in Earth remote sensing to the processing and visualization of images at nano-resolution, because most such images are currently analyzed only by expert researchers without a proper statistical background. The nano-resolution level may range from resolutions of picometres up to the resolution of a light microscope, about 200 nanometers. Images at nano-resolution play an essential role in physics, medicine, and chemistry. Three case studies demonstrate different image visualization and image analysis approaches at different scales within the nano-resolution level. The results of the case studies prove the suitability and applicability of Earth remote sensing methods for image visualization and processing at the nano-resolution level, and even open new dimensions for spatial analysis at such extreme spatial detail.

  10. Coordination in serial-parallel image processing

    Science.gov (United States)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to transform images, and controlling their operation raises a coordination problem. The paper summarizes a model of resource-allocation coordination in relation to the task of synchronizing parallel processes; a genetic algorithm for coordination is developed and its adequacy is verified on a parallel image processing task.

  11. Natural user interfaces in medical image analysis cognitive analysis of brain and carotid artery images

    CERN Document Server

    Ogiela, Marek R

    2014-01-01

    This unique text/reference highlights a selection of practical applications of advanced image analysis methods for medical images. The book covers the complete methodology for processing, analysing and interpreting diagnostic results of sample CT images. The text also presents significant problems related to new approaches and paradigms in image understanding and semantic image analysis. To further engage the reader, example source code is provided for the implemented algorithms in the described solutions. Features: describes the most important methods and algorithms used for image analysis; e

  12. Digital signal processing techniques and applications in radar image processing

    CERN Document Server

    Wang, Bu-Chin

    2008-01-01

    A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: Digital Signal Processing (DSP); antenna and radar operation; and algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and

  13. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  14. Fuzzy Methods and Image Fusion in a Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Jaroslav Vlach

    2012-01-01

    Full Text Available Although the basics of image processing were laid down more than 50 years ago, significant development has occurred mainly in the last 25 years with the advent of personal computers, and today's applications are already very sophisticated and fast. This article is a contribution to the study of the use of fuzzy logic methods and image fusion for image processing using LabVIEW tools for quality management, in this case especially in the jewelry industry.

  15. A Review Paper : Noise Models in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Ajay Kumar Boyat

    2015-04-01

    Full Text Available Noise is always present in digital images during the image acquisition, coding, transmission, and processing steps. Noise is very difficult to remove from digital images without prior knowledge of the noise model. That is why a review of noise models is essential in the study of image denoising techniques. In this paper, we give a brief overview of various noise models. These noise models can be classified by analysis of their origin. In this way, we present a complete and quantitative analysis of the noise models found in digital images.
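
    Three of the models such a review typically covers, simulated on a grayscale image in [0, 1]; the amounts, variances, and photon budget are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.uniform(0.2, 0.8, (64, 64))      # clean stand-in image in [0, 1]

      # Additive model: white Gaussian noise, independent of the signal.
      gaussian = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0, 1)

      # Multiplicative model: speckle, noise proportional to intensity.
      speckle = np.clip(img * (1.0 + rng.normal(0.0, 0.2, img.shape)), 0, 1)

      # Signal-dependent counting model: Poisson (shot) noise at a given
      # photon budget per full-scale pixel.
      photons = 100.0
      poisson = rng.poisson(img * photons) / photons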

  16. Image quality dependence on image processing software in computed radiography

    Directory of Open Access Journals (Sweden)

    Lourens Jochemus Strauss

    2012-06-01

    Full Text Available Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different image appearance was recently released: MUSICA2. Aim. This study quantitatively compares the image quality of images acquired without post-processing (flatfield) with images processed using these two software packages. Methods. Four aspects of image quality were evaluated. An aluminium step-wedge was imaged using constant mA at tube voltages varying from 40 to 117 kV. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated from all steps. Contrast variation with object size was evaluated with visual assessment of images of a Perspex contrast-detail phantom, and an image quality figure (IQF) was calculated. Resolution was assessed using modulation transfer functions (MTFs). Results. SNRs for MUSICA2 were generally higher than for the other two methods. The CNRs were comparable between the two software versions, although MUSICA2 had slightly higher values at lower kV. The flatfield CNR values were better than those for the processed images. All images showed a decrease in CNRs with tube voltage. The contrast-detail measurements showed that both MUSICA programmes improved the contrast of smaller objects. MUSICA2 was found to give the lowest (best) IQF; MTF measurements confirmed this, with values at 3.5 lp/mm of 10% for MUSICA2, 8% for MUSICA and 5% for flatfield. Conclusion. Both MUSICA software packages produced images with better contrast resolution than unprocessed images. MUSICA2 gave slightly better image quality than MUSICA.
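
    The two figures of merit compared in the study reduce to simple ROI statistics. A sketch assuming two uniform regions extracted from neighbouring steps of the wedge; the pooled-standard-deviation form of CNR is one common definition, not necessarily the paper's:

      import numpy as np

      def snr(roi):
          # Signal-to-noise ratio of a nominally uniform region of interest.
          return roi.mean() / roi.std(ddof=1)

      def cnr(roi_a, roi_b):
          # Contrast-to-noise ratio between two steps, with the pooled
          # standard deviation as the noise estimate.
          noise = np.sqrt(0.5 * (roi_a.var(ddof=1) + roi_b.var(ddof=1)))
          return abs(roi_a.mean() - roi_b.mean()) / noise

      rng = np.random.default_rng(0)
      step1 = rng.normal(120.0, 4.0, (50, 50))    # toy pixel values, one step
      step2 = rng.normal(100.0, 4.0, (50, 50))    # the neighbouring step
      print(snr(step1), cnr(step1, step2))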

  17. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez

    2013-01-01

    Full Text Available The main objective of facial edema evaluation is providing the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained by using image processing and the combination of different devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms from facial edema images at different times. Finally, temperature changes are measured using a thermographic camera. Some tests using controlled measurements of every variable are presented in this paper. The results allow the measurement system to be evaluated before its use in a real test, using the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate the facial edema.
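
    For the blush measurement specifically, comparing RGB histograms at two time points can be sketched in a few lines of NumPy. The helper names, bin count and L1 distance below are illustrative choices, not the paper's stated metric:

        import numpy as np

        def rgb_histograms(img, bins=64):
            # Per-channel normalised histograms of an RGB image (H x W x 3).
            return [np.histogram(img[..., c], bins=bins, range=(0, 255),
                                 density=True)[0] for c in range(3)]

        def histogram_distance(img_before, img_after):
            # Mean per-channel L1 distance between the two histogram sets.
            h0, h1 = rgb_histograms(img_before), rgb_histograms(img_after)
            return np.mean([np.abs(a - b).sum() for a, b in zip(h0, h1)])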

  18. Tropical forest phenology and metabolism: Integrated analysis of tower-mounted camera images and tower derived GPP for interpreting ecosystem scale processes

    Science.gov (United States)

    Wu, J.; Restrepo-Coupe, N.; Hayek, M.; Stark, S. C.; Smith, M.; Wiedemann, K.; Marostica, S.; Ferreira, M.; Woodcock, T.; Prohaska, N.; da Silva, R.; Nelson, B. W.; Huete, A. R.; Saleska, S. R.

    2013-12-01

    Seasonal and interannual patterns of leaf development and metabolism are a central topic of global change ecology. However, the seasonality of leaf development in tropical forests remains poorly understood due to the relatively low variation in climate, the high biodiversity of tropical biomes and the limitations of current observation techniques. In this study, we aim to demonstrate the feasibility of using near-surface remote sensing techniques to understand the phenology of an evergreen tropical forest (Tapajos National Forest or TNF site, Santarem, Para, Brazil), and how this phenology affects the metabolism of tropical vegetation. Two continuous years (2010-2011) of daily images from a tower-mounted three-channel (red, green, and near-infrared) TetraCAM ADC camera were analyzed for this study. A new approach was developed based on an automatic image classification scheme which decomposed the images into two components (leaves and bare wood) to extract the seasonality of leaf development. A confusion matrix method was used to assess the accuracy of image classification. MODIS EVI composites (MOD13Q1) were also acquired and processed for the TNF site (5 km x 5 km). The camera-based phenology information was first compared with MODIS EVI, and then combined with tower-based eddy covariance measurements at the same site to quantify the effect of canopy-scale phenology on ecosystem metabolism. We found that: (1) tower-based images revealed a clear seasonal pattern in leaf phenology that was supported by confusion matrix analysis. Matrix analysis gave a 96.7% user accuracy (user accuracy represents the probability that an image pixel classification actually corresponds to that category on the ground) for the leaf component, based on 24 images in 2010 (2 images per month). The tower-based pattern matched that retrieved from satellites (camera-sensed leaf phenology vs monthly MODIS EVI, 01/2010-12/2011, R2 = 0.57, P-value …); (2) … production was extracted by applying a first derivative of
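
    The quoted 96.7% user accuracy follows directly from the confusion-matrix definition given in parentheses. A small NumPy sketch with hypothetical counts (the paper's actual counts are not given here) illustrates the arithmetic:

        import numpy as np

        def user_accuracy(confusion, class_index):
            # Of the pixels classified as this class (one row), the fraction
            # that truly belong to it (rows = classified, columns = reference).
            row = confusion[class_index]
            return row[class_index] / row.sum()

        # Hypothetical 2x2 matrix for the leaf / bare-wood classification:
        cm = np.array([[580, 20],    # classified leaf: 580 correct, 20 wood
                       [30, 370]])   # classified wood
        print(user_accuracy(cm, 0))  # ~0.967, the accuracy quoted for leaves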

  19. Badge Office Process Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haurykiewicz, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dinehart, Timothy Grant [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parker, Robert Young [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-12

    The purpose of this process analysis was to analyze the Badge Offices’ current processes from a systems perspective and consider ways of pursuing objectives set forth by SEC-PS, namely increased customer flow (throughput) and reduced customer wait times. Information for the analysis was gathered for the project primarily through Badge Office Subject Matter Experts (SMEs), and in-person observation of prevailing processes. Using the information gathered, a process simulation model was constructed to represent current operations and allow assessment of potential process changes relative to factors mentioned previously. The overall purpose of the analysis was to provide SEC-PS management with information and recommendations to serve as a basis for additional focused study and areas for potential process improvements in the future.

  20. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  1. Pocket pumped image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kotov, I.V., E-mail: kotov@bnl.gov [Brookhaven National Laboratory, Upton, NY 11973 (United States); O' Connor, P. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Murray, N. [Centre for Electronic Imaging, Open University, Milton Keynes, MK7 6AA (United Kingdom)

    2015-07-01

    The pocket pumping technique is used to detect small electron trap sites. These traps, if present, degrade CCD charge transfer efficiency. To reveal traps in the active area, a CCD is illuminated with a flat field and, before the image is read out, the accumulated charges are moved back and forth a number of times in the parallel direction. As charges are moved over a trap, an electron is removed from the original pocket and re-emitted into the following pocket. As the process repeats, one pocket becomes depleted and the neighboring pocket accumulates excess charge. As a result, a “dipole” signal appears on the otherwise flat background level. The amplitude of the dipole signal depends on the trap pumping efficiency. This paper is focused on the trap identification technique and particularly on new methods developed for this purpose. A sensor with bad segments was deliberately chosen for algorithm development and to demonstrate the sensitivity and power of the new methods in uncovering sensor defects.
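
    A minimal NumPy sketch of the dipole search idea follows; it is not the authors' code, and the 5-sigma threshold and the global noise estimate are illustrative assumptions:

        import numpy as np

        def find_dipoles(img, nsigma=5.0):
            # Locate candidate trap dipoles in a pumped flat field: adjacent
            # pixel pairs along the parallel transfer direction where one
            # pixel shows a deficit and its neighbour a matching excess.
            med = np.median(img)
            sig = img.std()  # crude global noise estimate
            dev = (img.astype(float) - med) / sig
            lo, hi = dev[:-1, :], dev[1:, :]
            dipole = ((lo < -nsigma) & (hi > nsigma)) | \
                     ((lo > nsigma) & (hi < -nsigma))
            return np.argwhere(dipole)  # (row, col) of each pair's first pixel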

  2. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-19

    Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
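
    The core 1-D building block, edges as zero-crossings of the second derivative of a Gaussian-smoothed signal, can be sketched with SciPy. This is a 1-D illustration only; the scale range is an assumption, and the paper's 4-D operator composition is not reproduced here:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def zero_crossings(signal, sigma):
            # Second derivative of the Gaussian-smoothed signal at scale sigma.
            d2 = gaussian_filter1d(np.asarray(signal, float), sigma, order=2)
            s = np.signbit(d2)
            return np.nonzero(s[:-1] != s[1:])[0]  # edge candidate indices

        # Tracking how these indices persist and merge across scales yields
        # the scale-space "fingerprint" that drives the smoothing.
        sigmas = np.geomspace(0.5, 8.0, num=9)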

  3. Water surface capturing by image processing

    Science.gov (United States)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  4. [The model of adaptive primary image processing].

    Science.gov (United States)

    Dudkin, K N; Mironov, S V; Dudkin, A K; Chikhman, V N

    1998-07-01

    A computer model of adaptive segmentation of 2D visual objects was developed. Primary image descriptions are realised via spatial frequency filters and feature detectors performing as self-organised mechanisms. Simulation of the control processes related to attention and to lateral, frequency-selective and cross-orientation inhibition determines the adaptive image processing.

  5. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  6. Statistical Smoothing Methods and Image Analysis

    Science.gov (United States)

    1988-12-01

    83-111. Rosenfeld, A. and Kak, A.C. (1982). Digital Picture Processing. Academic Press, Orlando. Serra, J. (1982). Image Analysis and Mathematical … hypothesis testing. IEEE Trans. Med. Imaging, MI-6, 313-319. Wicksell, S.D. (1925). The corpuscle problem. A mathematical study of a biometric problem

  7. Reflections on ultrasound image analysis.

    Science.gov (United States)

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, due to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis, which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change and highlights some challenges ahead and potential opportunities in ultrasound image analysis, which may have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  8. SUPRIM: easily modified image processing software.

    Science.gov (United States)

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest.

  9. Image processing and communications challenges 5

    CERN Document Server

    2014-01-01

    This textbook collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered. In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems. There are theoretical aspects as well as application papers.

  10. Processing of food, body and emotional stimuli in anorexia nervosa: a systematic review and meta-analysis of functional magnetic resonance imaging studies.

    Science.gov (United States)

    Zhu, Yikang; Hu, Xiaochen; Wang, Jijun; Chen, Jue; Guo, Qian; Li, Chunbo; Enck, Paul

    2012-11-01

    The characteristics of the cognitive processing of food, body and emotional information in patients with anorexia nervosa (AN) are debatable. We reviewed functional magnetic resonance imaging studies to assess whether there were consistent neural basis and networks in the studies to date. Searching PubMed, Ovid, Web of Science, The Cochrane Library and Google Scholar between January 1980 and May 2012, we identified 17 relevant studies. Activation likelihood estimation was used to perform a quantitative meta-analysis of functional magnetic resonance imaging studies. For both food stimuli and body stimuli, AN patients showed increased hemodynamic response in the emotion-related regions (frontal, caudate, uncus, insula and temporal) and decreased activation in the parietal region. Although no robust brain activation has been found in response to emotional stimuli, emotion-related neural networks are involved in the processing of food and body stimuli among AN. It suggests that negative emotional arousal is related to cognitive processing bias of food and body stimuli in AN.

  11. Digital radiography image quality: image processing and display.

    Science.gov (United States)

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony

    2007-06-01

    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and to make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  12. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    Science.gov (United States)

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-06-27

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
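
    The underlying strain measure is simple: the relative change of each landmark-to-landmark segment length over the cardiac cycle. A minimal sketch follows, with hypothetical segment lengths; the paper's landmark tracking and reference-frame choice may differ:

        import numpy as np

        def slice_strain(segment_lengths, ref_frame=0):
            # Lagrangian strain (%) per frame from segment lengths measured
            # between anatomical landmarks on short-axis cine images.
            L = np.asarray(segment_lengths, dtype=float)
            L0 = L[ref_frame]
            return 100.0 * (L - L0) / L0

        # Hypothetical septal segment lengths (mm) over one cardiac cycle:
        lengths = [52.0, 50.1, 47.8, 46.5, 47.2, 49.0, 51.3, 52.0]
        print(slice_strain(lengths).round(1))  # peak shortening ~ -10.6 %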

  13. Breast image pre-processing for mammographic tissue segmentation.

    Science.gov (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  14. Image processing for cameras with fiber bundle image relay.

    Science.gov (United States)

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.
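
    One of the simplest steps in such a pipeline, dividing out the fixed obscuration pattern using a flat-field calibration frame, can be sketched as follows. This is illustrative only; the paper's moiré mitigation, distortion correction and stitching are separate, more involved steps:

        import numpy as np

        def flat_field_correct(raw, flat, eps=1e-6):
            # Divide out the fixed pattern imprinted by the fiber bundle,
            # using a flat (uniformly illuminated) calibration frame.
            gain = flat.astype(float) / max(flat.mean(), eps)
            return raw.astype(float) / np.maximum(gain, eps)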

  15. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    Science.gov (United States)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
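
    AIJ's file handling is built on the FITS standard; the sketch below shows the kind of operation involved, reading a FITS image with Astropy and summing counts in a circular aperture. It illustrates the data access only and is not AIJ's API; the path, coordinates and radius are hypothetical:

        import numpy as np
        from astropy.io import fits

        def aperture_sum(path, x, y, r=8, ext=0):
            # Crude aperture photometry: total counts inside a circular
            # aperture of radius r pixels centred on (x, y).
            data = fits.getdata(path, ext=ext).astype(float)
            yy, xx = np.indices(data.shape)
            mask = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
            return data[mask].sum()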

  16. On some applications of diffusion processes for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Morfu, S., E-mail: smorfu@u-bourgogne.f [Laboratoire d' Electronique, Informatique et Image (LE2i), UMR Cnrs 5158, Aile des Sciences de l' Ingenieur, BP 47870, 21078 Dijon Cedex (France)

    2009-06-29

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that a purely nonlinear diffusion process ruled by the Fisher equation allows contrast enhancement and noise filtering, but produces a blurry image. By contrast, anisotropic diffusion, described by the Perona and Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
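
    The anisotropic part of the scheme is the classic Perona-Malik iteration, which can be sketched compactly in NumPy. This is an illustrative implementation with periodic boundaries via np.roll; the parameter values and the exponential edge-stopping function are common choices, not necessarily the paper's:

        import numpy as np

        def perona_malik(img, niter=20, kappa=15.0, dt=0.2):
            # Anisotropic diffusion: smooth within regions while inhibiting
            # diffusion across strong gradients (edges).
            u = img.astype(float).copy()
            for _ in range(niter):
                # Nearest-neighbour differences in the four directions.
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Edge-stopping conductance g(d) = exp(-(d/kappa)^2).
                u += dt * sum(np.exp(-(d / kappa) ** 2) * d
                              for d in (dn, ds, de, dw))
            return u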

  17. Hybrid Expert Systems In Image Analysis

    Science.gov (United States)

    Dixon, Mark J.; Gregory, Paul J.

    1987-04-01

    Vision systems capable of inspecting industrial components and assemblies have a large potential market if they can be easily programmed and produced quickly. Currently, vision application software written in conventional high-level languages such as C or Pascal is produced by experts in program design, image analysis, and process control. Applications written this way are difficult to maintain and modify. Unless other similar inspection problems can be found, the final program is essentially one-off redundant code. A general-purpose vision system targeted for the Visual Machines Ltd. C-VAS 3000 image processing workstation is described which will make writing image analysis software accessible to those who are experts neither in programming computers nor in image analysis. A significant reduction in the effort required to produce vision systems will be gained through a graphically-driven interactive application generator. Finally, an Expert System will be layered on top to guide the naive user through the process of generating an application.

  18. Reference image selection for difference imaging analysis

    CERN Document Server

    Huckvale, Leo; Sale, Stuart E

    2014-01-01

    Difference image analysis (DIA) is an effective technique for obtaining photometry in crowded fields, relative to a chosen reference image. As yet, however, optimal reference image selection is an unsolved problem. We examine how this selection depends on the combination of seeing, background and detector pixel size. Our tests use a combination of simulated data and quality indicators from DIA of well-sampled optical data and under-sampled near-infrared data from the OGLE and VVV surveys, respectively. We search for a figure-of-merit (FoM) which could be used to select reference images for each survey. While we do not find a universally applicable FoM, survey-specific measures indicate that the effect of spatial under-sampling may require a change in strategy from the standard DIA approach, even though seeing remains the primary criterion. We find that background is not an important criterion for reference selection, at least for the dynamic range in the images we test. For our analysis of VVV data in particu...

  19. Applications of Digital Image Processing 11

    Science.gov (United States)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
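
    The Fourier-domain displacement extraction is closely related to two-frame FFT cross-correlation, sketched below. This illustrates the correlation principle on an image pair, not the paper's single-exposure/superimposed-image procedure:

        import numpy as np

        def displacement(frame_a, frame_b):
            # Mean displacement between two images via the peak of their
            # FFT-based cross-correlation.
            A = np.fft.fft2(frame_a - frame_a.mean())
            B = np.fft.fft2(frame_b - frame_b.mean())
            corr = np.fft.fftshift(np.fft.ifft2(A * np.conj(B)).real)
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            cy, cx = frame_a.shape[0] // 2, frame_a.shape[1] // 2
            return cy - dy, cx - dx  # shift of frame_b relative to frame_a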

  20. Optical image processing by using a photorefractive spatial soliton waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bao-Lai, E-mail: liangbaolai@gmail.com [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Simmonds, Paul J. [Department of Physics and Micron School of Materials Science & Engineering, Boise State University, Boise, ID 83725 (United States); Wang, Zhao-Qi [Institute of Modern Optics, Nankai University, Tianjin 300071 (China)

    2017-04-04

    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons. - Highlights: • A coherent 4-f system with the spatial soliton waveguide as spatial frequency filter. • Manipulate the spatial frequencies of an input optical image. • Achieve edge-enhancement and direct component enhancement operations of an optical image.

  1. ANALYSIS OF FUNDUS IMAGES

    DEFF Research Database (Denmark)

    2000-01-01

    A method for classifying objects in an image as respective arterial or venous vessels, comprising: identifying pixels of the said modified image which are located on a line object; determining which of the said image points is associated with a crossing point or a bifurcation of the respective line object, wherein a crossing point is represented by an image point which is the intersection of four line segments; performing a matching operation on pairs of said line segments for each said crossing point, to determine the path of blood vessels in the image, thereby classifying the line objects in the original image into two arbitrary sets; and thereafter designating one of the sets as representing venous structure and the other of the sets as representing arterial structure, depending on one or more of the following criteria: (a) complexity of structure; (b) average density; (c) average width; (d) tortuosity…

  2. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  3. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana

    2007-01-01

    Full Text Available It is often noted that celebrity endorsement advertising has been recognized as "a ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable reactions from consumers, that is, a higher level of attention to advertising messages, better recall of the message and brand name, and more favorable evaluation of and purchasing intentions toward the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Therefore marketers, led by the belief that celebrities are effective ambassadors for building a positive brand or company image and for improving competitive position, invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, together with recommendations for more effective use of this kind of advertising.

  4. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring…

  5. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring…

  6. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  7. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  8. Optical image processing by using a photorefractive spatial soliton waveguide

    Science.gov (United States)

    Liang, Bao-Lai; Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng; Simmonds, Paul J.; Wang, Zhao-Qi

    2017-04-01

    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons.

  9. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  10. Quantum imaging as an ancilla-assisted process tomography

    Science.gov (United States)

    Ghalaii, M.; Afsary, M.; Alipour, S.; Rezakhani, A. T.

    2016-10-01

    We show how a recent experiment of quantum imaging with undetected photons can basically be described as a (partial) ancilla-assisted process tomography in which the object is described by an amplitude-damping quantum channel. We propose a simplified quantum-circuit version of this scenario, which also enables one to recast quantum imaging in quantum-computation language. Our analogy and analysis may help us to better understand the role of classical and/or quantum correlations in imaging experiments.

  11. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  12. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
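
    Block-wise Shannon entropy is straightforward to compute; the sketch below scores non-overlapping blocks of a grayscale image. The block size, bin count and the use of entropy as a block-selection score are illustrative assumptions about the method:

        import numpy as np

        def block_entropy(img, block=32, bins=32):
            # Shannon entropy of each non-overlapping block; high-entropy
            # blocks carry more structure and are candidates for matching.
            h, w = img.shape[0] // block, img.shape[1] // block
            ent = np.zeros((h, w))
            for i in range(h):
                for j in range(w):
                    tile = img[i*block:(i+1)*block, j*block:(j+1)*block]
                    p, _ = np.histogram(tile, bins=bins, range=(0, 255))
                    p = p / p.sum()
                    nz = p[p > 0]
                    ent[i, j] = -(nz * np.log2(nz)).sum()
            return ent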

  13. Lung Cancer Detection Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mokhled S. AL-TARAWNEH

    2012-08-01

    Full Text Available Recently, image processing techniques have been widely used in several medical areas for image improvement in the earlier detection and treatment stages, where the time factor is very important for discovering abnormalities in target images, especially in various cancer tumours such as lung cancer and breast cancer. Image quality and accuracy are the core factors of this research; image quality assessment as well as improvement depend on the enhancement stage, where low-level pre-processing techniques based on a Gabor filter within Gaussian rules are used. Following the segmentation principles, an enhanced region of the object of interest, which is used as the basic foundation of feature extraction, is obtained. Relying on general features, a normality comparison is made. In this research, the main detected features for accurate image comparison are pixel percentage and mask labelling.
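
    The Gabor-based enhancement stage can be approximated with scikit-image, as in the following sketch. The frequency, orientation count and max-combination rule are illustrative choices, not the paper's exact "Gabor filter within Gaussian rules":

        import numpy as np
        from skimage.filters import gabor

        def gabor_enhance(img, frequency=0.15, n_orientations=4):
            # Combine Gabor filter responses over several orientations,
            # a common low-level enhancement step before segmentation.
            responses = []
            for k in range(n_orientations):
                real, _ = gabor(img, frequency=frequency,
                                theta=k * np.pi / n_orientations)
                responses.append(real)
            return np.max(np.stack(responses), axis=0)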

  14. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    2011-01-01

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  15. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  16. Oncological image analysis: medical and molecular image analysis

    Science.gov (United States)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  17. Integrating NASA's Land Analysis System (LAS) image processing software with an appropriate Geographic Information System (GIS): A review of candidates in the public domain

    Science.gov (United States)

    Rochon, Gilbert L.

    1989-01-01

    A user requirements analysis (URA) was undertaken to determine an appropriate public domain Geographic Information System (GIS) software package for potential integration with NASA's LAS (Land Analysis System) 5.0 image processing system. The necessity for a public domain system was underscored by the perceived need for source code access and flexibility in tailoring the GIS system to the needs of a heterogeneous group of end-users, and by specific constraints imposed by LAS and its user interface, the Transportable Applications Executive (TAE). Subsequently, a review was conducted of a variety of public domain GIS candidates, including GRASS 3.0, MOSS, IEMIS, and two university-based packages, IDRISI and KBGIS. The review method was a modified version of the GIS evaluation process developed by the Federal Interagency Coordinating Committee on Digital Cartography. One IEMIS-derivative product, the ALBE (AirLand Battlefield Environment) GIS, emerged as the most promising candidate for integration with LAS. IEMIS (Integrated Emergency Management Information System) was developed by the Federal Emergency Management Agency (FEMA). ALBE GIS is currently under development at the Pacific Northwest Laboratory under contract with the U.S. Army Corps of Engineers' Engineering Topographic Laboratory (ETL). Accordingly, recommendations are offered with respect to a potential LAS/ALBE GIS linkage and with respect to further system enhancements, including coordination with the development of the Spatial Analysis and Modeling System (SAMS) GIS and with IDM (Intelligent Data Management) developments in Goddard's National Space Science Data Center.

  18. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings and analyses concerning signals and images in the area of medicine. The experimental investigations involve a variety of signals and images, and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  19. Energy preserving QMF for image processing.

    Science.gov (United States)

    Lian, Jian-ao; Wang, Yonghui

    2014-07-01

    Implementation of new biorthogonal filter banks (BFBs) for image compression and denoising is performed, using test images with diversified characteristics. These new BFBs are linear-phase, have odd lengths, and possess a critical feature: the filters preserve signal energy very well. Experimental results show that the proposed filter banks demonstrate promising performance improvement over filter banks widely used in the image processing area, such as the CDF 9/7.
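
    The energy-preservation property is easy to make concrete. The sketch below uses the two-tap orthonormal Haar pair as a simple stand-in; the paper's filters are odd-length biorthogonal designs, so this only illustrates the property being optimized, not the proposed filters:

        import numpy as np

        def haar_analysis(x):
            # One level of the orthonormal Haar filter bank (even-length x).
            x = np.asarray(x, dtype=float)
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            return approx, detail

        rng = np.random.default_rng(0)
        x = rng.normal(size=256)
        a, d = haar_analysis(x)
        # Energy preservation: subband energies sum to the signal energy.
        print(np.allclose((a**2).sum() + (d**2).sum(), (x**2).sum()))  # True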

  20. Selection of Software Development Environment for Image Processing and Analysis

    Institute of Scientific and Technical Information of China (English)

    张昕; 童恒建; 陈晓文; 王海

    2012-01-01

    With the rapid development of aerospace technologies, remote sensing sensor technologies, communication technologies and computer technologies, high-spatial-resolution and hyperspectral remote sensing images are widely applied across industry and agriculture. Quick display and browsing of a large remote sensing image is an important feature of remote sensing image processing and analysis software. Whether for scientific research or for commercial software development, the first important task is to select the software development environment and tools. This paper discusses the advantages and disadvantages of MFC, DirectX, OpenGL and Qt for image display; the conclusions can be consulted when selecting development environments and tools.

  1. Chemical process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    The Office of Worker Health and Safety (EH-5) under the Assistant Secretary for the Environment, Safety and Health of the US Department of Energy (DOE) has published two handbooks for use by DOE contractors managing facilities and processes covered by the Occupational Safety and Health Administration (OSHA) Rule for Process Safety Management of Highly Hazardous Chemicals (29 CFR 1910.119), herein referred to as the PSM Rule. The PSM Rule contains an integrated set of chemical process safety management elements designed to prevent chemical releases that can lead to catastrophic fires, explosions, or toxic exposures. The purpose of the two handbooks, "Process Safety Management for Highly Hazardous Chemicals" and "Chemical Process Hazards Analysis," is to facilitate implementation of the provisions of the PSM Rule within the DOE. The purpose of this handbook, "Chemical Process Hazards Analysis," is to facilitate, within the DOE, the performance of chemical process hazards analyses (PrHAs) as required under the PSM Rule. It provides basic information for the performance of PrHAs, and should not be considered a complete resource on PrHA methods. Likewise, to determine if a facility is covered by the PSM Rule, the reader should refer to the handbook "Process Safety Management for Highly Hazardous Chemicals" (DOE-HDBK-1101-96). Promulgation of the PSM Rule has heightened the awareness of chemical safety management issues within the DOE. This handbook is intended for use by DOE facilities and processes covered by the PSM Rule to facilitate contractor implementation of the PrHA element of the PSM Rule. However, contractors whose facilities and processes are not covered by the PSM Rule may also use this handbook as a basis for conducting process hazards analyses as part of their good management practices. This handbook explains the minimum requirements for PrHAs outlined in the PSM Rule. Nowhere have requirements been added beyond what is specifically required by the rule.

  2. Image processing and computing in structural biology

    NARCIS (Netherlands)

    Jiang, Linhua

    2009-01-01

    With the help of modern techniques of image processing and computing, image data obtained by electron cryo-microscopy of biomolecules can be reconstructed into three-dimensional biological models at sub-nanometer resolution. These models allow answering urgent problems in life science, for instance,

  3. Digital Image Processing in Private Industry.

    Science.gov (United States)

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  4. Automatic quantification of crack patterns by image processing

    Science.gov (United States)

    Liu, Chun; Tang, Chao-Sheng; Shi, Bin; Suo, Wen-Bin

    2013-08-01

    Image processing technologies are proposed to quantify crack patterns. On the basis of these technologies, a software package, "Crack Image Analysis System" (CIAS), has been developed. An image of a soil crack network is used as an example to illustrate the image processing technologies and the operation of the CIAS. The quantification of the crack image involves the following three steps: image segmentation, crack identification and measurement. First, the image is converted to a binary image using a cluster analysis method; noise in the binary image is removed; and crack spaces are fused. Then, the medial axis of the crack network is extracted from the binary image, with which nodes and crack segments can be identified. Finally, various geometric parameters of the crack network can be calculated automatically, such as node number, crack number, clod area, clod perimeter, crack area, width, length, and direction. The thresholds used in the operations are specified by cluster analysis and other innovative methods. As a result, the objects (nodes, cracks and clods) in the crack network can be quantified automatically. The software may be used to study the generation and development of soil crack patterns and rock fractures.
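
    The segmentation-skeleton-measurement chain maps naturally onto scikit-image primitives, as sketched below. Otsu thresholding stands in for the cluster-analysis threshold the CIAS actually uses, and the minimum object size is an arbitrary choice:

        from skimage.filters import threshold_otsu
        from skimage.morphology import skeletonize, remove_small_objects
        from skimage.measure import label

        def crack_metrics(gray):
            # Segment dark cracks, extract the medial axis, and report
            # simple geometric statistics of the crack network.
            cracks = gray < threshold_otsu(gray)  # cracks darker than clods
            cracks = remove_small_objects(cracks, min_size=50)  # denoise
            skel = skeletonize(cracks)
            return {
                "crack_area_fraction": cracks.mean(),
                "total_crack_length_px": int(skel.sum()),  # axis pixel count
                "n_crack_components": int(label(skel).max()),
            }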

  5. Signal and Image Processing with Sinlets

    CERN Document Server

    Davydov, Alexander Y

    2012-01-01

    This paper presents a new family of localized orthonormal bases - sinlets - which are well suited for both signal and image processing and analysis. One-dimensional sinlets are related to specific solutions of the time-dependent harmonic oscillator equation. By construction, each sinlet is differentiable infinitely many times and has a well-defined and smoothly-varied instantaneous frequency known in analytical form. For square-integrable transient signals with infinite support, one-dimensional sinlet basis provides an advantageous alternative to the Fourier transform by rendering accurate signal representation via a countable set of real-valued coefficients. The properties of sinlets make them suitable for analyzing many real-world signals whose frequency content changes with time including radar and sonar waveforms, music, speech, biological echolocation sounds, biomedical signals, seismic acoustic waves, and signals employed in wireless communication systems. One-dimensional sinlet bases can be used to con...

  6. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images.
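
    The clustered-nuclei step is typically attacked with a distance-transform watershed; the sketch below is a generic stand-in for that idea. The Otsu threshold, peak spacing and marker construction are assumptions, not the paper's trained Bayesian-network pipeline:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_clustered_nuclei(gray):
            # Separate touching nuclei: threshold, distance transform,
            # then watershed seeded at the distance-map peaks.
            binary = gray > threshold_otsu(gray)  # nuclei brighter than bg
            dist = ndi.distance_transform_edt(binary)
            peaks = peak_local_max(dist, min_distance=7, labels=binary)
            markers = np.zeros(gray.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-dist, markers, mask=binary)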

  7. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in the early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual task performance of non-radiologists, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.

  8. Improved Strategies for Parallel Medical Image Processing Applications

    Institute of Scientific and Technical Information of China (English)

    WANG Kun; WANG Xiao-ying; LI San-li; CHEN Ying

    2008-01-01

    To meet the demands of highly efficient, real-time computer-assisted diagnosis and screening in medicine, improving the efficiency of parallel medical image processing is of great importance. This article proposes improved strategies for parallel medical image processing applications, which are grouped into two categories. For each category an individual strategy is devised, including a theoretical algorithm for minimizing the execution time. Experiments using mammograms not only justify the validity of the theoretical analysis, with reasonable agreement between the theoretical and measured values, but also show that adopting the improved strategies greatly increases the efficiency of parallel medical image processing.
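
    A common pattern behind such strategies is to decompose the image into bands and process them on separate cores; a minimal sketch using Python's standard multiprocessing (the filter and worker count are placeholders):

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import median_filter

def process_tile(tile):
    # Placeholder per-tile work; any pixel-local filter parallelizes this way.
    return median_filter(tile, size=5)

def parallel_filter(image, n_workers=4):
    """Row-band decomposition: each worker filters one horizontal strip."""
    bands = np.array_split(image, n_workers, axis=0)
    with Pool(n_workers) as pool:
        out = pool.map(process_tile, bands)
    return np.vstack(out)

if __name__ == "__main__":
    mammogram = np.random.rand(2048, 2048)   # stand-in for a real image
    result = parallel_filter(mammogram)
```

    Note that band borders are filtered without their neighbors' context; a production version would overlap the bands by half the filter size.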

  9. Checking Fits With Digital Image Processing

    Science.gov (United States)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors is feasible. The report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components were examined: a mechanical mating flange and an electrical plug.

  10. Currency Recognition System Using Image Processing

    Directory of Open Access Journals (Sweden)

    S. M. Saifullah

    2015-11-01

    With the great technological advances of the last few years in color printing, duplicating, and scanning, counterfeiting problems have become more serious. In the past only authorized printing houses had the ability to print currency, but nowadays it is possible for anyone to print fake banknotes with the help of modern technology such as computers and laser printers. Fake notes are a burning issue in almost every country; like other countries, Bangladesh has been hit hard, and counterfeiting has become a very acute problem there. There is therefore a need for a currency recognition system that can easily distinguish between real and fake banknotes in little time. Our system describes an approach for the verification of Bangladeshi currency banknotes. The currency is verified using image processing techniques. The approach consists of a number of components, including image processing, image segmentation, feature extraction, and image comparison. The system is implemented in MATLAB. Image processing involves changing the nature of an image in order to improve its pictorial information for human interpretation. The image processing software is a collection of functions that extends the capability of the MATLAB numeric computing environment. The result indicates whether the currency is real or fake.
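
    One simple way to realize the image-comparison component is keypoint matching between the test note and a genuine reference; the sketch below uses OpenCV ORB features as an illustration of the general idea, not the system of the paper (file names and the match threshold are invented):

```python
import cv2

def count_good_matches(reference_path, test_path, ratio=0.75):
    """Match ORB keypoints between a genuine note and a test note."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(test, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Lowe's ratio test keeps only unambiguous matches.
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
    return len(good)

# A note whose security features match the reference poorly yields few good
# matches; the accept/reject threshold must be tuned on real data.
if count_good_matches("genuine_note.png", "suspect_note.png") < 50:
    print("suspect note")
```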

  11. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    Science.gov (United States)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  12. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins have come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue and outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. The seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.
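
    The amplitude and phase attributes mentioned above derive from the analytic signal of each seismic trace; a minimal per-trace sketch with SciPy (the trace here is synthetic):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic trace standing in for one column of a 3D seismic volume.
t = np.linspace(0, 1, 500)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)

analytic = hilbert(trace)                    # trace + i * Hilbert transform
inst_amplitude = np.abs(analytic)            # reflection strength (envelope)
inst_phase = np.unwrap(np.angle(analytic))   # instantaneous phase
inst_freq = np.diff(inst_phase) / (2 * np.pi * (t[1] - t[0]))

# Applied trace by trace over a volume, lateral breaks in these attributes
# highlight faults that are hard to see in raw amplitudes.
```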

  13. Extraction of subjective properties in image processing

    OpenAIRE

    2002-01-01

    Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form, or colour. This information concerns objective characteristics of different bodies and is used to extract details in order to perform several different tasks. But on some occasions, another type of information is needed. This is the case when the image processing system is to be applied to an operation involving living bodies. In this case, some other...

  14. Proceedings of the international society for optical engineering biomedical image processing 2

    Energy Technology Data Exchange (ETDEWEB)

    Bovik, A.G.; Howard, V.

    1991-01-01

    This book contains the proceedings of a conference on biomedical image processing. Topics covered include: filtering and reconstruction of biomedical images; analysis, classification, and recognition of biomedical images; and 3-D microscopy.

  15. Paraxial ghost image analysis

    Science.gov (United States)

    Abd El-Maksoud, Rania H.; Sasian, José M.

    2009-08-01

    This paper develops a methodology to model ghost images that are formed by two reflections between the surfaces of a multi-element lens system in the paraxial regime. An algorithm is presented to generate the ghost layouts from the nominal layout. For each possible ghost layout, paraxial ray tracing is performed to determine the ghost Gaussian cardinal points, the size of the ghost image at the nominal image plane, the location and diameter of the ghost entrance and exit pupils, and the location and diameter for the ghost entrance and exit windows. The paraxial ghost irradiance point spread function is obtained by adding up the irradiance contributions for all ghosts. Ghost simulation results for a simple lens system are provided. This approach provides a quick way to analyze ghost images in the paraxial regime.
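
    Under the two-reflection assumption, the number of ghost layouts follows directly from the surface count: every pair of surfaces (i, j) with j > i defines one ghost path, giving N(N-1)/2 layouts for N surfaces. A small enumeration sketch (pure bookkeeping; the paraxial ray trace itself is omitted):

```python
from itertools import combinations

def ghost_layouts(n_surfaces):
    """Each two-reflection ghost reflects first at surface j, travels
    backwards to surface i (< j), reflects again, then continues on
    to the image plane. Returns the list of (i, j) reflection pairs."""
    return [(i, j) for i, j in combinations(range(1, n_surfaces + 1), 2)]

layouts = ghost_layouts(6)       # a 3-element lens has 6 surfaces
print(len(layouts))              # 15 ghost layouts = 6*5/2
# For each pair, a paraxial (ABCD) ray trace through the modified surface
# sequence yields the ghost's cardinal points, pupils, and image size.
```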

  16. Challenges in 3DTV image processing

    Science.gov (United States)

    Redert, André; Berretty, Robert-Paul; Varekamp, Chris; van Geest, Bart; Bruijns, Jan; Braspenning, Ralph; Wei, Qingqing

    2007-01-01

    Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specifically for 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.

  17. Hexagonal image processing a practical approach

    CERN Document Server

    Middleton, Lee

    2006-01-01

    This book provides an introduction to the processing of hexagonally sampled images, includes a survey of the work done in the field, and presents a novel framework for hexagonal image processing (HIP) based on hierarchical aggregates. The strengths offered by hexagonal lattices over square lattices for defining digital images are considerable: higher packing density; uniform connectivity of points (pixels) in the lattice; better angular resolution by virtue of having more nearest neighbours; and superlative representation of curves. The utility of the HIP framework is shown by implementing several applications.

  18. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images using digital image processing tools is presented in this work. When fingerprints have been taken without any care, they can be blurred and in some cases mostly illegible, as in the case presented here, and their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from the Policia Federal Argentina and the EAAF have validated these results.

  19. Intelligent Information Processing in Imaging Fuzes

    Institute of Scientific and Technical Information of China (English)

    王克勇; 郑链; 宋承天

    2003-01-01

    In order to study the problem of intelligent information processing in new types of imaging fuzes, invariant features of target images are extracted, and a radial basis function neural network is used to recognize targets. Owing to its parallel processing ability, robustness, and generalization, the method can recognize the conditions of missile-target encounters and meet the real-time recognition requirements of an imaging fuze. It is shown that target recognition and burst-point control based on an artificial neural network are feasible.

  20. Adaptive filters for color image processing

    Directory of Open Access Journals (Sweden)

    Papanikolaou V.

    1998-01-01

    The color filters that are used to attenuate noise are usually optimized to perform extremely well when dealing with certain noise distributions. Unfortunately, it is often the case that the noise corrupting the image is not known. It is thus beneficial to know a priori the type of noise corrupting the image in order to select the optimal filter. A method of extracting and characterizing the noise within a digital color image using the generalized Gaussian probability density function (pdf) (B.D. Jeffs and W.H. Pun, IEEE Transactions on Image Processing, 4(10), 1451–1456, 1995, and Proceedings of the Int. Conference on Image Processing, 465–468, 1996) is presented. Simulation results are included to demonstrate the effectiveness of the proposed methodology.
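
    The generalized Gaussian family referenced above is available in SciPy as gennorm, and fitting its shape parameter is a compact way to characterize extracted noise (the samples below are synthetic):

```python
import numpy as np
from scipy.stats import gennorm

# Synthetic "noise" samples; in practice these would be residuals from a
# flat image region or from subtracting a denoised image from the original.
rng = np.random.default_rng(0)
noise = rng.laplace(scale=2.0, size=10_000)

beta, loc, scale = gennorm.fit(noise)
# beta ~ 1 indicates Laplacian-like noise, beta ~ 2 Gaussian, and large
# beta approaches uniform; the estimate drives the choice of optimal filter.
print(f"shape beta = {beta:.2f}")
```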

  2. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reducing the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although, to our knowledge, statistically significant effects of manufacturer-recommended image processing algorithms have not previously been demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods thus revealed a significant effect of image processing on cluster detectability.

  3. Study of time-lapse processing for dynamic hydrologic conditions. [electronic satellite image analysis console for Earth Resources Technology Satellites imagery

    Science.gov (United States)

    Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.

    1974-01-01

    The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, Playa Lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most distinctive feature of the system is the capability to time-lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.

  4. Digital image processing of bone - Problems and potentials

    Science.gov (United States)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  5. Ridge extraction from the time-frequency representation (TFR) of signals based on an image processing approach: application to the analysis of uterine electromyogram AR TFR.

    Science.gov (United States)

    Terrien, Jérémy; Marque, Catherine; Germain, Guy

    2008-05-01

    Time-frequency representations (TFRs) of signals are increasingly being used in biomedical research. Analysis of such representations is sometimes difficult, however, and is often reduced to the extraction of ridges, or local energy maxima. In this paper, we describe a new ridge extraction method based on the image processing technique of active contours or snakes. We have tested our method on several synthetic signals and for the analysis of uterine electromyogram or electrohysterogram (EHG) recorded during gestation in monkeys. We have also evaluated a postprocessing algorithm that is especially suited for EHG analysis. Parameters are evaluated on real EHG signals in different gestational periods. The presented method gives good results when applied to synthetic as well as EHG signals. We have been able to obtain smaller ridge extraction errors when compared to two other methods specially developed for EHG. The gradient vector flow (GVF) snake method, or GVF-snake method, appears to be a good ridge extraction tool, which could be used on TFR of mono or multicomponent signals with good results.
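
    A much simplified ridge extractor, which just tracks the per-frame energy maximum of a spectrogram, shows the quantity the snake is designed to estimate more robustly; this is plain peak picking, not the GVF-snake method of the paper:

```python
import numpy as np
from scipy.signal import spectrogram, chirp

fs = 200.0
t = np.arange(0, 10, 1 / fs)
x = chirp(t, f0=5, f1=20, t1=10)            # synthetic mono-component signal

f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
ridge = f[np.argmax(Sxx, axis=0)]           # frequency of max energy per frame

# Peak picking jumps between components in noisy multicomponent TFRs;
# an active-contour (snake) formulation regularizes the ridge instead.
```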

  6. Bistatic SAR: Signal Processing and Image Formation.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the processing steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  7. Image Analysis for Tongue Characterization

    Institute of Scientific and Technical Information of China (English)

    SHENLansun; WEIBaoguo; CAIYiheng; ZHANGXinfeng; WANGYanqing; CHENJing; KONGLingbiao

    2003-01-01

    Tongue diagnosis is one of the essential methods in traditional Chinese medical diagnosis. The accuracy of tongue diagnosis can be improved by tongue characterization. This paper investigates the use of image analysis techniques for tongue characterization by evaluating visual features obtained from images. A tongue imaging and analysis instrument (TIAI) was developed to acquire digital color tongue images. Several novel approaches are presented for color calibration, tongue area segmentation, quantitative analysis and qualitative description of the colors of the tongue and its coating, the thickness and moisture of the coating, and quantification of the cracks of the tongue. The overall accuracy of the automatic analysis of the colors of the tongue and the thickness of the tongue coating exceeds 85%. This work shows the promising future of tongue characterization.

  8. Image processing and mathematical morphology fundamentals and applications

    CERN Document Server

    Shih, Frank Y

    2009-01-01

    In the development of digital multimedia, the importance and impact of image processing and mathematical morphology are well documented in areas ranging from automated vision detection and inspection to object recognition, image analysis, and pattern recognition. Those working in these ever-evolving fields require a solid grasp of basic fundamentals, theory, and related applications, and few books can provide the unique tools for learning contained in this text. Image Processing and Mathematical Morphology: Fundamentals and Applications is a comprehensive, wide-ranging overview of morphological image processing and its applications.
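
    The elementary operations the book builds on are one-liners in most imaging libraries; a short scikit-image sketch (the structuring-element size is arbitrary):

```python
import numpy as np
from skimage.morphology import (binary_erosion, binary_dilation,
                                binary_opening, binary_closing, disk)

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                # toy binary object
mask[30, 45] = True                      # isolated noise pixel

selem = disk(2)                          # disk-shaped structuring element
eroded  = binary_erosion(mask, selem)    # shrinks objects
dilated = binary_dilation(mask, selem)   # grows objects
opened  = binary_opening(mask, selem)    # erosion then dilation: kills specks
closed  = binary_closing(mask, selem)    # dilation then erosion: fills holes
```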

  9. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering, and entertainment; they are classified into contact and contactless scanners, and the latter, though the most commonly used, are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the 3-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analyzed, which allows the 3D coordinates to be determined by triangulation. The resulting information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan performed. The results obtained show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
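
    The triangulation at the heart of such a scanner reduces to one formula: with a known angle between the laser sheet and the camera axis, the lateral displacement of the laser line in the image is proportional to surface height. A geometry sketch under those assumptions (all parameter values invented):

```python
import numpy as np

def height_from_offset(offset_px, pixel_size_mm=0.05, laser_angle_deg=30.0):
    """Convert the laser line's image displacement into surface height.

    offset_px: how far (pixels) the line shifted from its reference position
    pixel_size_mm: object-plane millimetres per pixel at the working distance
    laser_angle_deg: angle between laser sheet and camera optical axis
    """
    lateral_mm = offset_px * pixel_size_mm
    return lateral_mm / np.tan(np.radians(laser_angle_deg))

# One image column: the line shifted 12 px, so the surface rises ~1 mm.
print(f"{height_from_offset(12):.2f} mm")
```

    Repeating this per column and per vertical scan step yields the point cloud.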

  10. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    Science.gov (United States)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  11. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics, focusing on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  12. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has its own unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images, where a medical decision might be made while the patient is still under observation rather than days later.

  14. Image Processing in Intelligent Medical Robotic Systems

    Directory of Open Access Journals (Sweden)

    Shashev Dmitriy

    2016-01-01

    The paper deals with the use of high-performance computing systems with a parallel-operation architecture in intelligent medical systems such as medical robots. A medical robotic system based on computer vision is an automatic control system with strict requirements for reliability, accuracy, and speed of performance. The paper shows the basic block diagram of such an automatic control system and considers the possibility of using a reconfigurable computing environment in such systems. The design principles of the reconfigurable computing environment make it possible to improve the reliability, accuracy, and performance of the whole system many times over. The article contains a brief overview and the theory of the research, and demonstrates the use of reconfigurable computing environments for image preprocessing, namely morphological image processing operations. Results of a successful simulation of the reconfigurable computing environment and an implementation of the morphological image processing operations on a test image in MATLAB Simulink are presented.

  15. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinous and Gluteus Medius bovine muscles.

    Science.gov (United States)

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

    In this study the effects of freeze drying on the microstructure, texture, and tenderness of Semitendinous and Gluteus Medius bovine muscles were analyzed by applying Scanning Electron Microscopy combined with image analysis. Samples were imaged by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were measured with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant correlations (p < 0.05) were found between image and instrumental texture features, and a linear model was applied to relate them. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation, and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with the image features energy and homogeneity. Combining Scanning Electron Microscopy with image analysis can be a useful tool for analyzing quality parameters in meat. SCANNING 38:727-734, 2016.
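
    Texture features of the kind used here can be computed from the Gray Level Co-occurrence Matrix with scikit-image; the distances, angles, and entropy definition below are common defaults, not necessarily the study's exact settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    """Compute co-occurrence texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()
             for p in ("homogeneity", "contrast", "correlation", "energy")}
    # Entropy is not in graycoprops; compute it from the matrix directly.
    p = glcm[glcm > 0]
    feats["entropy"] = float(-np.sum(p * np.log2(p)))
    return feats
```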

  16. Simulation Analysis of Cylindrical Panoramic Image Mosaic

    Directory of Open Access Journals (Sweden)

    ZHU Ningning

    2017-04-01

    With the rise of virtual reality (VR) technology, panoramic images are used more and more widely. They are commonly obtained by multi-camera stitching, taking advantage of homography matrices and image transformations; however, this method destroys the collinearity condition, making 3D reconstruction and other work difficult. This paper proposes a new method for cylindrical panoramic image mosaicking, which sets the number of mosaic cameras, the imaging focal length, the imaging position, and the imaging attitude, simulates the mapping process of the multi-camera rig, and constructs a cylindrical imaging equation from 3D points to the 2D image based on the photogrammetric collinearity equations. This cylindrical imaging equation can be used not only for panoramic stitching but also for precision analysis. Test results show: ① the method can be used for panoramic stitching under multi-camera, inclined-imaging conditions; ② the accuracy of panoramic stitching is affected by three kinds of parameter error (focal length, displacement, and rotation angle), of which the focal length error can be corrected by image resampling, the displacement error is closely related to object distance, and the rotation angle error is affected mainly by the number of cameras.

  17. Image Processing by Compression: An Overview

    OpenAIRE

    2012-01-01

    This article aims to present the various applications of data compression in image processing. For some time, several research groups have been developing methods based on different data compression techniques to classify, segment, and filter digital images and to detect image fakery. It is necessary to analyze the relationship between the different methods and put them into a framework to better understand and better exploit the possibilities that compression provides us respect...

  18. Compression Techniques for Image Processing Tasks

    OpenAIRE

    2013-01-01

    This article aims to present an overview of the different applications of data compression techniques in the image processing field. For some time, several research groups around the world have been developing various methods based on different data compression techniques to classify, segment, and filter digital images and to detect image fakery. In this sense, it is necessary to analyze and clarify the relationship between the different methods and put them into a framework to better...

  19. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, have to be considered to ensure image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of it; the main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, inspections are usually done after the etching process, when any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a considerable share of the cost of the entire PCB fabrication, it is uneconomical to simply discard defective PCBs. In this paper a method to identify defects in natural PCB images, together with the associated practical issues, is addressed using software tools; some of the major types of single-layer PCB defects are pattern cut, pinhole, pattern short, and nick. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
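
    A classic baseline for PCB defect detection, not necessarily the method of this paper, is comparison against a registered golden reference: pixels where the test board differs from the reference mark candidate defects. A sketch with OpenCV (file names and thresholds invented; registration is assumed already done):

```python
import cv2

ref  = cv2.imread("golden_board.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("etched_board.png", cv2.IMREAD_GRAYSCALE)

# Binarize both images so the comparison sees copper vs. no-copper.
_, ref_bin  = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, test_bin = cv2.threshold(test, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

diff = cv2.bitwise_xor(ref_bin, test_bin)        # mismatched copper pixels
diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN,    # drop 1-px registration noise
                        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

n, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
for i in range(1, n):                            # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 20:          # minimum defect size
        print("defect at", stats[i, :4])         # x, y, width, height
```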

  20. Digital image processing in neutron radiography

    CERN Document Server

    Körner, S

    2000-01-01

    An automated neutron tomography facility has been built at the Atominstitut with this detector. Digital Image Processing: Due to the special detector properties of the CCD-camera NR detector, a standard image processing procedure has been developed that always has to be applied when the CCD detector is used. It consists of the following steps: white spot correction, dark frame correction, and flat field correction. Radiation which hits the CCD chip causes an overflow of one or several pixels, which appears in the image as white spots. These disturbing spots have to be removed by means of digital image processing. Several filters were tested, but the results were insufficient. Therefore, a new threshold-median-mean value filter was designed and a proper code was written in IDL (Interactive Data Language). The new filter removes white spots very well while hardly blurring the images. A dark frame is an image made with the camera shutter closed. It contains undesired detector signal caused by readout noise and dark current...
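
    One simple realization of such a white-spot filter compares each pixel with its local median and replaces only the outliers, removing the spots while leaving real structure almost untouched; a Python sketch of the idea (the original code was written in IDL, and the threshold factor here is an assumption):

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_white_spots(img, size=3, k=5.0):
    """Replace pixels far above the local median with the median value."""
    med = median_filter(img, size=size)
    resid = img.astype(float) - med
    # Robust noise scale from the median absolute residual.
    sigma = 1.4826 * np.median(np.abs(resid))
    spots = resid > k * sigma            # only bright outliers (white spots)
    out = img.copy()
    out[spots] = med[spots]
    return out
```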

  1. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormality detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided diagnosis; shape-based medical navigation; and benchmarking and validation of shape representation, analysis, and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  2. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. The images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, together with results.
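
    The phase-shift interferometry step mentioned above is commonly the four-step algorithm: four interferograms are captured with the reference beam shifted by 0, π/2, π, and 3π/2, and an arctangent recovers the object phase. A sketch under that standard assumption, with synthetic frames:

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Recover wrapped phase from four pi/2-shifted interferograms."""
    return np.arctan2(i270 - i90, i0 - i180)

# Synthetic demonstration: a known phase ramp reconstructed from four frames.
phi = np.linspace(0, 4 * np.pi, 256)[None, :] * np.ones((256, 1))
frames = [1 + 0.5 * np.cos(phi + d) for d in (0, np.pi/2, np.pi, 3*np.pi/2)]
wrapped = four_step_phase(*frames)       # equals phi modulo 2*pi
```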

  3. Matlab-supported undergraduate image processing instruction

    Science.gov (United States)

    Dawant, Benoit M.

    1998-06-01

    More and more often, undergraduate students express the desire to take a course on image processing. These students learn the most if the theory and algorithms covered in class can be not only illustrated through examples shown by the instructor during class but also coded, tested, and evaluated by the class participants. In the past, the major hurdle to developing a hands-on approach to image processing instruction has been the amount of programming required to implement relatively simple applications. Typical undergraduate students lack experience with low-level programming languages, and time is spent teaching the language itself rather than experimenting with the algorithms. High-level, interpreted programming languages such as Matlab make it possible to address this issue. Even with very little practical exposure to the language, students can rapidly develop the level of skill required to implement a range of image processing algorithms. This presentation goes over the material covered in a senior-level introductory course in image processing taught at Vanderbilt University. The course itself is taught in a traditional way, but it is supported by laboratories during which students are asked to implement algorithms ranging from connected-component labeling to image deblurring. The students are also assigned projects that span several weeks. Examples of such assignments and projects are presented.

  4. Support Routines for In Situ Image Processing

    Science.gov (United States)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. The suite consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlation. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the...

  5. Real-time image and video processing

    CERN Document Server

    Kehtarnavaz, Nasser

    2006-01-01

    This book presents an overview of the guidelines and strategies for transitioning an image or video processing algorithm from a research environment into a real-time constrained environment. Such guidelines and strategies are scattered in the literature of various disciplines, including image processing, computer engineering, and software engineering, and thus have not previously appeared in one place. By bringing these strategies into one place, the book is intended to serve the greater community of researchers, practicing engineers, and industrial professionals who are interested in taking an image or video processing algorithm from a research environment to an actual real-time implementation.

  6. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  7. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  8. The Pre-Processing of Images Technique for the Materia

    Directory of Open Access Journals (Sweden)

    Yevgeniy P. Putyatin

    2016-08-01

    Image processing analysis is one of the most powerful tools in various research fields, especially in material and polymer science. In the present article, therefore, an attempt has been made to study pre-processing techniques for images of material samples acquired with a Scanning Electron Microscope (SEM). First we prepared material samples of coir fibre (natural) and its polymer composite; the images were then acquired by SEM, and the pre-processing studies were conducted on them. The results presented here were found satisfactory and are in good agreement with our earlier work and with that of other workers in the same field.

  9. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. The technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to detect defects in printed patterns numerically. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented, and they are independent of the device which captures the image (optical, confocal, or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures.

  10. VIPS: an image processing system for large images

    Science.gov (United States)

    Cupitt, John; Martinez, Kirk

    1996-02-01

    This paper describes VIPS (VASARI Image Processing System), an image processing system developed by the authors in the course of the EU-funded projects VASARI (1989-1992) and MARC (1992-1995). VIPS implements a fully demand-driven dataflow image IO (input-output) system. Evaluation of library functions is delayed for as long as possible. When evaluation does occur, all delayed operations evaluate together in a pipeline, requiring no space for storing intermediate images and no unnecessary disc IO. If more than one CPU is available, then VIPS operations will automatically evaluate in parallel, giving an approximately linear speed-up. The evaluation system can be controlled by the application programmer. We have implemented a user interface for the VIPS library which uses expose events in an X window rather than disc output to drive evaluation. This makes it possible, for example, for the user to rotate an 800 MByte image by 12 degrees and immediately scroll around the result.
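
    The demand-driven evaluation described here survives in the modern libvips/pyvips binding: operations only extend a pipeline, and pixels are computed when the result is written. A sketch assuming the pyvips binding is installed (file names invented):

```python
import pyvips

image = pyvips.Image.new_from_file("huge_scan.tif")

# Nothing is computed yet: each call only appends to a dataflow pipeline.
result = image.rotate(12).gaussblur(2)

# Evaluation happens here, tile by tile, in parallel across CPUs, without
# materializing intermediate images.
result.write_to_file("out.tif")
```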

  11. Applications of nuclear magnetic resonance imaging in process engineering

    Science.gov (United States)

    Gladden, Lynn F.; Alexander, Paul

    1996-03-01

    During the past decade, the application of nuclear magnetic resonance (NMR) imaging techniques to problems of relevance to the process industries has been identified. The particular strengths of NMR techniques are their ability to distinguish between different chemical species and to yield information simultaneously on the structure, concentration distribution and flow processes occurring within a given process unit. In this paper, examples of specific applications in the areas of materials and food processing, transport in reactors and two-phase flow are discussed. One specific study, that of the internal structure of a packed column, is considered in detail. This example is reported to illustrate the extent of new, quantitative information of generic importance to many processing operations that can be obtained using NMR imaging in combination with image analysis.

  12. Qualitative and quantitative interpretation of SEM image using digital image processing.

    Science.gov (United States)

    Saladra, Dawid; Kopernik, Magdalena

    2016-10-01

    The aim of this study is to improve the qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program which enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs, so the analysis must be automatic. Tests were performed for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II), including microshear, microtension, and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved with a set of computational tools that includes binarization, simplified expanding, expanding, simple image-statistic thresholding, the filters Laplacian 1 and Laplacian 2, Otsu thresholding, and reverse binarization. Several modifications of known image processing techniques, and combinations of the selected techniques, were applied. The introduced quantitative analysis of digital scanning electron microscope images enables the computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. The study also compares the functionality of the developed program of digital image processing with existing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations.
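
    Stereological crack measurements of this kind are often obtained by skeletonizing the binarized crack mask: the skeleton's pixel count approximates total crack length, and dividing by the field area gives crack length per unit area. A scikit-image sketch on a toy mask:

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary mask standing in for a thresholded SEM crack image.
mask = np.zeros((100, 100), dtype=bool)
mask[50, 10:90] = True          # one horizontal crack
mask[48:53, 40:45] = True       # locally thicker region

skeleton = skeletonize(mask)    # reduce cracks to 1-px-wide centerlines
crack_length_px = int(skeleton.sum())
area_px = mask.size
print(f"crack length per unit area: {crack_length_px / area_px:.4f} 1/px")
```

    A refinement would weight diagonal skeleton steps by sqrt(2) for a better length estimate.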

  13. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  14. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading, and prognosis. It is often necessary to rescale whole slide images of very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods and, as a result of our analysis, try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm; the modified image was then compared to the original image in various aspects. The time needed for the calculations and the quantification performance on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving the whole slide images.
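
    The survey's downscale-then-rescale protocol is easy to reproduce; the sketch below scores several OpenCV interpolation modes with PSNR (the test image and scale factor are placeholders):

```python
import cv2

modes = {
    "nearest":  cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,
    "bicubic":  cv2.INTER_CUBIC,
    "lanczos":  cv2.INTER_LANCZOS4,
}

img = cv2.imread("slide_tile.png")          # placeholder test image
h, w = img.shape[:2]

for name, flag in modes.items():
    small = cv2.resize(img, (w // 4, h // 4), interpolation=flag)
    restored = cv2.resize(small, (w, h), interpolation=flag)
    # Higher PSNR means the round trip lost less information.
    print(name, cv2.PSNR(img, restored))
```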

  15. Health monitoring of rocket engines using image processing

    Science.gov (United States)

    Disimile, Peter J.; Shoe, Bridget; Toy, Norman

    1991-07-01

    Analysis of spectral and video data for anomalous events occurring in the exhaust plume of the Space Shuttle Main Engine (SSME) has shown that the improved time resolution of video tape increases the detection rate of anomalies in the visual region. Preliminary developments and applications of image processing techniques are used to extract information from video data of the SSME exhaust plume. Images have been enhanced to show the exhaust plume shock structure and for the isolation of an anomalous event.

  16. Processing Images of Craters for Spacecraft Navigation

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
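
    The ellipse-fitting step (step 4) is a standard least-squares problem once a crater's edge points are grouped; scikit-image offers a direct estimator, shown here on synthetic rim points:

```python
import numpy as np
from skimage.measure import EllipseModel

# Synthetic noisy rim points standing in for one grouped crater edge.
theta = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([100 + 40 * np.cos(theta),
                       120 + 25 * np.sin(theta)])
pts += np.random.default_rng(0).normal(scale=0.5, size=pts.shape)

model = EllipseModel()
if model.estimate(pts):
    xc, yc, a, b, phi = model.params   # center, semi-axes, orientation
    print(f"center=({xc:.1f}, {yc:.1f}), axes=({a:.1f}, {b:.1f})")
```

    In the full algorithm this geometric fit is then refined against the image intensities to reach subpixel accuracy.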

  17. Union operation image processing of data cubes separately processed by different objective filters and its application to void analysis in an all-solid-state lithium-ion battery.

    Science.gov (United States)

    Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke

    2016-04-01

    In this article, we propose a smart image-analysis method suitable for extracting target features of hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid-state lithium-ion battery, obtained by an automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope, to investigate the spatial configuration of voids inside the battery. To automatically and fully extract the shape and location of the voids, three types of filters were applied consecutively: a median blur filter to extract relatively large voids, a morphological opening filter for small dot-shaped voids, and a morphological closing filter for small voids with concave contrasts. The three data cubes separately processed by these filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of the voids over the entire dimension range contained in the original data.
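
    The union operation itself is a logical OR of the three detections; a minimal volume-processing sketch with SciPy (the kernel sizes and the dark-void threshold convention are assumptions, not the paper's settings):

```python
import numpy as np
from scipy import ndimage as ndi

def detect_voids(volume, thresh):
    """Combine three differently-filtered detections of the same volume."""
    # 1) Median blur favors relatively large voids.
    large = ndi.median_filter(volume, size=5) < thresh
    # 2) Grey opening keeps small dot-shaped dark voids.
    small_dots = ndi.grey_opening(volume, size=3) < thresh
    # 3) Grey closing recovers small voids with concave contrast.
    concave = ndi.grey_closing(volume, size=3) < thresh
    # Union operation: a voxel is a void if any detector flagged it.
    return np.logical_or.reduce([large, small_dots, concave])
```

    Each filter targets one void population, and the OR merges them into a single unified volume without double counting.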

  18. Document image analysis: A primer

    Indian Academy of Sciences (India)

    Rangachar Kasturi; Lawrence O’Gorman; Venu Govindaraju

    2002-02-01

    Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character Recognition (OCR) software that recognizes characters in a scanned document. OCR makes it possible for the user to edit or search the document’s contents. In this paper we briefly describe various components of a document analysis system. Many of these basic building blocks are found in most document analysis systems, irrespective of the particular domain or language to which they are applied. We hope that this paper will help the reader by providing the background necessary to understand the detailed descriptions of specific techniques presented in other papers in this issue.

  20. Onboard Image Processing System for Hyperspectral Sensor.

    Science.gov (United States)

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption or fabrication cost.
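
    The two-dimensional interpolation prediction stage can be sketched as follows; the west/north averaging below is an illustrative guess at the interpolation neighbourhood, not the exact flight algorithm. Residuals from this step are what the adaptive Golomb-Rice coder consumes:

    ```python
    import numpy as np

    def predict_2d(img):
        """Predict each pixel from its west and north neighbours and
        return the residual image to be entropy coded."""
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2  # mean of W and N
        pred[0, 1:] = img[0, :-1]   # first row: west neighbour only
        pred[1:, 0] = img[:-1, 0]   # first column: north neighbour only
        # Top-left pixel is coded raw (prediction 0).
        return img - pred
    ```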

  1. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  2. Digital image processing of vascular angiograms

    Science.gov (United States)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  3. Design Criteria For Networked Image Analysis System

    Science.gov (United States)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special-purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  4. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest-common-denominator image file format, FITS.

  5. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor

    2016-01-01

    The method of speckle pattern processing based on digital image correlation is tested in the current work. The three most widely used formulas for the correlation coefficient are evaluated. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacement are used. The optimal size of the speckle pattern template used to determine the correlation, and hence the speckle pattern displacement, is also considered in the work.
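
    A compact sketch of the correlation search described here, using OpenCV's zero-mean normalized cross-correlation (one of the commonly used correlation-coefficient formulas); the template and search-window sizes are the tunable quantities the record discusses:

    ```python
    import cv2

    def speckle_displacement(ref, cur, x, y, tpl=32, search=16):
        """Track the subset at (x, y) from reference to current 8-bit image."""
        template = ref[y:y + tpl, x:x + tpl]
        region = cur[max(y - search, 0):y + tpl + search,
                     max(x - search, 0):x + tpl + search]
        # Normalized cross-correlation of the template over the region.
        score = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (dx, dy) = cv2.minMaxLoc(score)   # location of best match
        # Convert the match location back to an integer displacement.
        return dx - min(x, search), dy - min(y, search)
    ```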

  6. Applying gestalt theory in digital image processing

    Institute of Scientific and Technical Information of China (English)

    程焱辉; 张立; 王必安

    2012-01-01

    Combining gestalt theory with digital image processing technology, this article analyses how several conclusions of gestalt visual psychology apply to digital image processing, proposes methods for image segmentation and image reconstruction based on gestalt visual psychology, strives for consistency between objective image algorithms and subjective human perception, and explores subjective methods for assessing digital image quality.

  7. An Automatic Number Plate Recognition System under Image Processing

    Directory of Open Access Journals (Sweden)

    Sarbjit Kaur

    2016-03-01

    Automatic Number Plate Recognition (ANPR) is an application of computer vision and image processing technology that takes photographs of vehicles as input images, extracts the number plate from the whole vehicle image, and displays the number plate information as text. The ANPR system consists of four phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whole ANPR system depend on the number plate extraction phase, since the character segmentation and character recognition phases also depend on its output. The accuracy of the number plate extraction phase in turn depends on the quality of the captured vehicle image: the higher the quality of the captured input image, the better the chances of proper extraction of the vehicle number plate area. Existing ANPR methods work well for dark and bright/light images but not for low-contrast, blurred or noisy images, and detection of the exact number plate area with the existing approaches is unsuccessful even after applying existing filtering and enhancement techniques to these types of images. Owing to wrong extraction of the number plate area, character segmentation and character recognition likewise fail with the existing methods. To overcome these drawbacks, an efficient ANPR approach is proposed in which the input vehicle image is first pre-processed by iterative bilateral filtering and adaptive histogram equalization; the number plate is then extracted from the pre-processed vehicle image using morphological operations, image subtraction, image binarization/thresholding, Sobel vertical edge detection and bounding-box analysis. Sometimes the extracted plate area also contains noise, bolts, frames, etc., so the extracted plate area is enhanced by using morphological operations to improve the quality of
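
    A minimal sketch of the proposed pre-processing and plate-localization chain (iterative bilateral filtering, adaptive histogram equalization, vertical edge detection, morphology and bounding-box analysis); parameter values and the aspect-ratio test are illustrative assumptions:

    ```python
    import cv2

    def locate_plate(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        for _ in range(3):                       # iterative bilateral filtering
            gray = cv2.bilateralFilter(gray, 9, 75, 75)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        gray = clahe.apply(gray)                 # adaptive histogram equalization
        edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)   # vertical edges
        _, binary = cv2.threshold(edges, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
        closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Bounding-box analysis: keep regions with a plate-like aspect ratio.
        boxes = [cv2.boundingRect(c) for c in contours]
        return [(x, y, w, h) for x, y, w, h in boxes if 2 < w / max(h, 1) < 6]
    ```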

  8. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  9. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, thus allowing guilty parties to go free for lack of evidence.

  10. Principal Components Analysis In Medical Imaging

    Science.gov (United States)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Refs. 3, 4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6) and dual-energy x-ray imaging (Refs. 6, 7). This paper will progress as follows: a brief introduction to the mathematics of PCA is followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging is given.
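
    A minimal sketch of PCA as data reduction on a set of registered images (e.g., a dual-energy pair), treating each image as one observation and each pixel as a variable; the SVD route below is a standard computation, not the specific method of the paper:

    ```python
    import numpy as np

    def pca_images(stack, n_components=2):
        """stack: (n_images, height, width) array of registered images."""
        n = stack.shape[0]
        X = stack.reshape(n, -1).astype(np.float64)
        X -= X.mean(axis=0)                    # center each pixel across images
        # Right singular vectors are the principal-component images.
        _, s, Vt = np.linalg.svd(X, full_matrices=False)
        components = Vt[:n_components].reshape(n_components, *stack.shape[1:])
        explained = (s ** 2) / np.sum(s ** 2)  # fraction of variance per PC
        return components, explained[:n_components]
    ```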

  11. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Body hair counts during hair length reduction procedures: a comparative study between Computer Assisted Image Analysis after Manual Processing (CAIAMP) and Trichoscan(™).

    Science.gov (United States)

    Van Neste, D J J

    2015-08-01

    To compare two measurement methods for body hair. Calibration of computer-assisted image analysis after manual processing (CAIAMP) showed variation. Images of targets with 'good natural contrast between hair and skin' were taken before hair dye, after hair dye or after hair length reduction without hair extraction or destruction. Data from the same targets were compared with Trichoscan™, quoted for 'unambiguous evaluation of the hair growth after shaving'. CAIAMP detected a total of 337 hairs and showed no statistically significant differences across the three procedures, confirming 'good natural contrast between hair and skin' and that the reduction methods did not affect hair counts. While CAIAMP found a mean number of 19 thick hairs (≥30 μm) before dye, 18 after dye and 20 after hair reduction, Trichoscan™ found in the same sites 44, 73 and 61, respectively. Trichoscan™-generated counts differed statistically significantly from the CAIAMP data. The automated analyses were considered non-specifically influenced by the hair medulla and by natural or artificial skin background. Quality control covering all steps of human intervention and measurement technology is mandatory for body hair measurements in experimental or clinical trials on body hair grooming, shaving or removal. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Image processing techniques for passive millimeter-wave imaging

    Science.gov (United States)

    Lettington, Alan H.; Gleed, David G.

    1998-08-01

    We present our results on the application of image processing techniques for passive millimeter-wave imaging and discuss possible future trends. Passive millimeter-wave imaging is useful in poor weather such as in fog and cloud. Its spatial resolution, however, can be restricted due to the diffraction limit of the front aperture. Its resolution may be increased using super-resolution techniques but often at the expense of processing time. Linear methods may be implemented in real time but non-linear methods which are required to restore missing spatial frequencies are usually more time consuming. In the present paper we describe fast super-resolution techniques which are potentially capable of being applied in real time. Associated issues such as reducing the influence of noise and improving recognition capability will be discussed. Various techniques have been used to enhance passive millimeter wave images giving excellent results and providing a significant quantifiable increase in spatial resolution. Examples of applying these techniques to imagery will be given.

  14. Subband/transform functions for image processing

    Science.gov (United States)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB™ software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
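
    A sketch of the one-stage, four-band block-transform decomposition described above, using the 2x2 Walsh-Hadamard (unnormalized Haar) basis in NumPy rather than the original MATLAB functions; even image dimensions are assumed:

    ```python
    import numpy as np

    def subband4(img):
        """One-stage 2x2 Hadamard block transform, reordered into 4 subbands."""
        a = img[0::2, 0::2].astype(np.int32)   # 2x2 block corners
        b = img[0::2, 1::2].astype(np.int32)
        c = img[1::2, 0::2].astype(np.int32)
        d = img[1::2, 1::2].astype(np.int32)
        ll = a + b + c + d                     # low-low: low-resolution image
        lh = a - b + c - d                     # horizontal edge information
        hl = a + b - c - d                     # vertical edge information
        hh = a - b - c + d                     # diagonal detail
        return ll, lh, hl, hh

    # Cascading subband4 on ll alone yields the octave (7-band) structure;
    # applying it to all four subbands yields the uniform 16-band structure.
    ```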

  15. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters, which are devoted to various new approaches of intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...

  16. Bitplane Image Coding With Parallel Coefficient Processing.

    Science.gov (United States)

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
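
    For intuition, the sketch below extracts the bitplanes that such coders visit, most significant first; BPC-PaCo's contribution lies in coding the coefficients of each plane in SIMD lockstep, which this NumPy illustration does not attempt:

    ```python
    import numpy as np

    def bitplanes(coeffs, n_bits=8):
        """Split non-negative integer coefficients into bitplanes, MSB first."""
        planes = [(coeffs >> b) & 1 for b in range(n_bits - 1, -1, -1)]
        return np.stack(planes)   # shape: (n_bits, height, width)
    ```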

  17. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates is the most widely commercialized and is therefore emphasized, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In the second part, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized, the most important being the almost consistently good quality of the images and the possibilities of image processing.

  18. Tracking image tampering by reverse processing

    Institute of Scientific and Technical Information of China (English)

    Zhao Xianfeng; Chen Kefei; Wang Weinong

    2005-01-01

    To enhance the performance of image authentication, a new fragile watermarking scheme, which exploits the perturbation in reverse processing, is proposed. In verifying the integrity of image contents, the method performs the reverse processing of watermarking. Typically, it de-filters the distributed version or solves an embedding equation instead of actually extracting the watermark. If any tampering has happened, the output is perturbed violently, because such processing enlarges the observation error, which can be regarded as the consequence of illegal manipulation. The drastically perturbed values imply the existence of tampering, and their positions directly delineate the shapes of the manipulated areas. Compared with the commonly used block-based watermarking, the method localizes tampering almost pixel-wise. It also supports adaptive embedding, which better preserves perceptual quality, and avoids the vulnerabilities of block-based approaches.

  19. Image processing with JPEG2000 coders

    Science.gov (United States)

    Śliwiński, Przemysław; Smutnicki, Czesław; Chorażyczewski, Artur

    2008-04-01

    In this note, several wavelet-based image processing algorithms are presented. The denoising algorithm is derived from Donoho's thresholding. The rescaling algorithm reuses the subdivision scheme of Sweldens' lifting, and a sensor linearization procedure exploits system identification algorithms developed for nonlinear dynamic systems. The proposed autofocus algorithm is passive, works in the wavelet domain and relies on properties of the lens transfer function. The common advantage of these algorithms is that they can easily be implemented within a JPEG2000 image compression standard encoder, simplifying the final circuitry (or software package) and reducing power consumption (program size, respectively) compared with solutions based on separate components.
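
    A minimal sketch of the Donoho-style wavelet-thresholding idea behind the denoising algorithm, written with PyWavelets; the biorthogonal 'bior4.4' filter (close to JPEG2000's CDF 9/7) and the universal threshold are illustrative choices, not necessarily the authors':

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="bior4.4", levels=3):
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
        # Estimate the noise level from the finest diagonal subband.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
        # Soft-threshold every detail subband; keep the approximation.
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)
    ```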

  20. Maintenance Process Strategic Analysis

    Science.gov (United States)

    Jasiulewicz-Kaczmarek, M.; Stachowiak, A.

    2016-08-01

    The performance and competitiveness of manufacturing companies depend on the availability, reliability and productivity of their production facilities. Low productivity, downtime and poor machine performance are often linked to inadequate plant maintenance, which in turn can lead to reduced production levels, increasing costs, lost market opportunities and lower profits. These pressures have motivated firms worldwide to explore and embrace proactive maintenance strategies over traditional reactive firefighting methods. The traditional view of maintenance has shifted into an overall view that encompasses overall equipment efficiency, stakeholder management and life cycle assessment. From a practical point of view this requires changes in the approach to maintenance taken by managers and changes in the actions performed within the maintenance area. Managers have to understand that maintenance is not only about repairs and upkeep of machines and devices, but also about actions striving for more efficient resource management and care for the safety and health of employees. The purpose of this work is to present a strategic analysis based on SWOT analysis to identify the opportunities and strengths of the maintenance process, so as to benefit from them as much as possible, as well as to identify weaknesses and threats, so that they can be eliminated or minimized.

  1. Image processing for improved eye-tracking accuracy

    Science.gov (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
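
    A minimal sketch of the basic toolbox applied to a pupil image, assuming a dark pupil on a brighter background (not the authors' exact algorithm); thresholding followed by an intensity-weighted centroid gives a sub-pixel estimate, which is one reason offline analysis can beat the ~10 arcmin hardware figure:

    ```python
    import numpy as np

    def pupil_centroid(frame, thresh=50):
        """Estimate the pupil center of one grayscale video frame."""
        mask = (frame < thresh).astype(np.float64)   # dark pixels = pupil
        total = mask.sum()
        if total == 0:
            return None                               # no pupil found
        ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
        # Averaging over many pupil pixels yields sub-pixel resolution.
        return (xs * mask).sum() / total, (ys * mask).sum() / total
    ```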

  3. Fast image analysis in polarization SHG microscopy.

    Science.gov (United States)

    Amat-Roldan, Ivan; Psilodimitrakopoulos, Sotiris; Loza-Alvarez, Pablo; Artigas, David

    2010-08-02

    Pixel-resolution polarization-sensitive second harmonic generation (PSHG) imaging has recently been shown to be a promising imaging modality that largely enhances the capabilities of conventional intensity-based SHG microscopy. PSHG is able to obtain structural information from the elementary SHG-active structures, which play an important role in many biological processes. Although the technique is of major interest, acquiring such information requires long offline processing, even with current computers. In this paper, we present an approach based on Fourier analysis of the anisotropy signature that allows processing the PSHG images in less than a second on standard single-core computers. This represents a temporal improvement of several orders of magnitude compared to conventional fitting algorithms. This opens up the possibility of fast PSHG information with the subsequent benefit of potential use in medical applications.
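
    The Fourier route can be sketched as follows: with intensity recorded at evenly spaced polarization angles spanning one modulation period, a single FFT along the angle axis yields per-pixel harmonic amplitudes from which the anisotropy parameters are read off (the normalization below is an illustrative assumption):

    ```python
    import numpy as np

    def pshg_harmonics(stack):
        """stack: (n_angles, h, w) intensities at evenly spaced polarization
        angles spanning one modulation period."""
        spec = np.fft.fft(stack, axis=0) / stack.shape[0]
        a0 = np.maximum(spec[0].real, 1e-9)   # DC term (guard against zeros)
        # Normalized harmonic amplitudes carry the anisotropy information.
        return a0, np.abs(spec[1]) / a0, np.abs(spec[2]) / a0
    ```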

  4. Quantitative histogram analysis of images

    Science.gov (United States)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with the necessary files for the LabVIEW Run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB at startup and ~160 MB used for
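
    For comparison, the same descriptive statistics can be sketched in a few lines of NumPy/SciPy (the published program itself is LabVIEW; the 8-bit greyscale assumption and the threshold window mirror the description above):

    ```python
    import numpy as np
    from scipy import stats

    def histogram_stats(gray, lower=0, upper=255):
        """Moments of the 8-bit brightness distribution inside [lower, upper]."""
        pixels = gray[(gray >= lower) & (gray <= upper)].astype(np.float64)
        hist = np.bincount(pixels.astype(np.uint8), minlength=256)
        return {
            "mean": pixels.mean(), "std": pixels.std(),
            "min": pixels.min(), "max": pixels.max(),
            "mode": int(hist.argmax()),          # most frequent brightness
            "median": np.median(pixels),
            "skewness": stats.skew(pixels),
            "kurtosis": stats.kurtosis(pixels),
        }
    ```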

  5. Digital image processing on a small computer system

    Science.gov (United States)

    Danielson, R.

    1981-01-01

    A minicomputer-based image processing facility provides a relatively low-cost entry point for education about image analysis applications in remote sensing. While a minicomputer has sufficient processing power to produce results quite rapidly for low volumes of small images, it does not have sufficient power to perform CPU- or I/O-bound tasks on large images. A system equipped with a display terminal is ideally suited for interactive tasks. Software procurement is a limiting factor for most end users, and software availability may well be the overriding consideration in selecting a particular hardware configuration. The hardware chosen should be compatible with the software, with concern for future expansion.

  6. Data processing for registered multimodal images and its clinical application

    Energy Technology Data Exchange (ETDEWEB)

    Toyama, Hinako [Tokyo Metropolitan Inst. of Gerontology (Japan); Kobayashi, Akio; Uemura, Kouji

    1998-05-01

    We have developed two kinds of data processing methods for co-registered PET and MR images. The 3D brain surface, representing the cortical rim in the transaxial images, was projected onto a 2D plane by utilizing the Mollweide projection, an area-conserving method of displaying the globe as a world map. A quantitative ROI analysis on the brain surface and a 3D superimposed surface display were performed by means of the 2D projection image. A clustered brain image was created by referring to the clustered 3D correlation map of resting CBF, the acetazolamide response and the hyperventilatory response, where each pixel in the brain was labeled with the color representing its cluster number. With this method, the stage of hemodynamic deficiency was evaluated in a patient with occlusion of the internal carotid artery. The differences between the brain images obtained before and after revascularization surgery were also evaluated. (author)

  7. AUV Local Path Planning Based on Acoustic Image Processing

    Institute of Scientific and Technical Information of China (English)

    LI Ye; CHANG Wen-tian; JIANG Da-peng; ZHANG Tie-dong; SU Yu-min

    2006-01-01

    The forward-looking imaging sonar is a necessary vision device for Autonomous Underwater Vehicles (AUVs). Based on the acoustic image received from the forward-looking imaging sonar, the AUV's local path is planned. To build an environment model adapted to local path planning, an iterative binary-conversion (thresholding) algorithm is used for image segmentation. Raw data of the acoustic image, received from the serial port, are processed. After noise is filtered by mathematical morphology, a mathematical environment model for local path planning is established following coordinate transformation. The optimal path is searched for with the distance transform (DT) algorithm. Simulations are conducted to analyse the algorithm, and sea experiments prove it reliable.
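
    The iterative binary-conversion step can be sketched as classic iterative (isodata-style) threshold selection, assuming a grayscale acoustic image with a roughly bimodal histogram; the morphology filtering and distance-transform path search are omitted:

    ```python
    import numpy as np

    def iterative_threshold(img, eps=0.5):
        """Classic iterative threshold selection for image segmentation."""
        t = img.mean()                       # initial guess: global mean
        while True:
            low, high = img[img <= t], img[img > t]
            t_new = 0.5 * (low.mean() + high.mean())
            if abs(t_new - t) < eps:         # converged
                return t_new
            t = t_new

    # Obstacles: pixels brighter than the converged threshold.
    # binary = (img > iterative_threshold(img)).astype(np.uint8)
    ```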

  8. ISAR Echoes Coherent Processing and Imaging

    Institute of Scientific and Technical Information of China (English)

    XING Mengdao; LAN Jinqiao; BAO Zheng; LIAO Guisheng

    2004-01-01

    The general approach to ISAR imaging is the Range-Doppler (RD) imaging approach. In this approach, translational motion compensation (TMC) is first achieved by envelope alignment and autofocus, so that the target can be treated as a rotating target in subsequent processing. This method, however, neglects the scatterers' migration through resolution cells (MTRC) caused by rotational motion, which in practice arises as resolution improves or for large targets. For MTRC compensation, the keystone transformation from SAR is used in this paper. The keystone transformation requires the raw data to be coherent, while in fact ISAR raw data usually are not, so a coherent processing of the raw data is proposed. In this paper, coherent processing of the raw data is performed first, and MTRC is then corrected. After applying a multi-component amplitude-modulation and linear-frequency-modulation (AM-LFM) parameter estimation method, the range-instantaneous-Doppler (RID) ISAR image is obtained. The effectiveness of this algorithm is validated by processing simulated data.

  9. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  10. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Yuki Mawatari,1 Mikiko Fukushima2; 1Igo Ophthalmic Clinic, Kagoshima; 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan. Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: The analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearance, and 55 patients (84.8%) responded positively regarding the usefulness of processed images for predicting postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned about their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  11. Analysis of tectonic-controlled fluvial morphology and sedimentary processes of the western Amazon Basin: an approach using satellite images and digital elevation model

    Directory of Open Access Journals (Sweden)

    Clauzionor L. Silva

    2007-12-01

    An investigation of the tectonic controls on the fluvial morphology and sedimentary processes of an area located southwest of Manaus in the Amazon Basin was conducted using orbital remote sensing data. In this region, low topographic gradients represent a major obstacle for morphotectonic analysis using conventional methods, a limitation that remote sensing data can help overcome. Here, the remote sensing data comprised a digital elevation model (DEM) acquired by the Shuttle Radar Topography Mission (SRTM) and Landsat Thematic Mapper images. Advanced image processing techniques were employed to enhance the topographic textures and provide a three-dimensional visualization, hence allowing interpretation of the morphotectonic elements. This led to the recognition of the main tectonic compartments and several morphostructural features and landforms related to the neotectonic evolution of this portion of the Amazon Basin. Features such as fault scarps, anomalous drainage patterns, aligned ridges, spurs and valleys are expressed in the enhanced images as conspicuous lineaments along NE-SW, NW-SE, E-W and N-S directions. These features are associated with the geometry of alternating horst and graben structures, the latter filled by recent sedimentary units. Morphotectonic interpretation using this approach has proven efficient and permitted the recognition of new tectonic features, here named the Asymmetric Ariaú Graben, the Rombohedral Manacapuru Basin and the Castanho-Mamori Graben.

  12. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher

    2017-01-01

    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  13. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail, with many inherent advantages including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing

  14. Multi-Source Image Analysis.

    Science.gov (United States)

    1979-12-01

    The large body of water labeled 'W' on each image represents the Agua Hedionda lagoon. East of the lagoon the area is primarily agricultural with a

  15. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at Department of Computer Science at University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course, and a robot lab class are outlined....

  16. Multidimensional energy operator for image processing

    Science.gov (United States)

    Maragos, Petros; Bovik, Alan C.; Quatieri, Thomas F.

    1992-11-01

    The 1-D nonlinear differential operator Ψ(f) = (f′)² − f·f″ has been recently introduced to signal processing and has been found very useful for estimating the parameters of sinusoids and the modulating signals of AM-FM signals. It is called an energy operator because it can track the energy of an oscillator source generating a sinusoidal signal. In this paper we introduce the multidimensional extension Φ(f) = ‖∇f‖² − f·∇²f of the 1-D energy operator and briefly outline some of its applications to image processing. We discuss some interesting properties of the multidimensional operator and develop demodulation algorithms to estimate the amplitude envelope and instantaneous frequencies of 2-D spatially-varying AM-FM signals, which can model image texture. The attractive features of the multidimensional operator and the related amplitude/frequency demodulation algorithms are their simplicity, efficiency, and ability to track instantaneously-varying spatial modulation patterns.
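
    In discrete form the 1-D operator reads Ψ[x(n)] = x(n)² − x(n−1)·x(n+1); a minimal NumPy sketch of the 2-D image version defined above (wrap-around boundary handling is an arbitrary choice for brevity):

    ```python
    import numpy as np

    def teager_energy_2d(f):
        """Discrete 2-D energy operator: ||grad f||^2 - f * laplacian(f)."""
        f = f.astype(np.float64)
        fy, fx = np.gradient(f)                          # first derivatives
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)  # 5-point Laplacian
        return fx**2 + fy**2 - f * lap
    ```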

  17. Image Processing Algorithms – A Comprehensive Study

    Directory of Open Access Journals (Sweden)

    Mahesh Prasanna K

    2014-06-01

    Digital image processing is an ever-expanding and dynamic area with applications reaching into our everyday life, such as medicine, space exploration, surveillance, authentication and automated industry inspection, among many other areas. These applications involve different processes like image enhancement and object detection [1]. Implementing such applications on a general-purpose computer can be easier, but not very time-efficient due to additional constraints on memory and other peripheral devices. Application-specific hardware implementation offers much greater speed than a software implementation. With advances in VLSI (Very Large Scale Integration) technology, hardware implementation has become an attractive alternative. Implementing complex computation tasks on hardware, and exploiting the parallelism and pipelining in algorithms, yields significant reductions in execution times [2].

  18. Independent Validation and Verification of Process Design and Optimization Technology Diagnostic and Control of Natural Gas Fired Furnaces via Flame Image Analysis Technology

    Energy Technology Data Exchange (ETDEWEB)

    Cox, Daryl [ORNL

    2009-05-01

    The United States Department of Energy, Industrial Technologies Program has invested in emerging Process Design and Optimization Technologies (PDOT) to encourage the development of new initiatives that might result in energy savings in industrial processes. Gas-fired furnaces present a harsh environment, often making accurate determination of correct air/fuel ratios a challenge. Operation with the correct air/fuel ratio, and especially with balanced burners in multi-burner combustion equipment, can result in improved system efficiency, yielding lower operating costs and reduced emissions. Flame Image Analysis (FIA) offers a way to improve individual burner performance by identifying and correcting fuel-rich burners. The anticipated benefits of this technology are improved furnace thermal efficiency and lower NOx emissions. Independent validation and verification (V&V) testing of the FIA technology was performed at Missouri Forge, Inc., in Doniphan, Missouri by Environ International Corporation (the V&V contractor) and Enterprise Energy and Research (EE&R), the developer of the technology. The test site was selected by the technology developer and accepted by Environ after a meeting held at Missouri Forge. As stated in the solicitation for the V&V contractor, 'The objective of this activity is to provide independent verification and validation of the performance of this new technology when demonstrated in industrial applications. A primary goal for the V&V process will be to independently evaluate if this technology, when demonstrated in an industrial application, can be utilized to save a significant amount of the operating energy cost. The Seller will also independently evaluate the other benefits of the demonstrated technology that were previously identified by the developer, including those related to product quality, productivity, environmental impact, etc'. A test plan was provided by the technology developer and is included as an appendix to the summary report.

  19. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet, Gérard

    2006-01-01

    This title provides the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications.More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.

  20. Color Image Processing and Object Tracking System

    Science.gov (United States)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  1. Astronomical Image Processing with Array Detectors

    CERN Document Server

    Houde, Martin

    2007-01-01

    We address the question of astronomical image processing from data obtained with array detectors. We define and analyze the cases of evenly, regularly, and irregularly sampled maps for idealized (i.e., infinite) and realistic (i.e., finite) detectors. We concentrate on the effect of interpolation on the maps, and the choice of the kernel used to accomplish this task. We show how the normalization intrinsic to the interpolation process must be carefully accounted for when dealing with irregularly sampled grids. We also analyze the effect of missing or dead pixels in the array, and their consequences for the Nyquist sampling criterion.

  2. MUTUAL IMAGE TRANSFORMATION ALGORITHMS FOR VISUAL INFORMATION PROCESSING AND RETRIEVAL

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2017-01-01

    Subject of Research. The paper deals with methods and algorithms for the mutual transformation of related pairs of images in order to enhance the capabilities of cross-modal multimedia retrieval (CMMR) technologies. We have thoroughly studied the problem of mutual transformation of face images of various kinds (e.g. photos and drawn pictures). This problem is widely represented in practice. Research in this area is based on existing datasets. The algorithms proposed in this paper can be applied to arbitrary pairs of related images due to their unified mathematical specification. Method. We present three image transformation algorithms. The first one is based on principal component analysis and the Karhunen-Loève transform (1DPCA/1DKLT). Unlike the existing solution, it does not use the training set during the transformation process. The second algorithm assumes generation of an image population. The third algorithm performs the transformation based on two-dimensional principal component analysis and the Karhunen-Loève transform (2DPCA/2DKLT). Main Results. The experiments on image transformation and population generation have revealed the main features of each algorithm. The first algorithm allows construction of an accurate and stable model of transition between two given sets of images. The second algorithm can be used to add new images to existing bases, and the third algorithm is capable of performing the transformation outside the training dataset. Practical Relevance. Taking into account the qualities of the proposed algorithms, we provide recommendations concerning their application. Possible scenarios include construction of a transition model for related pairs of images, mutual transformation of images inside and outside the dataset, as well as population generation in order to increase the representativeness of existing datasets. Thus, the proposed algorithms can be used to improve the reliability of face recognition performed on images

  3. Cancer detection by quantitative fluorescence image analysis.

    Science.gov (United States)

    Parry, W L; Hemstreet, G P

    1988-02-01

    Quantitative fluorescence image analysis is a rapidly evolving biophysical cytochemical technology with the potential for multiple clinical and basic research applications. We report the application of this technique for bladder cancer detection and discuss its potential usefulness as an adjunct to methods used currently by urologists for the diagnosis and management of bladder cancer. Quantitative fluorescence image analysis is a cytological method that incorporates 2 diagnostic techniques, quantitation of nuclear deoxyribonucleic acid and morphometric analysis, in a single semiautomated system to facilitate the identification of rare events, that is, individual cancer cells. When compared to routine cytopathology for detection of bladder cancer in symptomatic patients, quantitative fluorescence image analysis demonstrated greater sensitivity (76 versus 33 per cent) for the detection of low grade transitional cell carcinoma. The specificity of quantitative fluorescence image analysis in a small control group was 94 per cent, and with the manual method for quantitation of absolute nuclear fluorescence intensity in the screening of high risk asymptomatic subjects the specificity was 96.7 per cent. The more familiar flow cytometry is another fluorescence technique for measurement of nuclear deoxyribonucleic acid. However, rather than identifying individual cancer cells, flow cytometry identifies cellular pattern distributions, that is, the ratio of normal to abnormal cells. Numerous studies by others have shown that flow cytometry is a sensitive method to monitor patients with diagnosed urological disease. Based upon results in separate quantitative fluorescence image analysis and flow cytometry studies, it appears that these 2 fluorescence techniques may be complementary tools for urological screening, diagnosis and management, and that they also may be useful separately or in combination to elucidate the oncogenic process, determine the biological potential of tumors

  4. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine Learning techniques have been useful in almost every field of concern. Data Mining, a branch of Machine Learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are on top priority. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, one of the most significant retinal anatomical structures, are analysed for diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering and first-level classification using the Naïve Bayes classification algorithm, and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel against the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; however, the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with the existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
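
    The pipeline stages named above can be sketched with scikit-image and scikit-learn; the Gabor frequency, cluster counts and the placeholder training labels below are illustrative assumptions rather than the paper's tuned settings, and `fundus.png` is a hypothetical input file.

```python
# Condensed sketch of the stated stages: green-channel extraction, Gabor
# filtering, PCA feature construction, k-means, then Naive Bayes.
import numpy as np
from skimage import io, filters
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rgb = io.imread("fundus.png")                 # hypothetical STARE image
green = rgb[..., 1].astype(float) / 255.0     # green channel carries most vessel contrast

# Gabor responses at several orientations form the per-pixel feature vector
feats = [filters.gabor(green, frequency=0.2, theta=t)[0]
         for t in np.linspace(0, np.pi, 6, endpoint=False)]
X = np.stack([f.ravel() for f in feats], axis=1)

X = PCA(n_components=3).fit_transform(X)                   # feature construction
coarse = KMeans(n_clusters=2, n_init=10).fit_predict(X)    # unsupervised first pass

# First-level supervised step; real labels would come from ground-truth masks
clf = GaussianNB().fit(X, coarse)             # placeholder labels, illustration only
vessel_mask = clf.predict(X).reshape(green.shape)
```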

  5. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human...... inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several...... extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality, presented in two research directions. Initially...

  6. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.
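
    A minimal sketch of the core segmentation step under the stated approach: every pixel spectrum is treated as a sample and a Gaussian Mixture Model assigns it to a segment. The file name, band count and component count are assumptions, and the paper's spectral band-selection scheme is omitted.

```python
# GMM-based segmentation of a multispectral image cube, illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

cube = np.load("multispectral_cube.npy")      # hypothetical (H, W, bands) array
H, W, B = cube.shape
pixels = cube.reshape(-1, B)                  # one spectrum per pixel

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels).reshape(H, W)   # segmentation map
```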

  7. High Throughput Multispectral Image Processing with Applications in Food Science.

    Directory of Open Access Journals (Sweden)

    Panagiotis Tsakanikas

    Full Text Available Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.

  8. Pain related inflammation analysis using infrared images

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently appearing form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with the statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of the inflammation related to RA and OA using k-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion reflects that, in most of the cases, RA-oriented inflammation affects bilateral knees, whereas inflammation related to OA is present in the unilateral knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
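
    A toy version of objective ii), assuming a thermogram is available as a 2-D temperature array (`knee_thermogram.npy` is a hypothetical file, and the cluster count is an arbitrary choice): k-means groups pixels by temperature, and the hottest cluster approximates the inflamed region.

```python
# Group thermogram pixels by temperature with k-means to delineate inflammation.
import numpy as np
from sklearn.cluster import KMeans

temps = np.load("knee_thermogram.npy")        # hypothetical (H, W) temperature map
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(temps.reshape(-1, 1))
regions = km.labels_.reshape(temps.shape)
hottest = np.argmax(km.cluster_centers_)      # cluster with the highest mean temperature
inflamed_fraction = np.mean(regions == hottest)
```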

  9. A framework for image processing, analysis and visualization of materials microstructures using the ImageJ package

    Institute of Scientific and Technical Information of China (English)

    Asad Ullah; 刘国权; 王浩; Dil Faraz Khan; Matiullah Khan

    2012-01-01

    Digital image processing, segmentation and analysis of microstructural images are crucial to obtain three-dimensional (3D) information about the features present in a microstructure, such as particles or grains. There are several commercial as well as public domain packages available for processing and analysis of images; "ImageJ" is one of them, whose wide adoption, long existence and extensible plugin style have made it a tool of choice for scientists from a broad range of disciplines. It contains almost all of the basic and latest functionalities required to process, segment, reconstruct and visualize materials microstructural images, along with analysis tools (for instance 'Particle Analyzer', '3D object counter', '3D Roi Manager' and so on) for sophisticated statistical processing of groups of particles. Although it is very popular in the biomedical research field and is considered to be a useful and efficient open-source image processing and analysis software, it is little known in materials science. Addressed to the materials community, and especially to materials science and engineering professionals without image processing and analysis experience, this paper briefly introduces ImageJ and proposes a general framework for applying it to the processing, segmentation and analysis of serial-section images of microstructures in three-dimensional space.

  10. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)]

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  11. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    Science.gov (United States)

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
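
    One illustrative daily-QA measurement in the spirit of the workflow described (file names and ROI positions are assumptions, not the authors' protocol): estimate SNR from the uniform phantom image and append it to a time series for trend monitoring.

```python
# Toy daily-QA step: SNR from a uniform phantom image, logged as a time series.
import numpy as np

img = np.load("daily_phantom.npy")            # hypothetical 2-D phantom slice
h, w = img.shape
signal_roi = img[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20]   # central uniform region
noise_roi = img[:40, :40]                                    # background corner
snr = signal_roi.mean() / noise_roi.std()

with open("qa_timeseries.csv", "a") as f:     # long-term stability record
    f.write(f"{snr:.2f}\n")
```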

  12. Piecewise flat embeddings for hyperspectral image analysis

    Science.gov (United States)

    Hayes, Tyler L.; Meinhold, Renee T.; Hamilton, John F.; Cahill, Nathan D.

    2017-05-01

    Graph-based dimensionality reduction techniques such as Laplacian Eigenmaps (LE), Local Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP), and Kernel Principal Components Analysis (KPCA) have been used in a variety of hyperspectral image analysis applications for generating smooth data embeddings. Recently, Piecewise Flat Embeddings (PFE) were introduced in the computer vision community as a technique for generating piecewise constant embeddings that make data clustering / image segmentation a straightforward process. In this paper, we show how PFE arises by modifying LE, yielding a constrained ℓ1-minimization problem that can be solved iteratively. Using publicly available data, we carry out experiments to illustrate the implications of applying PFE to pixel-based hyperspectral image clustering and classification.
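
    For orientation, a small sketch of the baseline Laplacian Eigenmaps embedding that PFE modifies; PFE itself replaces the quadratic objective with the iteratively solved ℓ1 problem mentioned above, which is not reproduced here. The data sizes and affinity bandwidth are arbitrary.

```python
# Baseline Laplacian Eigenmaps on simulated hyperspectral pixels.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

X = np.random.rand(500, 30)                   # 500 pixels, 30 spectral bands
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())                   # Gaussian affinity graph
np.fill_diagonal(W, 0.0)                      # no self-loops
L = laplacian(W, normed=True)
vals, vecs = eigh(L)                          # eigenvalues in ascending order
embedding = vecs[:, 1:4]                      # skip the trivial constant eigenvector
```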

  13. Comparative Analysis of Various Image Fusion Techniques For Biomedical Images: A Review

    Directory of Open Access Journals (Sweden)

    Nayera Nahvi,

    2014-05-01

    Full Text Available Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image will be more informative and complete than any of the input images. This paper discusses the implementation of the DWT technique on different images to produce a fused image with greater information content. Since DWT is a more recent image fusion technique than simple image fusion and pyramid-based image fusion, it is adopted as the fusion technique in this paper. Other methods such as Principal Component Analysis (PCA) based fusion, Intensity Hue Saturation (IHS) Transform based fusion and high-pass filtering methods are also discussed. A new algorithm is proposed using the Discrete Wavelet Transform and different fusion techniques, including pixel averaging, min-max and max-min methods, for medical image fusion.
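
    A compact sketch of generic DWT fusion with PyWavelets, not the paper's exact algorithm: approximation coefficients are fused by pixel averaging, detail coefficients by maximum-magnitude selection. The wavelet choice is an assumption, and the inputs must be registered images of equal size.

```python
# One-level DWT fusion of two registered grayscale images.
import numpy as np
import pywt

def dwt_fuse(a, b, wavelet="db2"):
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(a, wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(b, wavelet)
    approx = (ca + cb) / 2.0                          # pixel averaging on approximations
    details = [np.where(np.abs(x) > np.abs(y), x, y)  # max-magnitude detail selection
               for x, y in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b))]
    return pywt.idwt2((approx, tuple(details)), wavelet)
```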

  14. Image sequence processing for videowall visualization

    Science.gov (United States)

    Skarabot, Alessandro; Ramponi, Giovanni; Toffoli, Domenico

    2000-03-01

    A new processing scheme for large high-resolution displays such as videowalls is proposed in this paper. The scheme consists of a deinterlacing, an interpolation and an optional enhancement algorithm; its hardware implementation requires a low computational cost. The deinterlacing algorithm is motion-adaptive. A simple hierarchical three-level motion detector provides indications of static, slow and fast motion to activate a temporal FIR filter, a three-tap vertico-temporal median operator and a spatial FIR filter, respectively. This simple algorithm limits the hardware requirements to three field memories plus a very reduced number of algebraic operations per interpolated pixel. Usually linear techniques such as pixel repetition or the bilinear method are employed for image interpolation, which however either introduce artifacts (e.g. blocking effects) or tend to smooth edges. A higher quality rendition of the image is obtained by using the concept of the Warped Distance among the pixels of an image. The computational load of the proposed approach is very small if compared to that of state-of-the-art nonlinear interpolation operators. Finally, the contrast enhancement algorithm is a modified Unsharp Masking technique: a polynomial function is added to modulate the sharpening signal, which makes it possible to discriminate between noise and signal and, at the same time, provides an appropriate amplification to low-contrast image details.
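
    The modulation idea behind the modified unsharp masking can be sketched as follows; the smoothstep polynomial and kernel size here are illustrative stand-ins for the paper's polynomial function. The sharpening signal is scaled so that weak details, which are more likely noise, receive less amplification.

```python
# Polynomial-modulated unsharp masking, illustrative parameters.
import numpy as np
from scipy.ndimage import uniform_filter

def polynomial_unsharp(img, gain=1.5):
    low = uniform_filter(img, size=5)
    detail = img - low                         # high-frequency sharpening signal
    norm = np.abs(detail) / (np.abs(detail).max() + 1e-9)
    modulation = norm**2 * (3 - 2 * norm)      # smoothstep: suppresses small (noisy) detail
    return img + gain * modulation * detail
```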

  15. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang

    2010-01-01

    Full Text Available Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve the overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing unit and image-compression processing unit are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, in which image compression is performed during the image-capture phase prior to storage, referred to as compressive acquisition. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using different design paradigm are presented at the end.

  16. The effect of image processing on the detection of cancers in digital mammography.

    Science.gov (United States)

    Warren, Lucy M; Given-Wilson, Rosalind M; Wallis, Matthew G; Cooke, Julie; Halling-Brown, Mark D; Mackenzie, Alistair; Chakraborty, Dev P; Bosmans, Hilde; Dance, David R; Young, Kenneth C

    2014-08-01

    OBJECTIVE. The objective of our study was to investigate the effect of image processing on the detection of cancers in digital mammography images. MATERIALS AND METHODS. Two hundred seventy pairs of breast images (both breasts, one view) were collected from eight systems using Hologic amorphous selenium detectors: 80 image pairs showed breasts containing subtle malignant masses; 30 image pairs, biopsy-proven benign lesions; 80 image pairs, simulated calcification clusters; and 80 image pairs, no cancer (normal). The 270 image pairs were processed with three types of image processing: standard (full enhancement), low contrast (intermediate enhancement), and pseudo-film-screen (no enhancement). Seven experienced observers inspected the images, locating and rating regions they suspected to be cancer for likelihood of malignancy. The results were analyzed using a jackknife-alternative free-response receiver operating characteristic (JAFROC) analysis. RESULTS. The detection of calcification clusters was significantly affected by the type of image processing: The JAFROC figure of merit (FOM) decreased from 0.65 with standard image processing to 0.63 with low-contrast image processing (p = 0.04) and from 0.65 with standard image processing to 0.61 with film-screen image processing (p = 0.0005). The detection of noncalcification cancers was not significantly different among the image-processing types investigated (p > 0.40). CONCLUSION. These results suggest that image processing has a significant impact on the detection of calcification clusters in digital mammography. For the three image-processing versions and the system investigated, standard image processing was optimal for the detection of calcification clusters. The effect on cancer detection should be considered when selecting the type of image processing in the future.

  17. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  18. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2012-01-01

    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.

  19. Using the medical image processing package, ImageJ, for astronomy

    CERN Document Server

    West, J L; West, Jennifer L.; Cameron, Ian D.

    2006-01-01

    At the most fundamental level, all digital images are just large arrays of numbers that can easily be manipulated by computer software. Specialized digital imaging software packages often use routines common to many different applications and fields of study. The freely available, platform independent, image-processing package ImageJ has many such functions. We highlight ImageJ's capabilities by presenting methods of processing sequences of images to produce a star trail image and a single high quality planetary image.

  20. Development of the SOFIA Image Processing Tool

    Science.gov (United States)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers is stored in archive files, as is housekeeping data, which contains information such as boresight and area of interest locations. A tool that could both extract and process data from the archive files was developed.

  1. High Throughput Multispectral Image Processing with Applications in Food Science

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing’s outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples. PMID:26466349

  2. Image analysis in medical imaging: recent advances in selected examples

    Science.gov (United States)

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments. PMID:21611048

  3. Survey: interpolation methods in medical image processing.

    Science.gov (United States)

    Lehmann, T M; Gönner, C; Spitzer, K

    1999-11-01

    Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be among the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X rays was used. They had been selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods (p < 0.005). The cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest
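
    Two kernels from the compared families, evaluated as plain functions: Keys' cubic convolution kernel and a 4-term Blackman-Harris windowed sinc. The parameter a = -0.5 and the support N = 6 are common choices, assumed here rather than taken from the survey's tuning.

```python
# Two 1-D interpolation kernels from the families compared in the survey.
import numpy as np

def cubic_keys(x, a=-0.5):
    """Keys' cubic convolution kernel with free parameter a."""
    x = np.abs(x)
    return np.where(x < 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
           np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def windowed_sinc(x, N=6):
    """Normalized sinc under a centered 4-term Blackman-Harris window of support N."""
    w = np.where(np.abs(x) <= N / 2,
                 0.35875 + 0.48829 * np.cos(2 * np.pi * x / N)
                 + 0.14128 * np.cos(4 * np.pi * x / N)
                 + 0.01168 * np.cos(6 * np.pi * x / N), 0.0)
    return np.sinc(x) * w
```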

  4. A New Image Processing and GIS Package

    Science.gov (United States)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be popular, influential and powerful in the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose, having been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  5. Recent developments at JPL in the application of digital image processing techniques to astronomical images

    Science.gov (United States)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.

    1976-01-01

    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
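
    Two of the operations described reduce to simple array manipulations; a sketch assuming the digitized plate scans are available as NumPy arrays (the file names and the Gaussian width are placeholders):

```python
# High-pass filtering to lift faint nebulosity, and a ratio image from two plates.
import numpy as np
from scipy.ndimage import gaussian_filter

plate_blue = np.load("scan_blue.npy").astype(float)
plate_red = np.load("scan_red.npy").astype(float)

highpass = plate_blue - gaussian_filter(plate_blue, sigma=25)  # remove smooth background
ratio = plate_blue / (plate_red + 1e-6)   # relative color differences between plates
```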

  6. Asphalt Mixture Segregation Detection: Digital Image Processing Approach

    Directory of Open Access Journals (Sweden)

    Mohamadtaqi Baqersad

    2017-01-01

    Full Text Available Segregation determination in the asphalt pavement is an issue causing many disputes between agencies and contractors. The visual inspection method has commonly been used to determine pavement texture, with the in-place core density test used for verification. Furthermore, laser-based devices, such as the Florida Texture Meter (FTM) and the Circular Track Meter (CTM), have recently been developed to evaluate the asphalt mixture texture. In this study, an innovative digital image processing approach is used to determine pavement segregation. In this procedure, the standard deviation of the grayscale image frequency histogram is used to determine segregated regions. Linear Discriminant Analysis (LDA) is then implemented on the standard deviations obtained from image processing to classify pavements into segregated and non-segregated areas. The visual inspection method is utilized to verify this method. The results have demonstrated that this new method is a robust tool to determine segregated areas in newly paved FC9.5 pavement types.

  7. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  8. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  9. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  10. Medical Image Analysis by Cognitive Information Systems - a Review.

    Science.gov (United States)

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes are shown as they are applied to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information - medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data; meaning is carried by information, for example by medical images. Medical image analysis is presented and discussed as it is applied to various types of medical images showing selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. This process is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from analyzed data sets; those features enable a new way of analysis.

  11. Visualization of Parameter Space for Image Analysis

    Science.gov (United States)

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361

  12. Amateur Image Pipeline Processing using Python plus PyRAF

    Science.gov (United States)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observing planning to publishing is offered as a basis for establishing a long term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework. The framework quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

  13. Image analysis of self-organized multicellular patterns

    Directory of Open Access Journals (Sweden)

    Thies Christian

    2016-09-01

    Full Text Available Analysis of multicellular patterns is required to understand tissue organizational processes. By using a multi-scale object oriented image processing method, the spatial information of cells can be extracted automatically. Instead of manual segmentation or indirect measurements, such as general distribution of contrast or flow, the orientation and distribution of individual cells is extracted for quantitative analysis. Relevant objects are identified by feature queries and no low-level knowledge of image processing is required.

  14. Digital image processing of mandibular trabeculae on radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Ogino, Toshi

    1987-06-01

    The present study aimed to reveal the texture patterns of radiographs of the mandibular trabeculae by digital image processing. The 32 cases of normal subjects and the 13 cases of patients with mandibular diseases of ameloblastoma, primordial cysts, squamous cell carcinoma and odontoma were analyzed through their intra-oral radiographs in the right premolar regions. The radiograms were digitized by the use of a drum scanner densitometry method. The input radiographic images were processed by a histogram equalization method. The results are as follows: First, the histogram equalization method enhances the image contrast of the textures. Second, the output images of the textures for normal mandible-trabeculae radiograms are of network pattern in nature. Third, the output images for the patients are characterized by a non-network pattern, replaced by patterns of fabric texture, intertwined plants (karakusa pattern), scattered small masses and amorphous texture. Thus, these results indicate that the present digital image system is expected to be useful for revealing the texture patterns of radiographs and, in the future, for the texture analysis of clinical radiographs to obtain quantitative diagnostic findings.
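
    Histogram equalization, the processing step used here, in its standard textbook form for an 8-bit grayscale array (a generic implementation, not the study's scanner-specific code):

```python
# Plain histogram equalization for an 8-bit grayscale image.
import numpy as np

def equalize(img8):
    hist = np.bincount(img8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    return (cdf[img8] * 255).astype(np.uint8)            # map grey levels through the CDF
```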

  15. Target rotation parameter estimation for ISAR imaging via frame processing

    Institute of Scientific and Technical Information of China (English)

    Xuezhi Wang; Yajing Huang; Weiping Yang; Bill Moran

    2016-01-01

    The frame processing method offers a model-based approach to Inverse Synthetic Aperture Radar (ISAR) imaging. It also provides a way to estimate the rotation rate of a non-cooperative target from radar returns via the frame operator properties. In this paper, the relationship between the best achievable ISAR image and the image reconstructed from radar returns is derived in the framework of Finite Frame Processing theory. We show that image defocusing caused by the use of an incorrect target rotation rate is interpreted under the FP method as a frame operator mismatch problem which causes energy dispersion. The unknown target rotation rate may be computed by optimizing the frame operator via a prominent point. Consequently, a prominent intensity maximization method in the FP framework is proposed to estimate the underlying target rotation rate from radar returns. In addition, an image filtering technique is implemented to assist the search for a prominent point in practice. The proposed method is justified via a simulation analysis of the performance of FP imaging versus target rotation rate error. The effectiveness of the proposed method is also confirmed with real ISAR data experiments.

  16. Image processing to optimize wave energy converters

    Science.gov (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently are they becoming a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a device either onshore or offshore that captures the energy within ocean surface waves and uses it to drive a mechanical generator. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
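
    The band-energy idea can be illustrated with a plain 2-D FFT in place of the complex modulated lapped transform (a deliberate simplification): locate the dominant spatial-frequency peak and project it onto the converter's heading by the trigonometric scaling mentioned above. Function and parameter names are mine.

```python
# Dominant wave frequency from a sea-surface image patch via a 2-D FFT peak.
import numpy as np

def dominant_wave_frequency(image, dx, heading_rad):
    """image: grayscale sea-surface patch; dx: metres per pixel; heading: WEC direction."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))  # DC removed
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0], d=dx))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1], d=dx))
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    # project the 2-D peak onto the direction faced by the wave energy converter
    return fx[ix] * np.cos(heading_rad) + fy[iy] * np.sin(heading_rad)
```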

  17. Image processing for safety assessment in civil engineering.

    Science.gov (United States)

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
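
    A sketch of the measurement core under stated assumptions (binarized input frames, known approximate ballast radius; function names are mine): morphological opening removes speckle, a circular Hough transform locates the ballast, and finite differences over frames yield velocity and acceleration.

```python
# Locate a circular object per frame; differencing positions gives kinematics.
import numpy as np
from skimage import morphology, feature, transform

def locate_ballast(frame_bin, radius=20):
    clean = morphology.opening(frame_bin, morphology.disk(3))  # remove speckle
    edges = feature.canny(clean.astype(float))
    acc = transform.hough_circle(edges, radius)[0]             # accumulator for one radius
    y, x = np.unravel_index(np.argmax(acc), acc.shape)
    return x, y

# positions per frame -> finite-difference kinematics (dt = 1 / frame rate):
#   v = np.gradient(positions, dt); a = np.gradient(v, dt)
```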

  18. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  19. Diagnosis of skin cancer using image processing

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper a methodology for classifying skin cancer in images of dermatologic spots, based on spectral analysis using the K-law Fourier non-linear technique, is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains more information and therefore this channel is always chosen. This information is point-to-point multiplied by a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value falls within the spectral index range, three types of cancer can be detected. Values found outside this range correspond to benign lesions.
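
    The spectral-index computation reads directly as array operations; a sketch with an assumed nonlinearity strength k (the paper's exact value is not given here), using the amplitude-to-the-power-k, phase-preserving form of the K-law filter:

```python
# Spectral index of a masked dermatologic spot via a K-law nonlinear spectrum.
import numpy as np

def spectral_index(rgb, mask, k=0.3):
    green = rgb[..., 1].astype(float)          # green channel carries most information
    roi = green * mask                         # point-to-point multiplication by the mask
    F = np.fft.fft2(roi)
    Fk = np.abs(F) ** k * np.exp(1j * np.angle(F))   # K-law: |F|^k with phase preserved
    density = (Fk.real > 0).astype(float)      # unit values where the real part is positive
    return density.sum() / mask.sum()          # the spectral index
```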

  20. Process Analysis Via Accuracy Control

    Science.gov (United States)

    1982-02-01

    February 1982. Process Analysis Via Accuracy Control. The National Shipbuilding Research Program, U.S. Department of Transportation, Maritime Administration. Examples are contained in Appendix C, including examples of how "A/C" process analysis leads to design improvement and how a change in sequence can

  1. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  2. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  3. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital; Bildgebung bei wirbelsaeulenverletzten Kindern und jungen Erwachsenen. Eine Analyse von Umfeld, Standards und Wiederholungsuntersuchungen bei Patientenverlegungen

    Energy Technology Data Exchange (ETDEWEB)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M. [Berufsgenossenschaftliches Universitaetsklinikum Bergmannshell, Bochum (Germany). Inst. fuer Diagnostische Radiologie, Interventionelle Radiologie und Nuklearmedizin

    2012-09-15

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  4. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool from DNA
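
    The naive core of lane and band detection can be sketched with SciPy peak finding (thresholds, distances and the inversion step are assumptions; GELect's handling of lane distortion and doublets is far more involved):

```python
# Naive lane/band detection: lanes from column-profile peaks, bands from row peaks.
import numpy as np
from scipy.signal import find_peaks

gel = 255.0 - np.load("gel.npy").astype(float)     # invert so bands are bright peaks
col_profile = gel.sum(axis=0)
lanes, _ = find_peaks(col_profile, distance=30)    # approximate lane centres

lane_strip = gel[:, lanes[0] - 10:lanes[0] + 10].sum(axis=1)   # first lane's row profile
bands, _ = find_peaks(lane_strip, prominence=lane_strip.max() * 0.1)  # band positions
```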

  5. Simultaneous multi-parametric analysis of Leishmania and of its hosting mammal cells: A high content imaging-based method enabling sound drug discovery process.

    Science.gov (United States)

    Forestier, Claire-Lise; Späth, Gerald Frank; Prina, Eric; Dasari, Sreekanth

    2015-11-01

    Leishmaniasis is a vector-borne disease for which only limited therapeutic options are available. The disease is ranked among the six most important tropical infectious diseases and represents the second-largest parasitic killer in the world. The development of new therapies has been hampered by the lack of technologies and methodologies that can be integrated into the complex physiological environment of a cell or organism and adapted to suitable in vitro and in vivo Leishmania models. Recent advances in microscopy imaging offer the possibility to assess the efficacy of potential drug candidates against Leishmania within host cells. This technology allows the simultaneous visualization of relevant phenotypes in parasite and host cells and the quantification of a variety of cellular events. In this review, we present the powerful cellular imaging methodologies that have been developed for drug screening in a biologically relevant context, addressing both high-content and high-throughput needs. Furthermore, we discuss the potential of intra-vital microscopy imaging in the context of the anti-leishmanial drug discovery process.

  6. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  7. The Accuratre Signal Model and Imaging Processing in Geosynchronous SAR

    Science.gov (United States)

    Hu, Cheng

    With the development of synthetic aperture radar (SAR) applications, the disadvantages of low earth orbit (LEO) SAR have become more and more apparent. An increase in orbit altitude can shorten the revisit time and enlarge the coverage area in a single look, and thereby satisfy application requirements. The concept of a geosynchronous earth orbit (GEO) SAR system was first presented and deeply discussed by K. Tomiyasu and other researchers. A GEO SAR, with its fine temporal resolution, would overcome the limitations of current imaging systems, allowing dense interpretation of transient phenomena, such as GPS time-series analysis, with a spatial density several orders of magnitude finer. Until now, the related literature on GEO SAR has mainly focused on system parameter design and application requirements; as for signal characteristics, resolution calculation and imaging algorithms, the literature is nearly blank. In LEO SAR, signal model analysis generally adopts the `Stop-and-Go' assumption, and this assumption satisfies the imaging requirements of present advanced SAR systems, such as TerraSAR and Radarsat2. However, because of the long propagation distance and non-negligible earth rotation, the `Stop-and-Go' assumption does not hold and will cause a large propagation distance error, which affects image formation. Furthermore, the long propagation distance results in a long synthetic aperture time, of the order of hundreds of seconds; therefore the linear trajectory model used in LEO SAR imaging fails in GEO imaging, and a new imaging model needs to be proposed for GEO SAR imaging processing. In this paper, considering the relative motion between satellite and earth during the signal propagation time, an accurate analysis method for the propagation slant range is first presented. Furthermore, the difference between the accurate analysis method and the `Stop-and-Go' assumption is analytically obtained. Meanwhile, based on the derived

  8. Research on Defects Detection by Image Processing of Thermographic Images

    Directory of Open Access Journals (Sweden)

    Shrestha Ranjit

    2015-10-01

    This paper presents the results of an experimental investigation of thermal phenomena in a square (180 mm × 180 mm) STS 304 specimen, 10 mm thick, with artificial defects in the form of circular cut-outs of varying depth and diameter on the back side. The specimen is tested by means of thermal-wave thermography, with lock-in thermography employed for defect detection. The temperature field of the front surface of the tested material is observed and analysed. A four-point correlation algorithm is applied to extract the phase angle of the thermal wave's harmonic component, and the phase images are analysed for qualitative information about the defects. The phase-contrast method is used for better identification and analysis of the existing defects in the specimen.
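
    The four-point correlation algorithm mentioned here is commonly implemented from four temperature samples taken at quarter-period intervals of the lock-in modulation; the phase and amplitude of the harmonic component then follow from simple differences. A minimal sketch with synthetic data (sample indexing conventions vary between authors; this is one common form, not necessarily the paper's exact implementation):

      import numpy as np

      def four_point_phase(s1, s2, s3, s4):
          """Phase and amplitude of the lock-in frequency component from four
          samples taken at quarter-period intervals (the 4-bucket formulas)."""
          phase = np.arctan2(s1 - s3, s2 - s4)
          amplitude = 0.5 * np.sqrt((s1 - s3) ** 2 + (s2 - s4) ** 2)
          return phase, amplitude

      # Synthetic check at one pixel: a thermal wave with known phase.
      f_lockin, true_phase = 0.1, 0.7                  # Hz and rad, illustrative
      t = np.arange(4) / (4.0 * f_lockin)              # four samples over one period
      samples = 5.0 + 2.0 * np.sin(2 * np.pi * f_lockin * t + true_phase)
      phase, amp = four_point_phase(*samples)
      print(phase, amp)                                # ~0.7 rad and ~2.0 recovered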

  9. Using the medical image processing package, ImageJ, for astronomy

    OpenAIRE

    2006-01-01

    At the most fundamental level, all digital images are just large arrays of numbers that can easily be manipulated by computer software. Specialized digital-imaging software packages often use routines common to many different applications and fields of study. The freely available, platform-independent image-processing package ImageJ has many such functions. We highlight ImageJ's capabilities by presenting methods of processing sequences of images to produce a star trail image and a single hi...
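
    The star-trail method the authors describe amounts to a per-pixel maximum over the image stack, which ImageJ exposes as a maximum-intensity Z-projection (Image > Stacks > Z Project). A minimal stand-alone equivalent in Python (the file names are hypothetical):

      from glob import glob

      import numpy as np
      from PIL import Image

      # Per-pixel maximum across a night's exposures: moving stars sweep out
      # trails while static content keeps its single brightest value.
      frames = sorted(glob("night_sky_*.png"))   # hypothetical file names
      stack = np.stack([np.asarray(Image.open(f).convert("L")) for f in frames])
      trail = stack.max(axis=0)                  # equivalent to ImageJ's Max Z-projection
      Image.fromarray(trail).save("star_trails.png")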

  10. Tracker: Image-Processing and Object-Tracking System Developed

    Science.gov (United States)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for later analysis. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera, and it controls the image source to advance automatically to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold- and correlation-based techniques as well as more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, so every attempt was made to make it as user-friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed at Lewis and by visiting researchers, including experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. The software automates the analysis of the flame's or liquid's physical parameters, such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements, and it can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in
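
    Tracker's own source is not reproduced here, but the conventional threshold-based tracking it mentions can be sketched in a few lines: binarize each frame and follow the centroid of the bright region. A minimal sketch with synthetic frames and an illustrative threshold:

      import numpy as np

      def centroid_track(frames, threshold=128):
          """Per-frame centroid of all pixels above threshold: the simplest of
          the threshold-based tracking techniques the abstract mentions."""
          positions = []
          for frame in frames:
              ys, xs = np.nonzero(frame > threshold)
              if len(xs) == 0:
                  positions.append(None)            # object lost in this frame
              else:
                  positions.append((xs.mean(), ys.mean()))
          return positions

      # Synthetic sequence: a bright 3x3 blob drifting to the right.
      frames = []
      for k in range(5):
          f = np.zeros((64, 64), dtype=np.uint8)
          f[30:33, 10 + 4 * k:13 + 4 * k] = 255
          frames.append(f)
      print(centroid_track(frames))   # x-centroid advances by 4 pixels per frame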

  11. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment

    Directory of Open Access Journals (Sweden)

    Meng Kuan Lin

    2013-07-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent, and data handling and transmission between the server and the user to be a systematic process of service interpretation. The use of integrated medical services for managing and viewing imaging data, in combination with a mobile visualization tool, can greatly facilitate data analysis and interpretation. This paper presents an integrated mobile application and digital imaging processing service called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIfTI) formats to RAW format; (2) speed up queries of imaging measurements; and (3) display images at a high level of detail in three dimensions in real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. The software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world-coordinate browsing; this allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In combination with mobile applications, this system establishes a virtualization tool for the neuroinformatics field that speeds up interpretation services.
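
    As an illustration of conversion step (1), the following minimal sketch reads a NIfTI volume and writes both a RAW dump and web-map-style tiles of one slice, using the nibabel and Pillow libraries. The file names and the simple tiling scheme are assumptions for illustration, not M-DIP's actual pipeline:

      import numpy as np
      import nibabel as nib            # reads NIfTI (and MINC) neuroimaging volumes
      from PIL import Image

      vol = nib.load("brain.nii.gz")   # hypothetical input file
      data = vol.get_fdata()           # float voxel array, shape (x, y, z)

      # RAW export: scale to 8 bits and dump the bytes.
      lo, hi = data.min(), data.max()
      raw = ((data - lo) / (hi - lo) * 255).astype(np.uint8)
      raw.tofile("brain.raw")

      # Pre-tile one axial slice into 256x256 tiles for map-style browsing.
      z = data.shape[2] // 2
      sl = raw[:, :, z]
      T = 256
      for i in range(0, sl.shape[0], T):
          for j in range(0, sl.shape[1], T):
              Image.fromarray(sl[i:i + T, j:j + T]).save(f"tile_{i}_{j}.png")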

  12. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.

  13. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field, with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques to real-world problems. The book presents state-of-the-art image-processing methodology, including current industrial practices for image compression and image de-noising...

  14. Seam tracking with texture based image processing for laser materials processing

    Science.gov (United States)

    Krämer, S.; Fiedler, W.; Drenker, A.; Abels, P.

    2014-02-01

    This presentation deals with a camera-based seam-tracking system for laser materials processing. A digital high-speed camera, coaxially integrated into the laser beam path, records the interaction point and the illuminated workpiece surface. The aim is to observe the interaction point and the joint gap in one image for closed-loop control of the welding process. For the joint-gap observation in particular, a new image-processing method is used. The basic idea is to detect a difference between the textures of the surfaces of the two workpieces to be welded together, instead of looking for the nearly invisible narrow line imaged by the joint gap itself. The texture-based analysis of the workpiece surface is more robust and less affected by varying illumination conditions than conventional grey-scale image processing, and in some cases it enables true zero-gap seam tracking. In short, the economic benefit is simultaneous laser processing and seam tracking for self-calibrating laser-welding applications, without special seam preparation for tracking.
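
    One simple way to realize the texture contrast described above is a local-variance map: if the two workpiece surfaces differ in roughness, the column where the variance statistics change marks the joint. The sketch below works under that assumption with synthetic textures; it illustrates the idea, not the authors' algorithm:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_variance(img, size=9):
          """Per-pixel variance in a size x size window: a simple texture measure."""
          mean = uniform_filter(img, size)
          mean_sq = uniform_filter(img * img, size)
          return mean_sq - mean * mean

      # Synthetic frame: left workpiece coarsely textured, right one smoother.
      rng = np.random.default_rng(1)
      left = rng.normal(128.0, 30.0, (100, 60))
      right = rng.normal(128.0, 5.0, (100, 60))
      frame = np.hstack([left, right])

      var = local_variance(frame)
      profile = var.mean(axis=0)                 # average texture per column
      seam_col = int(np.argmax(np.abs(np.diff(profile))))
      print(f"estimated joint position: column {seam_col}")   # near column 60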

  15. Quantitative image analysis of celiac disease.

    Science.gov (United States)

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-07

    We outline the quantitative techniques currently used for the analysis of celiac disease. Image-processing techniques can be used to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions of focus for the development of methodology for the diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  16. Quantitative image analysis of celiac disease

    Science.gov (United States)

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the quantitative techniques currently used for the analysis of celiac disease. Image-processing techniques can be used to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions of focus for the development of methodology for the diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  17. Dynamic analysis of process reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shadle, L.J.; Lawson, L.O.; Noel, S.D.

    1995-06-01

    The approach and methodology of conducting a dynamic analysis are presented in this poster session to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas™ gasification process is used to illustrate the utility of the approach. PyGas™ is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Sirrine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities of the process. For the PyGas™ gasifier, the process models are nonlinear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key inputs, in the form of gain parameters or transfer functions, to the dynamic engineering models.
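
    To make "process sensitivities ... in the form of gain parameters or transfer functions" concrete, the following minimal sketch numerically linearizes an invented steady-state model to obtain a gain, then steps a first-order transfer function built from it. The model and all numbers are illustrative, not PyGas™ values:

      import numpy as np

      def steady_state_temp(fuel_rate):
          """Invented nonlinear steady-state model: reactor temperature vs. fuel rate."""
          return 600.0 + 180.0 * np.sqrt(fuel_rate)

      # Gain parameter: sensitivity dT/du at the operating point, by finite difference.
      u0, du = 4.0, 1e-4
      gain = (steady_state_temp(u0 + du) - steady_state_temp(u0 - du)) / (2 * du)

      # First-order transfer function G(s) = gain / (tau*s + 1): response to a unit step.
      tau, dt = 50.0, 0.5            # time constant [s] and integration step (invented)
      t = np.arange(0.0, 300.0, dt)
      y = np.zeros_like(t)
      for k in range(1, len(t)):     # explicit Euler on tau*dy/dt = gain*1 - y
          y[k] = y[k - 1] + dt / tau * (gain * 1.0 - y[k - 1])

      print(f"gain = {gain:.1f} K per unit fuel rate; settles near {y[-1]:.1f} K")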

  18. Improving Seismic Image with Advanced Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mericy Lastra Cunill

    2012-07-01

    Taking into account the need to improve the seismic image of the central area of Cuba, specifically the Venegas sector located in the Cuban Folded Belt, the seismic data acquired by Cuba Petróleo (CUPET) in 2007 were reprocessed in light of the experience accumulated during the previous processing carried out that same year and of new geological knowledge of the area, with the objective of improving the results. The previously applied processing was analyzed and the primary data were reprocessed with new approaches and procedures, among them: attenuation of the surface wave with a linear Radon-domain filter; replacement of the primary elevation statics corrections with refraction statics; velocity analysis with automatic high-density bispectral picking; anisotropy analysis; random-noise attenuation; and pre-stack time and depth migration. As a result of this reprocessing, a structure not identified in the seismic sections of the previous processing was located at the top of continental-margin sediments to the north of the sector, increasing the potential for finding hydrocarbons in economically significant quantities and thus diminishing the drilling risk in the Venegas sector.
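
    The linear Radon-domain filtering step relies on the slant stack: the gather is mapped into the tau-p (intercept-slowness) domain, where linear surface-wave events focus at their slowness and can be muted. A minimal forward slant-stack sketch on a synthetic gather (not the CUPET data):

      import numpy as np

      def linear_radon(gather, offsets, dt, slownesses):
          """Forward slant stack: for each slowness p, sum traces along t = tau + p*x.
          Linear events concentrate at their (tau, p), which is what makes
          surface-wave muting effective in this domain."""
          nt, _ = gather.shape
          taup = np.zeros((nt, len(slownesses)))
          for ip, p in enumerate(slownesses):
              for ix, x in enumerate(offsets):
                  shift = int(round(p * x / dt))        # moveout in samples
                  if 0 <= shift < nt:
                      taup[: nt - shift, ip] += gather[shift:, ix]
          return taup

      # Synthetic gather: one linear event with slowness 0.5 ms/m across 48 traces.
      dt, offsets = 0.002, np.arange(48) * 25.0         # 2 ms sampling, 25 m spacing
      gather = np.zeros((500, 48))
      for ix, x in enumerate(offsets):
          gather[int(0.5e-3 * x / dt) + 50, ix] = 1.0   # t = 0.1 s + 0.5 ms/m * x
      p_axis = np.linspace(0.0, 1.0e-3, 101)            # slowness scan, s/m
      taup = linear_radon(gather, offsets, dt, p_axis)
      print(p_axis[np.argmax(taup.max(axis=0))])        # peaks at p = 5e-4 s/m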

  19. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice, and this transition brings interest in advancing the methodologies for image-quality characterization. However, as these methodologies have not been standardized, the results of such studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image-quality characterization; the secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image-processing algorithm. The image-performance parameters MTF, NPS, and DQE were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE, and the post-processed images had a higher DQE than the MUSICA = 0 image. This suggests that MUSICA values, applied as post-processing, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way, and the results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
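
    For reference, the quantities measured here combine through the standard IEC relation DQE(f) = S² · MTF(f)² / (q · NPS(f)) for a linearized detector, where S is the large-area mean signal and q the incident photon fluence. A minimal numerical sketch with invented curves (for a real RQA5 beam, q is approximately 30,000 photons per mm² per µGy):

      import numpy as np

      def dqe(mtf, nps, mean_signal, fluence):
          """DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)) for a linearized detector."""
          return mean_signal ** 2 * mtf ** 2 / (fluence * nps)

      # Invented example curves on a spatial-frequency axis up to 3.5 cycles/mm.
      f = np.linspace(0.05, 3.5, 70)
      mtf = np.sinc(f / 3.5)                # illustrative MTF roll-off
      nps = 2.0e-7 * (1.0 + 0.2 * f)        # illustrative NPS, signal^2 * mm^2
      S, q = 0.05, 30_000.0                 # mean signal; photons per mm^2 (invented)
      print(dqe(mtf, nps, S, q)[:3])        # low-frequency DQE ~0.4, dimensionless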

  20. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the Canon EOS 350D digital camera, often used for photogrammetric 3D digitization and measurement of industrial and construction-site objects. During the calibration, data on the optical and electronic parameters influencing image distortion were obtained: correction of the principal point, focal length of the objective, and radially symmetric and non-symmetric distortions. The calibration was performed by means of the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research was to determine how the camera calibration parameters influence image processing, i.e. the creation of the geometric model, the results of triangulation calculations, and stereo-digitization. Two photogrammetric projects were created for this task: in the first, uncorrected images were used; in the second, images corrected for the optical errors of the camera obtained during calibration. The results of the image-processing analysis are shown in figures and tables, and conclusions are given.
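
    The radially symmetric part of the distortion model discussed above is conventionally written x_u = x_d · (1 + k1·r² + k2·r⁴) about the principal point. A minimal sketch that corrects a few image points under that first-order model (the coefficients are invented, not the calibrated Canon EOS 350D values):

      import numpy as np

      def undistort_points(pts, principal_point, k1, k2):
          """Correct radially symmetric lens distortion:
          x_u = c + (x_d - c) * (1 + k1*r^2 + k2*r^4), with r measured from the
          principal point c (first-order model; non-symmetric terms omitted)."""
          d = pts - principal_point
          r2 = np.sum(d * d, axis=1, keepdims=True)
          return principal_point + d * (1.0 + k1 * r2 + k2 * r2 * r2)

      # Invented numbers: principal point near image centre, mild barrel distortion.
      c = np.array([1152.0, 768.0])                     # pixels
      k1, k2 = -2.0e-8, 1.0e-15                         # illustrative coefficients
      pts = np.array([[100.0, 100.0], [2200.0, 1500.0], [1152.0, 768.0]])
      print(undistort_points(pts, c, k1, k2))           # centre point is unchanged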