WorldWideScience

Sample records for rapid image processing

  1. Visual processing in rapid-chase systems: Image processing, attention, and awareness

    Directory of Open Access Journals (Sweden)

    Thomas eSchmidt

    2011-07-01

    Full Text Available Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed towards target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention to positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. In this way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that "fast" visuomotor measures predominantly driven by feedforward processing should supplement "slow" psychophysical measures predominantly based on visual ...

  2. Rapid visuomotor processing of phobic images in spider- and snake-fearful participants.

    Science.gov (United States)

    Haberkamp, Anke; Schmidt, Filipp; Schmidt, Thomas

    2013-10-01

    This study investigates enhanced visuomotor processing of phobic compared to fear-relevant and neutral stimuli. We used a response priming design to measure rapid, automatic motor activation by natural images (spiders, snakes, mushrooms, and flowers) in spider-fearful, snake-fearful, and control participants. We found strong priming effects in all tasks and conditions; however, results showed marked differences between groups. Most importantly, in the group of spider-fearful individuals, spider pictures had a strong and specific influence on even the fastest motor responses: Phobic primes entailed the largest priming effects, and phobic targets accelerated responses, both effects indicating speeded response activation by phobic images. In snake-fearful participants, this processing enhancement for phobic material was less pronounced and extended to both snake and spider images. We conclude that spider phobia leads to enhanced processing capacity for phobic images. We argue that this is enabled by long-term perceptual learning processes.

  3. Rapid implementation of image processing onto FPGA using modular DSP C6201 VHDL model

    Science.gov (United States)

    Brost, V.; Yang, F.; Paindavoine, M.; Liu, X. J.

    2007-01-01

    Recent FPGA chips, with their large-capacity memory and reconfigurability, have opened new frontiers for rapid prototyping of embedded systems. With the advent of high-density FPGAs it is now feasible to implement a high-performance VLIW processor core in an FPGA. We describe research results on enabling the DSP TMS320 C6201 model for real-time image processing applications by exploiting FPGA technology. The goals are, firstly, to keep the flexibility of a DSP in order to shorten the development cycle, and secondly, to make maximal use of the FPGA's resources in order to increase real-time performance. We present a modular DSP C6201 VHDL model which contains only the minimum set of instruction modules necessary for each target application. This allows an optimal implementation on the FPGA. Some common image processing algorithms were created and validated on an FPGA VirtexII-2000 multimedia board using the proposed application development cycle. Our results demonstrate that an algorithm can easily be specified, automatically converted to VHDL, and implemented on an FPGA device in an optimal manner with system-level software.

  4. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit capability, the 2-m panchromatic and 8-m multispectral images it captures have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in response to international disasters. The institutions involved include Taiwan's space agency, the National Space Organization (NSPO); the Center for Satellite Remote Sensing Research of National Central University; the GIS Center of Feng-Chia University; and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking satellite operations, and downlinking to ground stations, through image processing (data ingestion and ortho-rectification), to delivery of image products. Drawing on lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and downloading images. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, a recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  5. AUTOMATED FORMOSAT IMAGE PROCESSING SYSTEM FOR RAPID RESPONSE TO INTERNATIONAL DISASTERS

    Directory of Open Access Journals (Sweden)

    M. C. Cheng

    2016-06-01

    Full Text Available FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit capability, the 2-m panchromatic and 8-m multispectral images it captures have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in response to international disasters. The institutions involved include Taiwan's space agency, the National Space Organization (NSPO); the Center for Satellite Remote Sensing Research of National Central University; the GIS Center of Feng-Chia University; and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking satellite operations, and downlinking to ground stations, through image processing (data ingestion and ortho-rectification), to delivery of image products. Drawing on lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and downloading images. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, a recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  6. Image processing

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan; Blanken, Henk; Vries de, A.P.; Blok, H.E.; Feng, L; Feng, L.

    2007-01-01

    The field of image processing addresses handling and analysis of images for many purposes using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by ...

  7. Process for rapid detection of fratricidal defects on optics using Linescan Phase Differential Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Ravizza, F L; Nostrand, M C; Kegelmeyer, L M; Hawley, R A; Johnson, M A

    2009-11-05

    Phase defects on optics used in high-power lasers can cause light intensification leading to laser-induced damage of downstream optics. We introduce Linescan Phase Differential Imaging (LPDI), a large-area dark-field imaging technique able to identify phase defects in the bulk or surface of large-aperture optics with a 67-second scan time. Potential phase defects in the LPDI images are identified by an image analysis code and measured with a Phase Shifting Diffraction Interferometer (PSDI). The PSDI data are used to calculate each defect's potential for downstream damage using an empirical laser-damage model that incorporates a laser propagation code. A ray-tracing model of LPDI was developed to enhance our understanding of its phase-defect detection mechanism and reveal its limitations.

  8. Rapid FLIM: The new and innovative method for ultra-fast imaging of biological processes (Conference Presentation)

    Science.gov (United States)

    Orthaus-Mueller, Sandra; Kraemer, Benedikt; Tannert, Astrid; Roehlicke, Tino; Wahl, Michael; Rahn, Hans-Juergen; Koberling, Felix; Erdmann, Rainer

    2017-02-01

    Over the last two decades, time-resolved fluorescence microscopy has become an essential tool in the Life Sciences thanks to measurement procedures such as Fluorescence Lifetime Imaging (FLIM), lifetime-based Foerster Resonance Energy Transfer (FRET), and Fluorescence (Lifetime) Correlation Spectroscopy (F(L)CS) down to the single-molecule level. Today, complete turn-key systems are available either as stand-alone units or as upgrades for confocal laser scanning microscopes (CLSM). Data acquisition on such systems is typically based on Time-Correlated Single Photon Counting (TCSPC) electronics along with picosecond pulsed diode lasers as excitation sources and highly sensitive, single-photon counting detectors. Up to now, TCSPC data acquisition has been considered a somewhat slow process, as a large number of photons per pixel is required for reliable data analysis, making it difficult to use FLIM for following fast FRET processes, such as signal transduction pathways in cells or fast-moving sub-cellular structures. We present here a novel and elegant solution to tackle this challenge. Our approach, named rapidFLIM, exploits recent hardware developments such as TCSPC modules with ultra-short dead times and hybrid photomultiplier detector assemblies enabling significantly higher detection count rates. Thanks to these improved components, it is possible to achieve much better photon statistics in significantly shorter time spans while performing FLIM imaging of fast processes with high optical resolution. FLIM imaging can now be performed at up to several frames per second, making it possible to study fast processes such as protein interactions involved in endosome trafficking.
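
    The speed gain described above comes from needing fewer photons per pixel. For a mono-exponential decay, the fluorescence lifetime equals the mean photon arrival time after the excitation pulse, which is the basis of common "fast FLIM" estimators. A minimal sketch of that estimator (a generic illustration, not PicoQuant's implementation; the 2.5 ns lifetime is an arbitrary example value):

```python
import numpy as np

def flim_lifetime(arrival_times_ns):
    """Fast-FLIM estimator: for a mono-exponential decay, the mean photon
    arrival time (relative to the excitation pulse) equals the lifetime tau."""
    return np.asarray(arrival_times_ns, dtype=float).mean()

# Synthetic TCSPC photon arrival times for a fluorophore with tau = 2.5 ns.
rng = np.random.default_rng(0)
photons = rng.exponential(scale=2.5, size=50_000)
tau_hat = flim_lifetime(photons)   # close to 2.5 ns
```

    With ~50,000 photons the estimate is accurate to roughly 0.01 ns; the point of short dead times is that such photon counts arrive in a fraction of the usual acquisition time.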

  9. Rapid thermal processing of semiconductors

    CERN Document Server

    Borisenko, Victor E

    1997-01-01

    Rapid thermal processing has contributed to the development of single-wafer cluster processing tools and other innovations in integrated circuit manufacturing environments. Borisenko and Hesketh review theoretical and experimental progress in the field, discussing a wide range of materials, processes, and conditions. They thoroughly cover the work of international investigators in the field.

  10. Rapid 'on-line' image processing as a tool in the evaluation of kinetic and morphological aspects of receptor-induced cell activation.

    Science.gov (United States)

    Theler, J M; Wollheim, C B; Schlegel, W

    1991-01-01

    Transmembrane signalling involves rapid and spatially well defined changes in cytosolic free Ca2+, [Ca2+]i. Specific technologies involving image processing permit the analysis of kinetic and morphological aspects of [Ca2+]i at the subcellular level with the fluorescent Ca2+ probe fura-2. Fluorescence excitation wavelengths (340 nm or 380 nm) are alternated in synchrony with the acquisition at video rate of images captured with an intensified CCD camera. Images are digitized, recursively filtered, divided, and displayed after calibration of the 'ratio' image into a numerical [Ca2+]i scale. The image processor IMAGINE (Synoptics Ltd., UK) permits these operations at video rate. This produces 'on-line' [Ca2+]i images in real time which are stored on video tapes for subsequent analysis. The present communication summarizes the rationale for the selection of our current technologies. A comparison with alternative solutions should highlight the particular advantages and drawbacks of our approach. The present text thus should serve as a help for investigators who try to assemble image processing tools for work in the receptor and cellular signalling field.
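
    The calibrated "ratio" image described above is conventionally converted to [Ca2+]i with the Grynkiewicz ratio equation. A sketch of that step under assumed calibration constants (Kd, Rmin, Rmax, and beta below are illustrative placeholders, not values from this paper):

```python
import numpy as np

def calcium_from_ratio(f340, f380, kd=224.0, rmin=0.5, rmax=6.0, beta=8.0):
    """Convert paired fura-2 images (340 nm / 380 nm excitation) into a
    [Ca2+]i image via the Grynkiewicz ratio equation:
        [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R),  with R = F340 / F380.
    The calibration constants here are illustrative placeholders."""
    r = np.asarray(f340, float) / np.asarray(f380, float)
    return kd * beta * (r - rmin) / (rmax - r)

# Two tiny synthetic images captured at the alternating excitation wavelengths.
f340 = np.array([[100.0, 300.0]])
f380 = np.array([[200.0, 100.0]])
ca_nM = calcium_from_ratio(f340, f380)   # ratio 0.5 -> low Ca; ratio 3.0 -> high Ca
```

    In the video-rate pipeline this division and lookup is exactly the per-pixel operation the IMAGINE processor performs after recursive filtering.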

  11. Furnace for rapid thermal processing

    NARCIS (Netherlands)

    Roozeboom, F.; Duine, P.A.; Sluis, P. van der

    2001-01-01

    A Method (1) for Rapid Thermal Processing of a wafer (7), wherein the wafer (7) is heated by lamps (9), and the heat radiation is reflected by an optical switching device (15,17) which is in the reflecting state during the heating stage. During the cooling stage of the wafer (7), the heat is

  12. AVSynDEx: A Rapid Prototyping Process Dedicated to the Implementation of Digital Image Processing Applications on Multi-DSP and FPGA Architectures

    Directory of Open Access Journals (Sweden)

    Virginie Fresse

    2002-09-01

    Full Text Available We present AVSynDEx (a concatenation of AVS + SynDEx), a rapid prototyping process aimed at the implementation of digital signal processing applications on mixed architectures (multi-DSP + FPGA). This process is based on the use of widely available and efficient CAD tools established along the design process, so that most of the implementation tasks become automatic. These tools and architectures are judiciously selected and integrated during the implementation process to help a signal processing specialist without relevant hardware experience. We have automated the translation between the different levels of the process to speed it up and make it more reliable. One main advantage is that only a signal processing designer is needed; all the other specialized manual tasks are transparent in this prototyping methodology, thereby reducing the implementation time.

  13. Image Processing Research

    Science.gov (United States)

    1975-09-30

    "Picture Processing," USCEE Report No. 530, 1974, pp. 11-19. 4.7 Spectral Sensitivity Estimation of a Color Image Scanner, Clanton E. Mancill and William ... Projects: the improvement of image fidelity and presentation format; (3) Image Data Extraction Projects: the recognition of objects within pictures; ... representation; (5) Image Processing Systems Projects: the development of image processing hardware and software support systems. Key words: image processing

  14. A Real Time Quality Monitoring System for the Lighting Industry: A Practical and Rapid Approach Using Computer Vision and Image Processing (CVIP) Tools

    Directory of Open Access Journals (Sweden)

    C.K. Ng

    2011-11-01

    Full Text Available In China, the manufacturing of lighting products is very labour intensive. The approach used to check quality and control production relies on operators who test using various types of fixtures. In order to increase the competitiveness of the manufacturer and the efficiency of production, the authors propose an integrated system. This system has two major elements: a computer vision system (CVS) and a real-time monitoring system (RTMS). This model not only focuses on the rapid and practical application of modern technology to a traditional industry, but also represents a process innovation in the lighting industry. This paper describes the design and development of a prototype lighting inspection system based on a practical and fast approach using computer vision and image processing (CVIP) tools. LabVIEW with IMAQ Vision Builder is the chosen tool for building the CVS. Experimental results show that this system produces a lower error rate than humans produce in the quality checking process. The whole integrated manufacturing strategy, aimed at achieving better performance, is most suitable for China and other labour-intensive environments such as India.

  15. Ceramic microfabrication by rapid prototyping process chains

    Indian Academy of Sciences (India)

    To avoid high tooling costs in product development, a rapid prototyping process chain has been established that enables rapid manufacturing of ceramic microcomponents from functional models to small lot series within a short time. This process chain combines the fast and inexpensive supply of master models by rapid ...

  16. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  17. Rapid gas hydrate formation process

    Science.gov (United States)

    Brown, Thomas D.; Taylor, Charles E.; Unione, Alfred J.

    2013-01-15

    The disclosure provides a method and apparatus for forming gas hydrates from a two-phase mixture of water and a hydrate-forming gas. The two-phase mixture is created in a mixing zone which may be wholly included within the body of a spray nozzle. The two-phase mixture is subsequently sprayed into a reaction zone, where the reaction zone is under pressure and temperature conditions suitable for formation of the gas hydrate. The reaction zone pressure is less than the mixing zone pressure, so that expansion of the hydrate-forming gas in the mixture provides a degree of cooling by the Joule-Thomson effect and provides more intimate mixing between the water and the hydrate-forming gas. The result of the process is the continuous formation of gas hydrates with a greatly reduced induction time. An apparatus for conduct of the method is further provided.

  18. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained, they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e...

  19. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and the processing of this image data has become an important option for future health care. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances made in academia. Color figures are used extensively to illustrate the methods and help the reader understand the complex topics.

  20. The image processing handbook

    CERN Document Server

    Russ, John C

    2006-01-01

    Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? A new chapter on the human visual process that explains which visual cues elicit a response from the viewer...

  1. Image processing occupancy sensor

    Science.gov (United States)

    Brackney, Larry J.

    2016-09-27

    A system and method for detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants and optimize energy usage, among other advantages.
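
    The occupant-counting step can be illustrated with a connected-component count over a binary occupancy mask. This is a generic sketch, not the patented method; the mask would normally come from background subtraction, which is omitted here:

```python
import numpy as np

def count_blobs(mask):
    """Count connected foreground regions (4-connectivity) in a binary
    occupancy mask -- a stand-in for counting occupants in a frame."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    blobs = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        blobs += 1
        stack = [(i, j)]          # flood-fill this region
        seen[i, j] = True
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
    return blobs

occupancy = np.array([[1, 1, 0, 0],
                      [0, 0, 0, 1],
                      [0, 1, 0, 1]])
n = count_blobs(occupancy)   # three separate foreground regions
```

    The blob centroids would then drive the per-zone lighting and ventilation control described in the abstract.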

  2. Onboard image processing

    Science.gov (United States)

    Martin, D. R.; Samulon, A. S.

    1979-01-01

    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.
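
    The correlation-based registration idea above can be sketched with phase correlation, which recovers the translation between two frames from the peak of the normalized cross-power spectrum. This is a generic sketch, not the onboard system's actual algorithm; it resolves integer-pixel shifts, and a parabolic fit around the peak would refine the estimate toward the subpixel accuracy the abstract mentions:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (row, col) translation of img relative to ref by phase
    correlation: the inverse FFT of the normalized cross-power spectrum
    peaks at the shift."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrap-around) shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))   # scene shifted by (3, -5)
shift = estimate_shift(ref, img)
```

    In the onboard setting the "ref" tile is the small reference subimage of known location, and the recovered shift feeds the distortion estimate.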

  3. Robots and image processing

    Science.gov (United States)

    Peterson, C. E.

    1982-03-01

    Developments in integrated circuit manufacture are discussed, with attention given to the current expectations of industrial automation. It is shown that the growing emphasis on image processing is a natural consequence of production requirements, which have generated a small but significant range of vision applications. The state of the art in image processing is discussed, with the main research areas delineated. The main areas of application will be less in welding and diecasting than in assembly and machine tool loading, with vision becoming an ever more important facet of the installation. The two main approaches to processing images in a computer (depending on the aims of the project) are discussed: the first involves producing a system that performs a specific task; the second is to achieve an understanding of some basic issues in object recognition.

  4. Geology And Image Processing

    Science.gov (United States)

    Daily, Mike

    1982-07-01

    The design of digital image processing systems for geological applications will be driven by the nature and complexity of the intended use, by the types and quantities of data, and by systems considerations. Image processing will be integrated with geographic information systems (GIS) and data base management systems (DBMS). Dense multiband data sets from radar and multispectral scanners (MSS) will tax memory, bus, and processor architectures. Array processors and dedicated-function chips (VLSI/VHSIC) will allow the routine use of FFT and classification algorithms. As this geoprocessing capability becomes available to a larger segment of the geological community, user friendliness and smooth interaction will become a major concern.

  5. Ceramic microfabrication by rapid prototyping process chains

    Indian Academy of Sciences (India)

    ... a fast and inexpensive supply of polymer master models and a ceramic shaping method that enables the replication of the RP model into multiple ceramic materials within a short time (Knitter et al 1999). 2. Rapid prototyping process chains. The manufacturing of the ceramic microparts presented here set out with the 3D-CAD ...

  6. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.
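
    Of the techniques the book covers, linear spectral unmixing is the most compact to illustrate: a mixed pixel is modelled as a linear combination of endmember spectra and the abundances are recovered by least squares. This is a generic sketch, not the SVM-based method the book develops, and the endmember spectra below are invented:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing: solve pixel ~= endmembers @ abundances in
    the least-squares sense. endmembers has shape (n_bands, n_endmembers)."""
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abundances

# Two invented endmember spectra sampled over four spectral bands.
E = np.array([[0.1, 0.9],
              [0.2, 0.8],
              [0.7, 0.3],
              [0.9, 0.1]])
pixel = 0.25 * E[:, 0] + 0.75 * E[:, 1]   # a 25% / 75% mixed pixel
abund = unmix(pixel, E)
```

    Real unmixing adds non-negativity and sum-to-one constraints on the abundances; the unconstrained solve above is the core idea.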

  7. Rapid thermal processing science and technology

    CERN Document Server

    Fair, Richard B

    1993-01-01

    This is the first definitive book on rapid thermal processing (RTP), an essential manufacturing technology for single-wafer processing in highly controlled environments. Written and edited by nine experts in the field, this book covers a range of topics for academics and engineers alike, moving from basic theory to advanced technology for wafer manufacturing. The book also provides new information on the suitability of RTP for thin film deposition, junction formation, silicides, epitaxy, and in situ processing. Complete discussions on equipment designs and comparisons between RTP and other ...

  8. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.
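
    A classic instance of the enhancement techniques mentioned above is histogram equalization, which remaps grey levels through the image's cumulative distribution so the output histogram is approximately flat. A minimal sketch for 8-bit images:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization for an 8-bit image: build the grey-level
    histogram, form the cumulative distribution, and use it as a lookup
    table that spreads the occupied levels across the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

# A tiny low-contrast image: most pixels crowd the dark end of the range.
img = np.array([[0, 0, 1],
                [1, 2, 255]], dtype=np.uint8)
out = equalize(img)
```

    After equalization the dark levels 0, 1, 2 are pushed apart across the mid-range while the single bright pixel stays at 255.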

  9. Application of two segmentation protocols during the processing of virtual images in rapid prototyping: ex vivo study with human dry mandibles.

    Science.gov (United States)

    Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida

    2013-12-01

    The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at a significance level of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). During the design of a virtual 3D reconstruction, both the "outline only" and "all-boundary lines" segmentation protocols can be used. Virtual processing of CT images is the most complex stage in the manufacture of the biomodel. Establishing a better protocol during this phase allows the construction of a biomodel with characteristics closer to the original anatomical structures. This is essential to ensure correct preoperative planning and suitable treatment.

  10. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization; Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization. Psychophysical Vision Properties: Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model. Photometry and Colorimetry: Photometry; Color Matching; Colorimetry Concepts; Color Spaces. DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction; Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems. Image Quantization: Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization. DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization; Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  11. A Review on Image Processing

    OpenAIRE

    Amandeep Kour; Vimal Kishore Yadav; Vikas Maheshwari; Deepak Prashar

    2013-01-01

    Image processing includes changing the nature of an image in order to improve its pictorial information for human interpretation, or for autonomous machine perception. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Interest in digital image processing methods stems from...

  12. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
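
    As an illustration of the kind of routine the library provides, here is a pure-NumPy sketch of Otsu thresholding, which scikit-image ships as skimage.filters.threshold_otsu (the implementation below is a simplified re-derivation, not the library's code):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: choose the grey-level threshold that maximizes the
    between-class variance of the (background, foreground) split."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 (background) probability
    mu = np.cumsum(p * np.arange(levels))   # cumulative mean grey level
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(sigma_b))

# A bimodal toy image: dark background (~10) and a bright object (~200).
img = np.array([[10, 12, 11, 200],
                [10, 11, 201, 199]], dtype=np.uint8)
t = otsu_threshold(img)   # lands between the two modes
```

    In practice one would simply call skimage.filters.threshold_otsu(img); the sketch shows the statistic it optimizes.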

  13. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
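The kind of workflow the library supports can be sketched in a few lines; the pipeline below (smoothing followed by Otsu thresholding on a bundled test image) is purely illustrative and not taken from the paper.

```python
# Minimal scikit-image sketch: load a sample image, smooth it,
# and segment it with Otsu's threshold.
from skimage import data, filters

image = data.camera()                        # bundled 8-bit grayscale test image
smoothed = filters.gaussian(image, sigma=2)  # returns a float image in [0, 1]
threshold = filters.threshold_otsu(smoothed)
binary = smoothed > threshold                # boolean segmentation mask

print(binary.shape, binary.dtype)
```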

  14. Image Processing Diagnostics: Emphysema

    Science.gov (United States)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure the amount of affected lung more accurately. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
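The skewness statistic mentioned in the abstract, the third standardized moment, can be computed directly; the Hounsfield-unit values below are synthetic stand-ins, not the authors' patient data.

```python
import numpy as np

# Hypothetical Hounsfield-unit samples from a segmented lung region;
# emphysematous tissue shifts this distribution toward more negative values.
rng = np.random.default_rng(0)
hu = rng.normal(loc=-860.0, scale=40.0, size=10_000)

# Sample skewness: the third standardized moment, a deviation-from-normal measure.
mu = hu.mean()
sigma = hu.std()
skewness = np.mean((hu - mu) ** 3) / sigma ** 3

print(round(skewness, 3))
```

For a roughly normal sample such as this one, the skewness is close to zero; a pronounced negative or positive value would flag a departure from the normal distribution curve.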

  15. Smart Image Enhancement Process

    Science.gov (United States)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
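The decision cascade described in this patent abstract can be illustrated as control flow; the scoring and enhancement functions below are stand-in stubs (the actual measures are not specified here), so only the branching structure reflects the text.

```python
# Sketch of the turbid/non-turbid enhancement cascade; all helpers are
# hypothetical placeholders used only to make the control flow runnable.
def enhance(image):                    # placeholder enhancement step
    return [min(255, p + 40) for p in image]

def sharpen(image):                    # placeholder sharpening step
    return image

def contrast_lightness_score(image):   # stand-in merged score in [0, 1]
    return (max(image) - min(image)) / 255.0

def is_turbid(image):                  # stand-in turbidity test
    return contrast_lightness_score(image) < 0.2

def smart_enhance(image, good=0.5):
    if is_turbid(image):
        selected = enhance(image)                 # "first enhanced image"
    else:
        selected = image
        if contrast_lightness_score(selected) < good:
            selected = enhance(selected)          # "second enhanced image"
            if contrast_lightness_score(selected) < good:
                selected = enhance(selected)      # "third enhanced image"
    return sharpen(selected)                      # final sharpness stage

print(smart_enhance([10, 20, 30, 25]))
```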

  16. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
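Two of the tasks listed above, gray-level transformation and binarization, can be shown in a toy example; the pixel values and threshold are illustrative only.

```python
import numpy as np

# Gray-level transformation (linear contrast stretch) followed by
# fixed-threshold binarization on a tiny synthetic image.
image = np.array([[50, 80], [120, 200]], dtype=np.uint8)

lo, hi = image.min(), image.max()
stretched = ((image - lo) * 255.0 / (hi - lo)).astype(np.uint8)  # stretch to [0, 255]

binary = stretched > 127   # simple binarization
print(stretched)
print(binary)
```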

  17. Pediatric imaging. Rapid fire questions and answers

    Energy Technology Data Exchange (ETDEWEB)

    Quattromani, F.; Lampe, R. (eds.) [Texas Tech Univ. Health Sciences Center, School of Medicine, Lubbock, TX (United States); Handal, G.A. [Texas Tech Univ. Health Sciences Center, School of Medicine, El Paso, TX (United States)

    2008-07-01

    The book contains the following contributions: Airway, head, neck; allergy, immunology rheumatology; pediatric cardiac imaging; child abuse; chromosomal abnormalities; conscious sedation; contrast agents and radiation protection; pediatric gastrointestinal imaging; genetic disorders in infants and children; pediatric genitourinary imaging; pediatric hematology, oncology imaging; pediatric intenrventional radiology; metabolic and vitamin disorders; muscoskeletal disorders (osteoradiology); neonatology imaging; pediatric neuroimaging; imaging of the respiratory tract in infants and children; vascular anomalies.

  18. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach.This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  19. Rapid thermal processing and beyond applications in semiconductor processing

    CERN Document Server

    Lerch, W

    2008-01-01

    Heat-treatment and thermal annealing are very common processing steps which have been employed during semiconductor manufacturing right from the beginning of integrated circuit technology. In order to minimize undesired diffusion, and other thermal budget-dependent effects, the trend has been to reduce the annealing time sharply by switching from standard furnace batch-processing (involving several hours or even days), to rapid thermal processing involving soaking times of just a few seconds. This transition from thermal equilibrium, to highly non-equilibrium, processing was very challenging a

  20. Digital radiography image quality: image processing and display.

    Science.gov (United States)

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony

    2007-06-01

    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and to make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  1. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  2. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  3. Challenges in 3DTV image processing

    Science.gov (United States)

    Redert, André; Berretty, Robert-Paul; Varekamp, Chris; van Geest, Bart; Bruijns, Jan; Braspenning, Ralph; Wei, Qingqing

    2007-01-01

    Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specifically for 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.

  4. Eye Redness Image Processing Techniques

    Science.gov (United States)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf

    2017-09-01

    The use of photographs for the assessment of ocular conditions has been suggested to further standardize clinical procedures. The selection of the photographs to be used as scale reference images was subjective. Numerous methods have been proposed to assign eye redness scores computationally. Image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines image processing in general and the implementation of image segmentation for eye redness assessment.
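A stand-in for the kind of redness score such computational grading methods assign is the red fraction of each pixel's RGB triple, averaged over the region of interest; the pixel values below are hypothetical and the formula is illustrative, not the one used in the paper.

```python
import numpy as np

# Hypothetical RGB pixels from a segmented conjunctiva region.
pixels = np.array([
    [200, 120, 110],   # reddish pixel
    [180, 170, 165],   # near-neutral pixel
    [220, 100,  90],   # strongly red pixel
], dtype=float)

redness = pixels[:, 0] / pixels.sum(axis=1)   # R / (R + G + B) per pixel
score = redness.mean()                        # region-level redness score
print(round(score, 3))
```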

  5. Cooperative processes in image segmentation

    Science.gov (United States)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  6. A Rapid Process for Fabricating Gas Sensors

    Directory of Open Access Journals (Sweden)

    Chun-Ching Hsiao

    2014-07-01

    Full Text Available Zinc oxide (ZnO) is a low-toxicity and environmentally-friendly material applied in devices, sensors or actuators for "green" usage. A porous ZnO film deposited by a rapid process of aerosol deposition (AD) was employed as the gas-sensitive material in a CO gas sensor to reduce both manufacturing cost and time, and to further extend the AD application to large-scale production. The relative resistance change (ΔR/R) of the ZnO gas sensor was used for gas measurement. The fabricated ZnO gas sensors were measured with operating temperatures ranging from 110 °C to 180 °C, and CO concentrations ranging from 100 ppm to 1000 ppm. The sensitivity and the response time showed good performance at increasing operating temperatures and CO concentrations. AD was successfully applied to make ZnO gas sensors, with great potential for achieving high deposition rates at low deposition temperatures, large-scale production and low cost.
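The response metric ΔR/R is a one-line computation; the resistance values below are hypothetical, chosen only to illustrate the formula.

```python
# Relative resistance change used as the sensor response:
# ΔR/R = (R_air - R_gas) / R_air.
def relative_resistance_change(r_air, r_gas):
    return (r_air - r_gas) / r_air

# e.g. resistance drops from 1.0 MΩ in air to 0.8 MΩ under CO exposure
# (made-up values), giving a response of 0.2.
print(relative_resistance_change(1.0e6, 0.8e6))
```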

  7. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  8. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques (feature extraction, object recognition and industrial robotic guidance) is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  9. Commercial aspects of rapid thermal processing (RTP)

    Energy Technology Data Exchange (ETDEWEB)

    Graham, R.G.; Huffman, D.R. [Ensyn Technologies Inc., Greely, ON (Canada)

    1996-12-31

    In its broadest sense, Rapid Thermal Processing (RTP™) covers the conversion of all types of carbonaceous materials to liquid fuels, high quality fuel gases, and chemicals. Scientifically, it is based on the general premise that products which result from the extremely rapid application of heat to a given feedstock are inherently more valuable than those which are produced when heat is applied much more slowly over longer periods of processing time. Commercial RTP™ activities (including the actual implementation in the market as well as the short-term R and D initiatives) are much narrower in scope, and are focused on the production of high yields of light, non-tarry liquids (i.e. 'bio-crude') from biomass for fuel and chemical markets. Chemicals are of significant interest from an economical point of view since they typically have a higher value than fuel products. Liquid fuels are of interest for many reasons: (1) Liquid fuels do not have to be used immediately after production, as is the case with hot combustion gases or combustible gases produced via gasification. This allows the decoupling of fuel production from the end-use (i.e. the conversion of fuel to energy). (2) The higher energy density of liquid fuels vs. that of fuel gases and solid biomass results in a large reduction in the costs associated with storage and transportation. (3) The costs to retrofit an existing gas or oil fired combustion system are much lower than replacement with a solid fuel combustor. (4) In general, liquid fuel combustion is much more efficient, controllable, and cleaner than the combustion of solid fuels. (5) The production of liquid 'bio-crude' permits the removal of ash from the biomass prior to combustion or other end-use applications. (6) Gas or liquid fuel-fired diesel or turbine engines cannot operate commercially on solid fuels. Although wood represents the biomass which is of principal commercial interest (including a vast array of wood residues

  10. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Statistical Image Processing.

    Science.gov (United States)

    1982-11-16

    Spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  12. Building country image process

    Directory of Open Access Journals (Sweden)

    Zubović Jovan

    2005-01-01

    Full Text Available The same branding principles are used for countries as are used for products; only the methods differ. Countries compete among themselves in tourism, foreign investments and exports. A country's turnover is at the level that its reputation is. Countries that begin as unknown or with a bad image will face limits in operations or be marginalized; as a result they will be at the bottom of the international influence scale. On the other hand, countries with a good image, like Germany (despite two world wars), will have their products covered with a special "aura".

  13. Image Processing and Geographic Information

    Science.gov (United States)

    McLeod, Ronald G.; Daily, Julie; Kiss, Kenneth

    1985-12-01

    A Geographic Information System, which is a product of System Development Corporation's Image Processing System and a commercially available Data Base Management System, is described. The architecture of the system allows raster (image) data type, graphics data type, and tabular data type input and provides for the convenient analysis and display of spatial information. A variety of functions are supported through the Geographic Information System including ingestion of foreign data formats, image polygon encoding, image overlay, image tabulation, costatistical modelling of image and tabular information, and tabular to image conversion. The report generator in the DBMS is utilized to prepare quantitative tabular output extracted from spatially referenced images. An application of the Geographic Information System to a variety of data sources and types is highlighted. The application utilizes sensor image data, graphically encoded map information available from government sources, and statistical tables.

  14. SWNT Imaging Using Multispectral Image Processing

    Science.gov (United States)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
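Decomposing a raw Bayer mosaic into its three channels amounts to subsampling the sensor grid; the sketch below assumes a hypothetical RGGB layout on a toy array (the camcorder's actual filter pattern and the authors' OpenCV pipeline may differ).

```python
import numpy as np

# Split a raw Bayer mosaic into pseudo-color channels by subsampling,
# assuming an RGGB 2x2 tile layout (an illustrative assumption).
mosaic = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 raw frame

red = mosaic[0::2, 0::2]                                  # R sites
green = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0   # average the two G sites
blue = mosaic[1::2, 1::2]                                 # B sites

print(red.shape, green.shape, blue.shape)
```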

  15. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application
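One of the front-end steps named in the abstract, non-uniformity correction, is commonly realized as a per-pixel two-point (gain/offset) correction; the calibration values below are made up for illustration.

```python
import numpy as np

# Two-point non-uniformity correction (NUC): per-pixel gain and offset
# flatten the fixed-pattern response of an IR focal plane array.
raw = np.array([[100.0, 140.0], [90.0, 160.0]])    # raw detector counts
offset = np.array([[10.0, 40.0], [0.0, 55.0]])     # per-pixel dark offset
gain = np.array([[1.0, 1.0], [1.0, 1.05]])         # per-pixel gain

corrected = gain * (raw - offset)
print(corrected)
```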

  16. In-Situ Imaging of Particles during Rapid Thermite Deflagrations

    Science.gov (United States)

    Grapes, Michael; Reeves, Robert; Densmore, John; Fezzaa, Kamel; van Buuren, Tony; Willey, Trevor; Sullivan, Kyle

    2017-06-01

    The dynamic behavior of rapidly deflagrating thermites is a highly complex process involving rapid decomposition, melting, and outgassing of intermediate and/or product gases. Few experimental techniques are capable of probing these phenomena in situ due to the small length and time scales associated with the reaction. Here we use a recently developed extended burn tube test, where we initiate a small pile of thermite on the closed end of a clear acrylic tube. The length of the tube is sufficient to fully contain the reaction as it proceeds and flows entrained particles down the tube. This experiment was brought to the Advanced Photon Source, and the particle formation was X-ray imaged at various positions down the tube. Several formulations, as well as formulation parameters, were varied to investigate the size and morphology of the particles, as well as to look for dynamic behavior attributed to the reaction. In all cases, we see evidence of particle coalescence and condensed-phase interfacial reactions. The results improve our understanding of the progression of reactants to products in these systems. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-691140.

  17. Rapid seismic reflection imaging in an urban environment

    Science.gov (United States)

    Liberty, L. M.

    2011-12-01

    Subsurface characterization in urban areas is important for city planners, municipalities, and engineers to estimate groundwater resources, track contaminants, assess earthquake or landslide hazards, and many other similar objectives. Improving geophysical imaging methods and results, while minimizing costs, provides greater opportunities for city/project planners and geophysicists alike to take advantage of the improved characterization afforded by the particular method. Seismic reflection results can provide hydrogeologic constraints for groundwater models, provide slip rate estimates for active faults, or simply map stratigraphy to provide target depth estimates. While many traditional urban seismic transects have included the use of vibroseis sources to improve reflection signals and attenuate cultural noise, low cost and high quality near-surface seismic reflection data can be obtained within an urban environment using impulsive sources at a variety of scales and at production rates that can significantly exceed those of swept sources. Sledgehammers and hydraulically powered accelerated weight drops allow rapid acquisition rates through dense urban corridors where the objective is to image targets in the upper one km depth range. In addition to permit and land access issues, culturally noisy urban environments can provide additional challenges to producing high quality seismic reflection results. Acquisition methods designed to address both coherent and random noise include recording redundant, unstacked, unfiltered field records. Processing steps that improve data quality in this setting include diversity stacking to attenuate large-amplitude coherent (non-repeatable) vehicle noise and subtraction of power line signals via match filters to retain reflection signals near alternating current frequencies.
These acquisition and processing approaches allow for rapid and low cost data acquisition at the expense of moderately increased computing time and disk space. I
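The diversity stacking mentioned above weights each redundant field record inversely to its power before summation, so records dominated by large-amplitude traffic noise contribute less; the sketch below uses synthetic traces and a simple whole-record power weight rather than any particular production implementation.

```python
import numpy as np

# Diversity stack of two redundant records of the same shot: weight each
# record by the inverse of its mean power, then form a normalized sum.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 6.28, 200))          # idealized reflection signal
records = np.stack([
    signal + 0.1 * rng.standard_normal(200),          # quiet record
    signal + 5.0 * rng.standard_normal(200),          # record hit by vehicle noise
])

weights = 1.0 / np.mean(records ** 2, axis=1)         # inverse mean power
stack = (weights[:, None] * records).sum(axis=0) / weights.sum()

# The noisy record is strongly down-weighted relative to the quiet one.
print(weights[0] / weights[1])
```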

  18. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  19. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  20. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data on the example of using PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or more often on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of such a situation the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, contrary to standard commercial software dividing the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of the fully automatic generation of an orthomosaic for both images obtained by a metric Vexell camera and a block of images acquired by a non-metric UAV system.

  1. Image processing for optical mapping.

    Science.gov (United States)

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  2. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSINGSignals and Biomedical Signal ProcessingIntroduction and OverviewWhat is a ""Signal""?Analog, Discrete, and Digital SignalsProcessing and Transformation of SignalsSignal Processing for Feature ExtractionSome Characteristics of Digital ImagesSummaryProblemsFourier TransformIntroduction and OverviewOne-Dimensional Continuous Fourier TransformSampling and NYQUIST RateOne-Dimensional Discrete Fourier TransformTwo-Dimensional Discrete Fourier TransformFilter DesignSummaryProblemsImage Filtering, Enhancement, and RestorationIntroduction and Overview

  3. Mapping soil heterogeneity using RapidEye satellite images

    Science.gov (United States)

    Piccard, Isabelle; Eerens, Herman; Dong, Qinghan; Gobin, Anne; Goffart, Jean-Pierre; Curnel, Yannick; Planchon, Viviane

    2016-04-01

    In the framework of BELCAM, a project funded by the Belgian Science Policy Office (BELSPO), researchers from UCL, ULg, CRA-W and VITO aim to set up a collaborative system to develop and deliver relevant information for agricultural monitoring in Belgium. The main objective is to develop remote sensing methods and processing chains able to ingest crowdsourcing data, provided by farmers or associated partners, and to deliver in return relevant and up-to-date information for crop monitoring at the field and district level based on Sentinel-1 and -2 satellite imagery. One of the developments within BELCAM concerns an automatic procedure to detect soil heterogeneity within a parcel using optical high-resolution images. Such heterogeneity maps can be used to adjust farming practices to the detected heterogeneity, which may for instance be caused by differences in the mineral composition of the soil, organic matter content, soil moisture or soil texture. Local differences in plant growth may be indicative of differences in soil characteristics; as such, vegetation indices derived from remote sensing may be used to reveal soil heterogeneity. VITO started to delineate homogeneous zones within parcels by analyzing a series of RapidEye images acquired in 2015 (as a precursor for Sentinel-2). Both unsupervised classification (ISODATA, K-means) and segmentation techniques were tested. Heterogeneity maps were generated from images acquired at different moments during the season (13 May, 30 June, 17 July, 31 August, 11 September and 1 November 2015). Tests were performed using blue, green, red, red-edge and NIR reflectances separately and using derived indices such as NDVI, fAPAR, CIrededge and NDRE2. The results for selected winter wheat, maize and potato fields were evaluated together with experts from the collaborating agricultural research centers. For a few fields, UAV images and/or yield measurements were available for comparison.
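The unsupervised zoning step described above (ISODATA/K-means on per-pixel values) can be illustrated with a minimal 1-D K-means over NDVI values. The data, zone count, and parameters below are synthetic placeholders, not the actual BELCAM processing chain:

```python
import numpy as np

def kmeans_zones(ndvi, k=2, iters=20, seed=0):
    """Cluster per-pixel NDVI values into k homogeneity zones (1-D K-means)."""
    rng = np.random.default_rng(seed)
    pixels = ndvi.ravel()
    centers = rng.choice(pixels, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Re-estimate each center as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(ndvi.shape), centers

# Synthetic parcel: two zones of different vigor plus sensor noise
ndvi = np.concatenate([np.full((50, 100), 0.3), np.full((50, 100), 0.7)])
ndvi += np.random.default_rng(1).normal(0, 0.02, ndvi.shape)
zones, centers = kmeans_zones(ndvi, k=2)
```

In practice the same clustering would be applied per parcel to one index image (or a multi-date stack), with k chosen from the expected number of management zones.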

  4. GStreamer as a framework for image processing applications in image fusion

    Science.gov (United States)

    Burks, Stephen D.; Doe, Joshua M.

    2011-05-01

    Multi-source-band image fusion can be a multi-step process consisting of several intermediate image processing steps. Typically, these steps must be placed in a particular arrangement in order to produce the desired output image. GStreamer is an open-source, cross-platform multimedia framework; using this framework, engineers at NVESD have produced a software package that allows for real-time manipulation of processing steps for rapid prototyping in image fusion.

  5. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    Science.gov (United States)

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  6. Fuzzy image processing in sun sensor

    Science.gov (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.

  7. Image Processing In Laser-Beam-Steering Subsystem

    Science.gov (United States)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  8. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
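The discrete distance transforms mentioned above, implemented as sequential min-sum difference equations, can be sketched with the classic two-pass city-block distance transform. This is a minimal illustration of the idea, not the paper's exact formulation:

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-pass sequential min-sum distance transform (city-block metric).
    Each pass implements a 2-D min-sum difference equation over a causal
    (then anti-causal) neighborhood."""
    big = binary.size  # larger than any possible distance; acts as +infinity
    d = np.where(binary, 0, big).astype(np.int64)
    rows, cols = d.shape
    # Forward pass: propagate distances from the top-left
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Backward pass: propagate distances from the bottom-right
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

img = np.zeros((5, 5), dtype=bool)
img[2, 2] = True  # single foreground point
dist = cityblock_distance_transform(img)
```

The two sequential sweeps are exactly the kind of min-sum recursion the paper relates to numerical solutions of the eikonal equation.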

  9. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  10. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and second, to use this knowledge to develop image processing...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently, BK Medical employs a simple conventional delay-and-sum beamformer to generate......-time data acquisition system. The system was implemented using the commercially available 2202 ProFocus BK Medical ultrasound scanner equipped with a research interface and a standard PC. The main feature of the system is the possibility of acquiring several seconds of interleaved data, switching between

  11. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques and the associated software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
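The speed advantage of recursive spatial-domain filtering can be illustrated with a 1-D running-mean (box) filter: a running-sum recursion costs O(1) per sample regardless of window size, whereas direct convolution costs O(w). This is a generic sketch of the principle, not the matched filters used in the paper:

```python
import numpy as np

def box_filter_recursive(x, w):
    """Running mean with window w via a recursive running sum:
    O(1) work per sample, independent of window size (unlike direct
    convolution, which costs O(w) per sample)."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    acc = 0.0
    for n in range(len(x)):
        acc += x[n]
        if n >= w:
            acc -= x[n - w]          # drop the sample leaving the window
        y[n] = acc / min(n + 1, w)   # shorter window during start-up
    return y

signal = np.array([1.0, 1.0, 4.0, 4.0, 1.0, 1.0])
smoothed = box_filter_recursive(signal, w=2)
```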

  12. Rapid MR imaging of cryoprotectant permeation in an engineered dermal replacement.

    Science.gov (United States)

    Bidault, N P; Hammer, B E; Hubel, A

    2000-02-01

    Magnetic resonance (MR) imaging is a powerful technique for monitoring the permeation of cryoprotective agents (CPAs) inside tissues. However, the techniques published until now suffer from inherently long imaging times, limiting the application of these techniques to slow diffusion processes and large CPA concentrations. In this study, we present a rapid MR imaging technique based on a CHESS-FLASH scheme combined with Keyhole image acquisition. This technique can image the fast permeation of Me(2)SO solutions into freeze-dried artificial dermal replacements for concentrations down to 10% v/v. Special attention is given to evaluating the technique for quantitative analysis. Copyright 2000 Academic Press.

  13. Image processing of galaxy photographs

    Science.gov (United States)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  14. Rapid neutron capture process in supernovae and chemical element formation

    NARCIS (Netherlands)

    Baruah, Rulee; Duorah, Kalpana; Duorah, H. L.

    2009-01-01

    The rapid neutron capture process (r-process) is one of the major nucleosynthesis processes responsible for the synthesis of heavy nuclei beyond iron. Isotopes beyond Fe are almost exclusively formed in neutron capture processes, and the heavier ones are produced by the r-process. Approximately half

  15. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph

    2004-01-01

    The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  16. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  17. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  18. Rapid Prototyping and the Human Factors Engineering Process

    Science.gov (United States)

    2016-08-29

    Rapid prototyping and the human factors engineering process. David Beevis* and Gaetan St Denis†. *Senior Human Factors Engineer, Defence and... Rapid prototyping or ’virtual prototyping’ of human-machine interfaces offers the possibility of putting the human operator ’in the loop’ without the effort and cost associated with conventional man-in-the-loop simulation. Advocates suggest that rapid prototyping is compatible with

  19. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  20. Rapid identification of salmonella serotypes with stereo and hyperspectral microscope imaging Methods

    Science.gov (United States)

    The hyperspectral microscope imaging (HMI) method can reduce detection time to within 8 hours, including the incubation process. The early and rapid detection achieved with this method, in conjunction with its high-throughput capabilities, makes HMI a prime candidate for implementation in the food industry. Th...

  1. Shuffled magnetization-prepared multicontrast rapid gradient-echo imaging.

    Science.gov (United States)

    Cao, Peng; Zhu, Xucheng; Tang, Shuyu; Leynes, Andrew; Jakary, Angela; Larson, Peder E Z

    2018-01-01

    To develop a novel acquisition and reconstruction method for magnetization-prepared 3-dimensional multicontrast rapid gradient-echo imaging, using Hankel matrix completion in combination with compressed sensing and parallel imaging. A random k-space shuffling strategy was implemented in simulation and in vivo human experiments at 7 T for 3-dimensional inversion recovery, T2/diffusion preparation, and magnetization transfer imaging. We combined compressed sensing, based on total variation and spatial-temporal low-rank regularizations, and parallel imaging with pixel-wise Hankel matrix completion, allowing the reconstruction of tens of multicontrast 3-dimensional images from 3- or 6-min scans. The simulation results showed that the proposed method can reconstruct signal-recovery curves in each voxel and was robust at typical in vivo signal-to-noise ratios with 16-fold acceleration. In vivo studies achieved 4- to 24-fold accelerations for inversion recovery, T2/diffusion preparation, and magnetization transfer imaging. Furthermore, the contrast was improved by resolving pixel-wise signal-recovery curves after magnetization preparation. The proposed method can improve acquisition efficiency for magnetization-prepared MRI, and tens of multicontrast 3-dimensional images could be recovered from a single scan. Furthermore, it was robust against noise, applicable for recovering multi-exponential signals, and did not require any prior knowledge of model parameters. Magn Reson Med 79:62-70, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
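The pixel-wise Hankel matrix completion used above rests on the fact that exponential signal-recovery curves yield (numerically) low-rank Hankel matrices, the structure the completion enforces. A minimal sketch of that property, with illustrative curve parameters rather than the paper's sequence settings:

```python
import numpy as np

def hankel(x, rows):
    """Build a Hankel matrix whose anti-diagonals hold successive samples of x."""
    n = len(x)
    return np.array([x[i:i + n - rows + 1] for i in range(rows)])

# Inversion-recovery-like curve: a + b*exp(-t/T1), uniformly sampled
t = np.linspace(0.0, 3.0, 64)
curve = 1.0 - 2.0 * np.exp(-t / 0.8)

H = hankel(curve, rows=8)
s = np.linalg.svd(H, compute_uv=False)
# A sum of two exponentials (the constant term counts as one) gives a
# Hankel matrix of rank 2, so singular values beyond s[1] are ~zero.
```

Completion methods recover missing (randomly shuffled) samples by finding the curve whose Hankel matrix has this low rank.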

  2. Biomedical signal and image processing.

    Science.gov (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them, and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain–computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring all require an intelligent fusion of modeling and signal processing competencies that are certainly peculiar to our discipline of BME.

  3. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, cotton products often contain many types of foreign fibers that affect their overall quality. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and their gray levels are inverted; the whole image is then divided into several blocks. Thereafter, image pre-decision judges which blocks contain the target foreign fiber. Blocks that may contain targets are segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented blocks are connected to obtain an intact and clear image of the foreign fiber target. Experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it reconnects target images fractured by blocking, yielding an intact and clear foreign fiber target image.
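The block pre-decision and Otsu segmentation steps can be sketched as follows; the block size, contrast threshold, and test image are illustrative assumptions, not values from the paper:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_blocks(gray, block=64, contrast_min=10):
    """Pre-decide per block: only blocks with enough contrast are thresholded."""
    out = np.zeros_like(gray, dtype=bool)
    for r in range(0, gray.shape[0], block):
        for c in range(0, gray.shape[1], block):
            b = gray[r:r + block, c:c + block]
            if b.max() - b.min() >= contrast_min:  # pre-decision step
                out[r:r + block, c:c + block] = b >= otsu_threshold(b)
    return out

# Synthetic frame: dark background with one bright fiber streak
gray = np.full((128, 128), 20, dtype=np.uint8)
gray[10:15, 5:60] = 200
mask = segment_blocks(gray)
```

Skipping low-contrast blocks is what makes the blocked pipeline fast: uniform background blocks never reach the (comparatively expensive) segmentation stage.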

  4. Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging.

    Science.gov (United States)

    Ghassemi, Pejhman; Wang, Jianting; Melchiorri, Anthony J; Ramella-Roman, Jessica C; Mathews, Scott A; Coburn, James C; Sorg, Brian S; Chen, Yu; Pfefer, T Joshua

    2015-01-01

    The emerging technique of rapid prototyping with three-dimensional (3-D) printers provides a simple yet revolutionary method for fabricating objects with arbitrary geometry. The use of 3-D printing for generating morphologically biomimetic tissue phantoms based on medical images represents a potentially major advance over existing phantom approaches. Toward the goal of image-defined phantoms, we converted a segmented fundus image of the human retina into a matrix format and edited it to achieve a geometry suitable for printing. Phantoms with vessel-simulating channels were then printed using a photoreactive resin providing biologically relevant turbidity, as determined by spectrophotometry. The morphology of printed vessels was validated by x-ray microcomputed tomography. Channels were filled with hemoglobin (Hb) solutions undergoing desaturation, and phantoms were imaged with a near-infrared hyperspectral reflectance imaging system. Additionally, a phantom was printed incorporating two disjoint vascular networks at different depths, each filled with Hb solutions at different saturation levels. Light propagation effects noted during these measurements—including the influence of vessel density and depth on Hb concentration and saturation estimates, and the effect of wavelength on vessel visualization depth—were evaluated. Overall, our findings indicated that 3-D-printed biomimetic phantoms hold significant potential as realistic and practical tools for elucidating light–tissue interactions and characterizing biophotonic system performance.

  5. Adaptive Robotic Welding Using A Rapid Image Pre-Processor

    Science.gov (United States)

    Dufour, M.; Begin, G.

    1984-02-01

    The rapid pre-processor initially developed by NRCC and Leigh Instruments Inc. as part of the visual aid system of the space shuttle arm 1 has been adapted to perform real-time seam tracking of multipass butt welds and other adaptive welding functions. The weld preparation profile is first enhanced by a projected laser target formed by a line and dots. A standard TV camera is used to observe the target image at an angle. Displacement and distortion of the target image on a monitor are simple functions of the preparation surface distance and shape, respectively. Using the video signal, the pre-processor computes in real time the area and first moments of the white-level figure contained within four independent rectangular windows in the field of view of the camera. The shape, size, and position of each window can be changed dynamically for each successive image at the standard 30 images/sec rate, in order to track singularities in the target image. Visual sensing and welding are done simultaneously. As an example, it is shown that thin sheet metal welding can be automated using a single window for seam tracking, gap width measurement and torch height estimation. Using a second window, measurement of sheet misalignment and its orientation in space was also achieved. The system can be used at welding speeds of up to 1 m/min. Simplicity, speed and effectiveness are the main advantages of this system.
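The per-window area and first-moment computation described above fits in a few lines; the frame and window layout below are illustrative, not the NRCC pre-processor's actual hardware logic:

```python
import numpy as np

def window_moments(frame, window):
    """Area (zeroth moment) and centroid (first moments / area) of white
    pixels inside a window = (row, col, height, width) of a binary frame."""
    r, c, h, w = window
    roi = frame[r:r + h, c:c + w]
    ys, xs = np.nonzero(roi)
    area = ys.size
    if area == 0:
        return 0, None
    centroid = (r + ys.mean(), c + xs.mean())  # (M10/M00, M01/M00)
    return area, centroid

frame = np.zeros((480, 640), dtype=bool)
frame[100:105, 200:260] = True              # thresholded laser-line segment
area, (cy, cx) = window_moments(frame, (90, 190, 30, 90))
```

Tracking then reduces to moving each window so its centroid stays centered on the laser-line feature from frame to frame.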

  6. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  7. Rapid Prototyping of wax foundry models in an incremental process

    Directory of Open Access Journals (Sweden)

    B. Kozik

    2011-04-01

    Full Text Available The paper presents an analysis of incremental methods of creating wax foundry models. There are two Rapid Prototyping methods for making wax models in an incremental process which are more and more often used in industrial practice and in scientific research. Applying Rapid Prototyping methods in the process of making casts allows for acceleration of work on preparing prototypes. It is especially important in the case of elements having complicated shapes. The time of making a wax model, depending on its size and the applied RP method, may vary from several to a few dozen hours.

  8. Novel Applications of Rapid Prototyping in Gamma-ray and X-ray Imaging

    Science.gov (United States)

    Miller, Brian W.; Moore, Jared W.; Gehm, Michael E.; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for the fabrication of cost-effective, custom components in gamma-ray and x-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components are presented, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum. PMID:22984341

  9. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

    Full Text Available In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
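One plausible reading of "compressing the feature points of each image into three principal component points" is a centroid-plus-principal-axes summary computed by PCA. The sketch below, paired with a fixed-length queue as described, is an assumption-laden illustration of that idea, not the authors' implementation:

```python
from collections import deque
import numpy as np

def principal_points(pts):
    """Summarize a 2-D feature-point set by three points: the centroid plus
    one point along each principal axis, offset by that axis's std-dev.
    (PCA via SVD of the centered point cloud; axis signs are arbitrary.)"""
    mean = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - mean, full_matrices=False)
    scale = s / np.sqrt(len(pts))          # std-dev along each principal axis
    return np.vstack([mean,
                      mean + scale[0] * vt[0],
                      mean + scale[1] * vt[1]])

queue = deque(maxlen=5)                    # fixed-length image queue
rng = np.random.default_rng(0)
for frame in range(8):                     # 8 incoming images; oldest drop off
    pts = rng.normal(size=(200, 2)) * np.array([10.0, 2.0]) + frame
    queue.append(principal_points(pts))
```

Comparing three summary points per image (instead of hundreds of raw features) is what makes the key-image selection step cheap.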

  11. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  12. Neural correlates of rapid spectrotemporal processing in musicians and nonmusicians.

    Science.gov (United States)

    Gaab, N; Tallal, P; Kim, H; Lakshminarayanan, K; Archie, J J; Glover, G H; Gabrieli, J D E

    2005-12-01

    Our results suggest that musical training alters the functional anatomy of rapid spectrotemporal processing, resulting in improved behavioral performance along with a more efficient functional network primarily involving traditional language regions. This finding may have important implications for improving language/reading skills, especially in children struggling with dyslexia.

  13. Scheduling algorithms for rapid imaging using agile Cubesat constellations

    Science.gov (United States)

    Nag, Sreeja; Li, Alan S.; Merrick, James H.

    2018-02-01

Distributed space missions, such as formation flight and constellations, are being recognized as important Earth observation solutions to increase measurement samples over space and time. Cubesats are increasing in size (27U, ∼40 kg in development) with increasing capabilities to host imager payloads. Given the precise attitude control systems emerging in the commercial market, Cubesats now have the ability to slew and capture images at short notice. We propose a modular framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile Cubesats in a constellation such that they maximize the number of observed images and observation time, within the constraints of Cubesat hardware specifications. The attitude control strategy combines bang-bang and PD control, with constraints such as power consumption, response time, and stability factored into the optimality computations, and a possible extension to PID control to account for disturbances. Schedule optimization is performed using dynamic programming with two levels of heuristics, verified and improved upon using mixed integer linear programming. The automated scheduler is expected to run on ground station resources, with the resultant schedules uplinked to the satellites for execution; however, it can be adapted for onboard scheduling, contingent on Cubesat hardware and software upgrades. The framework is generalizable over small steerable spacecraft, sensor specifications, imaging objectives and regions of interest, and is demonstrated using multiple 20 kg satellites in Low Earth Orbit for two case studies: rapid imaging of Landsat's land and coastal images and extended imaging of global, warm-water coral reefs. The proposed algorithm captures up to 161% more Landsat images than nadir-pointing sensors with the same field of view, on a 2-satellite constellation over a 12-h simulation. Integer programming was able to verify that
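
The dynamic-programming scheduling idea can be illustrated with a heavily simplified toy: pick the largest subset of time-stamped imaging opportunities such that consecutive captures are separated by at least a fixed slew time. This is only a stand-in for the paper's method (which models full attitude dynamics and uses heuristics plus mixed integer programming); the uniform `slew_time` is an illustrative assumption.

```python
from functools import lru_cache

def schedule_images(opportunities, slew_time):
    """Toy DP scheduler: maximize the number of captures when each
    pair of consecutive captures must be at least `slew_time` apart
    (a crude proxy for the attitude-manoeuvre duration)."""
    times = sorted(opportunities)
    n = len(times)

    @lru_cache(maxsize=None)
    def best(i, last_time):
        if i == n:
            return 0
        skip = best(i + 1, last_time)            # pass on opportunity i
        take = 0
        if last_time is None or times[i] - last_time >= slew_time:
            take = 1 + best(i + 1, times[i])     # capture opportunity i
        return max(skip, take)

    return best(0, None)
```

In the real problem the "gap" between two captures depends on the slew angle and controller response, which is why the paper couples the scheduler to an attitude-control model.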

  14. Easy Leaf Area: Automated Digital Image Analysis for Rapid and Accurate Measurement of Leaf Area

    Directory of Open Access Journals (Sweden)

    Hsien Ming Easlon

    2014-07-01

    Full Text Available Premise of the study: Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. Methods and Results: Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. Conclusions: Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
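
The color-ratio idea behind this kind of tool can be sketched as follows; this is a minimal illustration, not the published Easy Leaf Area code, and the threshold ratio and calibration area are illustrative values. A pixel is counted as leaf if green dominates the other channels, as calibration if red dominates, and the leaf pixel count is scaled by the known area of the red calibration square.

```python
import numpy as np

def leaf_area(rgb, calib_area_cm2=4.0, ratio=1.2):
    """Estimate leaf area from an RGB image containing a red calibration
    square of known physical area. Thresholds are illustrative."""
    img = np.asarray(rgb, dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    leaf = (g > ratio * r) & (g > ratio * b)     # green-dominant pixels
    calib = (r > ratio * g) & (r > ratio * b)    # red-dominant pixels
    n_calib = calib.sum()
    if n_calib == 0:
        raise ValueError("no calibration pixels found")
    # Each calibration pixel represents calib_area_cm2 / n_calib cm^2
    return leaf.sum() * calib_area_cm2 / n_calib
```

Because area is computed from the pixel-count ratio, no camera distance or ruler scale is needed, which is the point the abstract makes.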

  15. Large area super-resolution chemical imaging via rapid dithering of a nanoprobe

    Science.gov (United States)

    Languirand, Eric R.; Cullum, Brian M.

    2015-05-01

Super-resolution chemical imaging via Raman spectroscopy provides a significant ability to simultaneously or pseudo-simultaneously monitor numerous label-free analytes while elucidating their spatial distribution on the surface of the sample. However, spontaneous Raman is an inherently weak phenomenon, making trace detection, and thus super-resolution imaging, extremely difficult, if not impossible. To circumvent this and allow for trace detection of the few chemical species present in any sub-diffraction-limited resolution element of an image, we have developed a surface-enhanced Raman scattering (SERS) coherent fiber-optic imaging bundle probe consisting of 30,000 individual fiber elements. When the probes are tapered, etched and coated with metal, they provide circular Raman chemical images of a sample with a field of view of approximately 20 μm (i.e., diameter) via the array of 30,000 individual 50 nm fiber elements. An acousto-optic tunable filter is used to rapidly scan or select discrete frequencies for multi- or hyperspectral analysis. Although the 50 nm fiber element dimensions of this probe inherently provide spatial resolutions of approximately 100 nm, further increases in the spatial resolution can be achieved by using a rapid dithering process. In this process, additional images are obtained at one-half fiber diameter translations in the x- and y-planes. A piezo stage drives the movement, providing the accurate and reproducible shifts required for dithering. Optimal probability algorithms are then used to deconvolute the related images, producing a final image with a three-fold increase in spatial resolution. This paper describes super-resolution chemical imaging using these probes and the dithering method, as well as its potential applications in label-free imaging of lipid rafts and other applications within biology and forensics.
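
The geometric part of the dithering scheme can be illustrated with a simple interleave: four images acquired at half-element shifts are merged into one grid with twice the sampling density in each axis. This stands in for, and deliberately omits, the probability-based deconvolution the paper applies afterwards.

```python
import numpy as np

def interleave_dither(im00, im01, im10, im11):
    """Combine four equally sized low-resolution images acquired at
    shifts (0,0), (0,1/2), (1/2,0), (1/2,1/2) of the fiber-element
    pitch into one image sampled twice as densely in each axis."""
    h, w = np.asarray(im00).shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = im00   # unshifted samples
    out[0::2, 1::2] = im01   # half-pitch shift in x
    out[1::2, 0::2] = im10   # half-pitch shift in y
    out[1::2, 1::2] = im11   # diagonal half-pitch shift
    return out
```

Interleaving alone only increases the sampling density; the resolution gain the paper reports comes from deconvolving these mutually shifted, overlapping measurements.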

  16. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
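
One common form of such compensation, sketched here as an assumption rather than the exact algorithm of the note, is to replace each flagged defective pixel with the median of its non-defective neighbors, so hotspots are not misread as real bright features.

```python
import numpy as np

def suppress_hotspots(image, defect_mask):
    """Replace each pixel flagged in `defect_mask` with the median of
    its non-defective 8-neighbours (a simple illustrative scheme)."""
    img = np.asarray(image, dtype=float).copy()
    mask = np.asarray(defect_mask, dtype=bool)
    h, w = img.shape
    for y, x in zip(*np.nonzero(mask)):
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        neigh = img[ys, xs][~mask[ys, xs]]   # valid neighbours only
        if neigh.size:
            img[y, x] = np.median(neigh)
    return img
```

The defect mask would typically be built once per imager from dark-frame calibration data.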

  17. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  18. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C

    2007-01-01

    ADJUSTING PIXEL VALUES Optimizing Contrast Color Correction Correcting Nonuniform Illumination Geometric Transformations Image Arithmetic NEIGHBORHOOD OPERATIONS Convolution Other Neighborhood Operations Statistical Operations IMAGE PROCESSING IN THE FOURIER DOMAIN The Fourier Transform Removing Periodic Noise Convolution and Correlation Deconvolution Other Transform Domains Compression BINARY IMAGES Thresholding Morphological Processing Other Morphological Operations Boolean Operations MEASUREMENTS Global Measurements Feature Measurements Classification APPENDIX: SOFTWARE REFERENCES AND LITERATURE INDEX.

  19. Applications Of Image Processing In Criminalistics

    Science.gov (United States)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  20. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  1. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

A detailed image-processing analysis of laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.
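
The heat-equation smoothing step mentioned above can be sketched with explicit linear diffusion: repeatedly add a small multiple of the discrete Laplacian. This is a minimal linear sketch; the paper may well use a nonlinear, edge-preserving PDE variant.

```python
import numpy as np

def heat_denoise(image, steps=10, dt=0.2):
    """Linear diffusion (heat-equation) denoising with periodic
    boundaries. dt <= 0.25 keeps the explicit scheme stable in 2-D."""
    u = np.asarray(image, dtype=float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap    # u_{t+1} = u_t + dt * Laplacian(u_t)
    return u
```

Each iteration blurs high-frequency noise while (with periodic boundaries) exactly preserving the image mean.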

  2. Combining image-processing and image compression schemes

    Science.gov (United States)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.
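
The pyramid coding idea can be sketched as a Laplacian pyramid: each level stores the detail lost by downsampling, so a coarse image can be transmitted first and details added progressively. This is a generic textbook construction (block-average downsampling here), not the specific coder studied in the paper.

```python
import numpy as np

def downsample(img):
    # 2x2 block average; requires even dimensions
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour 2x expansion
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels=3):
    """Decompose into `levels` detail images plus one coarse residual."""
    pyr, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))   # detail lost at this level
        cur = small
    pyr.append(cur)                         # coarsest approximation
    return pyr

def reconstruct(pyr):
    """Invert the decomposition exactly by re-adding the details."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur) + detail
    return cur
```

Because each detail image is exactly the reconstruction error of its level, the decomposition is perfectly invertible, and an enhancement operator can be applied per level before transmission.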

  3. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  4. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960s with a limited number of researchers analysing multispectral scanner data...

  5. Programmable remapper for image processing

    Science.gov (United States)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
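
The look-up-table remapping described above can be sketched as a simple gather operation: output pixel (i, j) is taken from the stored input coordinates. This nearest-neighbour sketch covers only the table-driven core; the patent's many-to-one collective and one-to-many interpolative paths are omitted, and `flip_lr_table` is just one illustrative operator-selectable transform.

```python
import numpy as np

def remap(image, map_y, map_x):
    """Apply a precomputed coordinate look-up table:
    out[i, j] = image[map_y[i, j], map_x[i, j]]."""
    return image[map_y, map_x]

def flip_lr_table(h, w):
    """Example stored transform: horizontal mirror."""
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return yy, w - 1 - xx
```

Because the transform lives entirely in the tables, switching transforms at video rate is just switching which tables are read.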

  6. Digital image capture and rapid prototyping of the maxillofacial defect.

    Science.gov (United States)

    Sabol, Jennifer V; Grant, Gerald T; Liacouras, Peter; Rouse, Stephen

    2011-06-01

    In order to restore an extraoral maxillofacial defect, a moulage impression is commonly made with traditional impression materials. This technique has some disadvantages, including distortion of the site due to the weight of the impression material, changes in tissue location with modifications of the patient position, and the length of time and discomfort for the patient due to the impression procedure and materials used. The use of the commercially available 3dMDface™ System creates 3D images of soft tissues to form an anatomically accurate 3D surface image. Rapid prototyping converts the virtual designs from the 3dMDface™ System into a physical model by converting the data to a ZPrint (ZPR) CAD format file and a stereolithography (STL) file. The data, in conjunction with a Zprinter(®) 450 or a Stereolithography Apparatus (SLA), can be used to fabricate a model for prosthesis fabrication, without the disadvantages of the standard moulage technique. This article reviews this technique and how it can be applied to maxillofacial prosthetics. © 2011 by The American College of Prosthodontists.

  7. Amplitude image processing by diffractive optics.

    Science.gov (United States)

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to the standard digital image processing, which operates over the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low pass or high pass filtering, is carried out using diffractive optics elements (DOE) since it allows to operate over the field complex amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we accomplish an analysis of amplitude image processing performances. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than the standard digital image processing.
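
The distinction the paper draws can be demonstrated numerically: a DOE filters the complex field before detection, whereas digital processing can only filter the already-detected intensity. The sketch below (our own illustration, not the authors' design procedure) applies a Fourier-domain Laplacian to a pure-phase field; the detected intensity is uniform, so digital filtering sees nothing, while amplitude filtering does not.

```python
import numpy as np

def laplacian_filter_fft(arr):
    """Laplacian as multiplication by -(kx^2 + ky^2) in the Fourier
    domain; works on real or complex-valued arrays."""
    ky = np.fft.fftfreq(arr.shape[0]) * 2 * np.pi
    kx = np.fft.fftfreq(arr.shape[1]) * 2 * np.pi
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    return np.fft.ifft2(-k2 * np.fft.fft2(arr))

# A pure-phase field: unit intensity everywhere, structure only in phase
field = np.exp(1j * np.random.RandomState(1).rand(32, 32))
amp_processed = np.abs(laplacian_filter_fft(field)) ** 2   # DOE-like: filter, then detect
int_processed = laplacian_filter_fft(np.abs(field) ** 2)   # digital: detect, then filter
```

Here `int_processed` is (numerically) zero because the detected intensity is constant, while `amp_processed` retains the phase structure the amplitude filter acted on.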

  8. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of all the experimental results and analysis carried out on medical images of diabetic related causes. The experimental investigations have been carried out on images starting from very basic image processing techniques such as image enhancement to sophisticated image segmentation methods. This book is intended to create an awareness on diabetes and its related causes and image processing methods used to detect and forecast in a very simple way. This book is useful to researchers, Engineers, Medical Doctors and Bioinformatics researchers.

  9. Rapid MR spectroscopic imaging of lactate using compressed sensing

    Science.gov (United States)

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.

    2015-03-01

Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. An MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of k-space 'on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to the 1X (fully-sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in Matlab™, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully-sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
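
A variable-density under-sampling mask of the kind mentioned in modification (2) can be sketched as follows; the polynomial fall-off and parameter names are illustrative assumptions, not the authors' sampling schedule. The centre of k-space, which holds most signal energy, is sampled densely, while outer encodes are sampled pseudo-randomly at a lower rate.

```python
import numpy as np

def variable_density_mask(shape, accel, power=3.0, seed=0):
    """Pseudo-random k-space mask whose sampling probability falls off
    polynomially from the centre, keeping roughly 1/accel of samples."""
    rng = np.random.RandomState(seed)
    ny, nx = shape
    y = np.abs(np.linspace(-1, 1, ny))[:, None]
    x = np.abs(np.linspace(-1, 1, nx))[None, :]
    r = np.sqrt(y ** 2 + x ** 2) / np.sqrt(2)      # normalised radius
    prob = (1 - r) ** power                        # dense centre, sparse edge
    prob *= (ny * nx / accel) / prob.sum()         # scale to target rate
    return rng.rand(ny, nx) < np.clip(prob, 0, 1)
```

Precomputing such a mask per acceleration factor lets the sequence decide "on the fly" which phase encodes to skip.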

  10. Imperceptibly rapid contrast modulations processed in cortex: Evidence from psychophysics.

    Science.gov (United States)

    Falconbridge, Michael; Ware, Adam; MacLeod, Donald I A

    2010-07-01

    Rapid fluctuations in contrast are common in our modern visual environment. They arise, for example, in a room lit by a fluorescent light, when viewing a CRT computer monitor and when watching a movie in a cinema. As we are unconscious of the rapid changes, it has been assumed that they do not affect the operation of our visual systems. By periodically reversing the contrast of a fixed pattern at a rapid rate we render the pattern itself, as well as the modulations, invisible to observers. We show that exposure to these rapidly contrast-modulated patterns alters the way subsequent stationary patterns are processed; patterns similar to the contrast-modulated pattern require more contrast to be detected than dissimilar patterns. We present evidence that the changes are cortically mediated. Taken together, our findings suggest that cortical stages of the visual system respond to the individual frames of a contrast-reversed sequence, even at rates as high as 160 frames per second.

  11. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

Full Text Available Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed by used clay-bonded sand, usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was therefore developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid removal, auto-titration, step-rotation and image acquisition components, and a processor. The image recognition method first decomposes the color image into three single-channel gray images, exploiting the difference in how the light-blue halo and dark-blue spot register in the red, green and blue channels; it then performs gray-value subtraction and gray-level transformation on these images; finally, it extracts the outer light-blue halo and the inner blue spot and calculates their area ratio. The titration is judged to have reached its end-point when the area ratio exceeds a preset value.
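
The end-point criterion can be sketched as follows. This is a simplified stand-in for the paper's channel-subtraction pipeline: here the halo and spot are separated by two thresholds on a single blue-channel gray level, and the threshold values and end-point setting are illustrative.

```python
import numpy as np

def titration_area_ratio(rgb, thresh_dark=0.35, thresh_light=0.7):
    """Area ratio of the outer light-blue halo to the inner dark-blue
    spot in a drop-test image (thresholds are illustrative)."""
    blue = np.asarray(rgb, dtype=float)[..., 2]
    dark_spot = blue < thresh_dark                         # concentrated dye
    halo = (blue >= thresh_dark) & (blue < thresh_light)   # diffuse rim
    if dark_spot.sum() == 0:
        raise ValueError("no inner spot detected")
    return halo.sum() / dark_spot.sum()

def at_endpoint(rgb, setting=2.0):
    """Declare the titration end-point once the ratio exceeds `setting`."""
    return titration_area_ratio(rgb) >= setting
```

As titration proceeds the halo grows relative to the spot, so the ratio rises monotonically toward the preset end-point value.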

  12. Digital signal processing techniques and applications in radar image processing

    CERN Document Server

    Wang, Bu-Chin

    2008-01-01

A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: Digital Signal Processing (DSP); antenna and radar operation; and algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and

  13. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    Science.gov (United States)

    Slavine, Nikolai V; McColl, Roderick W

Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, used to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to the final 3D images. To optimize this procedure, a semi-automated image processing approach within a multi-modality image handling environment was developed. To identify a bioluminescent source's location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully combined the depth-dependent light transport approach with semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize volumetric imaging and quantitative assessment while decreasing their processing time. The data obtained from light-phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
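
The abstract does not specify its "novel iterative deconvolution method", so the sketch below uses a Richardson-Lucy-style iteration purely to illustrate the general idea of iteratively refining a source estimate against blurred observations; it is not the authors' algorithm.

```python
import numpy as np

def rl_deconvolve(observed, psf_fft, iters=20):
    """Richardson-Lucy-style iterative deconvolution with periodic
    (FFT-based) convolution. `psf_fft` is the FFT of a normalised,
    origin-centred point-spread function."""
    est = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iters):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(est) * psf_fft))
        ratio = observed / np.maximum(blurred, 1e-12)
        # Correlate the ratio with the PSF (conjugate in Fourier space)
        corr = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(psf_fft)))
        est *= corr
    return est
```

Each iteration multiplies the estimate by how much the re-blurred estimate under- or over-predicts the observation, progressively concentrating flux back toward the true source.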

  14. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  15. Imaging inflammation in mouse colon using a rapid stage-scanning confocal fluorescence microscope.

    Science.gov (United States)

    Saldua, Meagan A; Olsovsky, Cory A; Callaway, Evelyn S; Chapkin, Robert S; Maitland, Kristen C

    2012-01-01

Large area confocal microscopy may provide fast, high-resolution image acquisition for evaluation of tissue in pre-clinical studies with reduced tissue processing in comparison to histology. We present a rapid beam- and stage-scanning confocal fluorescence microscope to image cellular and tissue features along the length of the entire excised mouse colon. The beam is scanned at 8,333 lines/sec by a polygon scanning mirror while the specimen is scanned in the orthogonal axis by a motorized translation stage with a maximum speed of 7 mm/sec. A single 1 × 60 mm² field-of-view image spanning the length of the mouse colon is acquired in 10 s. Z-projection images generated from axial image stacks allow high-resolution imaging of the surface of non-flat specimens. In contrast to the uniform size, shape, and distribution of colon crypts in confocal images of normal colon, confocal images of chronic bowel inflammation exhibit heterogeneous tissue structure with localized severe crypt distortion.
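
The z-projection step mentioned above has a simple core: collapse an axial stack into one 2-D image so that a curved tissue surface stays in focus everywhere. A maximum-intensity projection (one standard choice; the paper may use a different projection rule) looks like this:

```python
import numpy as np

def z_projection(stack, mode="max"):
    """Collapse an axial image stack with shape (z, y, x) into a single
    2-D image: each output pixel takes the maximum (or mean) over z."""
    stack = np.asarray(stack, dtype=float)
    return stack.max(axis=0) if mode == "max" else stack.mean(axis=0)
```

For fluorescence data the maximum projection keeps, at every (x, y) position, the z-plane where that part of the non-flat surface was in focus.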

  16. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    Science.gov (United States)

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluating new algorithms, since the proposed system ensures that rapidly constructed prototypes are actually fully functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  17. An overview of medical image processing methods

    African Journals Online (AJOL)

    USER

    2010-06-14

Jun 14, 2010 ... images through computer simulations has already increased the interest of many researchers. 3D image rendering usually refers to the analysis of the ...

  18. Featured Image: Making a Rapidly Rotating Black Hole

    Science.gov (United States)

    Kohler, Susanna

    2017-10-01

These stills from a simulation show the evolution (from left to right and top to bottom) of a high-mass X-ray binary over 1.1 days, starting after the star on the right fails to explode as a supernova and then collapses into a black hole. Many high-mass X-ray binaries, like the well-known Cygnus X-1 (the first source widely accepted to be a black hole), host rapidly spinning black holes. Despite our observations of these systems, however, we're still not sure how these objects end up with such high rotation speeds. Using simulations like that shown above, a team of scientists led by Aldo Batta (UC Santa Cruz) has demonstrated how a failed supernova explosion can result in such a rapidly spinning black hole. The authors' work shows that in a binary where one star attempts to explode as a supernova and fails (it doesn't succeed in unbinding the star), the large amount of fallback material can interact with the companion star and then accrete onto the black hole, spinning it up in the process. You can read more about the authors' simulations and conclusions in the paper below. Citation: Aldo Batta et al 2017 ApJL 846 L15. doi:10.3847/2041-8213/aa8506

  19. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  20. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  1. Highly Rapid Amplification-Free and Quantitative DNA Imaging Assay

    Science.gov (United States)

    Klamp, Tobias; Camps, Marta; Nieto, Benjamin; Guasch, Francesc; Ranasinghe, Rohan T.; Wiedemann, Jens; Petrášek, Zdeněk; Schwille, Petra; Klenerman, David; Sauer, Markus

    2013-01-01

    There is an urgent need for rapid and highly sensitive detection of pathogen-derived DNA in a point-of-care (POC) device for diagnostics in hospitals and clinics. This device needs to work in a ‘sample-in-result-out’ mode with minimum number of steps so that it can be completely integrated into a cheap and simple instrument. We have developed a method that directly detects unamplified DNA, and demonstrate its sensitivity on realistically sized 5 kbp target DNA fragments of Micrococcus luteus in small sample volumes of 20 μL. The assay consists of capturing and accumulating of target DNA on magnetic beads with specific capture oligonucleotides, hybridization of complementary fluorescently labeled detection oligonucleotides, and fluorescence imaging on a miniaturized wide-field fluorescence microscope. Our simple method delivers results in less than 20 minutes with a limit of detection (LOD) of ~5 pM and a linear detection range spanning three orders of magnitude. PMID:23677392

  2. Non-linear Post Processing Image Enhancement

    Science.gov (United States)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

A non-linear filter for image post-processing based on a feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, showing examples of the high-frequency recovery, and the statistical properties of the filter are given.

  3. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  4. Onboard Radar Processing Development for Rapid Response Applications

    Science.gov (United States)

    Lou, Yunling; Chien, Steve; Clark, Duane; Doubleday, Josh; Muellerschoen, Ron; Wang, Charles C.

    2011-01-01

    We are developing onboard processor (OBP) technology to streamline data acquisition on demand and explore the potential of the L-band SAR instrument onboard the proposed DESDynI mission and UAVSAR for rapid response applications. The technology would enable the observation and use of surface change data over rapidly evolving natural hazards, both as an aid to scientific understanding and to provide timely data to agencies responsible for the management and mitigation of natural disasters. We are adapting complex science algorithms for surface water extent to detect flooding, snow/water/ice classification to assist in transportation/shipping forecasts, and repeat-pass change detection to detect disturbances. We are near completion of the development of a custom FPGA board to meet the specific memory and processing needs of L-band SAR processor algorithms, and of high-speed interfaces to reformat and route raw radar data to/from the FPGA processor board. We have also developed a high-fidelity Matlab model of the SAR processor that is modularized and parameterized for ease of prototyping various SAR processor algorithms targeted for the FPGA. We will be testing the OBP and rapid response algorithms with UAVSAR data to determine the fidelity of the products.
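    The onboard FPGA algorithms themselves are not given in the abstract; a generic repeat-pass change-detection sketch using the standard log-ratio of two co-registered SAR amplitude images (the threshold value is illustrative):

```python
import numpy as np

def log_ratio_change(amp_before, amp_after, threshold=1.0):
    """Flag changed pixels where the absolute log-ratio of two
    co-registered amplitude images exceeds a threshold."""
    eps = 1e-12                        # guard against division by zero
    lr = np.log((amp_after + eps) / (amp_before + eps))
    return np.abs(lr) > threshold

before = np.ones((4, 4))
after = np.ones((4, 4))
after[1:3, 1:3] = 10.0                 # simulated disturbance
mask = log_ratio_change(before, after)
```

    The log-ratio is preferred over a simple difference for SAR because speckle noise is multiplicative, so the ratio statistic is independent of the local backscatter level.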

  5. Water surface capturing by image processing

    Science.gov (United States)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  6. Automatic processing, analysis, and recognition of images

    Science.gov (United States)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing the object image as a collection of pixels of various colours and on consecutive automatic painting of distinct parts of the image. The A&CC address such directions as: 1) image processing, 2) image feature extraction, 3) image analysis and some others, in any sequence and combination. The A&CC allow one to obtain various geometrical and statistical parameters of the object image and its parts. Additional possibilities of A&CC usage involve artificial neural network technologies. We believe that the A&CC can be used in creating systems of testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in creating new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid and of ensembles of particles, on decoding of interferometric images, on digitization of paper diagrams of electrical signals, on text recognition, on noise elimination and image filtering, on analysis of astronomical images and air photography, and on object detection.

  7. Image processing and communications challenges 5

    CERN Document Server

    2014-01-01

    This textbook collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts. Part I deals with image processing. A comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered. In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems. There are theoretical aspects as well as application papers.

  8. The method of parallel-hierarchical transformation for rapid recognition of dynamic images using GPGPU technology

    Science.gov (United States)

    Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura

    2016-09-01

    The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformations are based on a cluster CPU- and GPU-oriented hardware platform. Mathematical models of training of the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most relevant to problems of organizing high-performance computation over very large arrays of information designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the ability to work with images of very large dimensions, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.

  9. Rapid process for manufacturing of aluminum nitride powder

    Energy Technology Data Exchange (ETDEWEB)

    Weimer, A.W.; Cochran, G.A.; Eisman, G.A.; Henley, J.P.; Hook, B.D.; Mills, L.K. [Dow Chemical Co., Midland, MI (United States). Ceramics and Advanced Materials Research; Guiton, T.A.; Knudsen, A.K.; Nicholas, R.N.; Volmering, J.E.; Moore, W.G. [Dow Chemical Co., Midland, MI (United States). Advanced Ceramics Lab.

    1994-01-01

    A rapid, direct nitridation process for the manufacture of sinterable aluminum nitride (AlN) powder was developed at the pilot scale. Atomized aluminum metal and nitrogen gas were heated and reacted rapidly to synthesize AlN while they passed through the reaction zone of a transport flow reactor. The heated walls of the reactor simultaneously initiated the reaction and removed the generated heat to control the exotherm. Several variations of the process were required to achieve high conversion and reduce wall deposition of the product. The fine AlN powder produced did not require a postreaction grinding step to reduce particle size. However, a secondary heat treatment, following a mild milling step to expose fresh surface, was necessary to ensure complete conversion of the aluminum. In some instances, a final air classification step to remove large particles was necessary to promote densification by pressureless sintering. The AlN powder produced was pressureless sintered with 3 wt% yttria to fabricate fully dense parts which exhibited high thermal conductivity. The powder was shown to be less sinterable than commercially available carbothermally produced powders.

  10. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis.

    Science.gov (United States)

    Markiewicz, P J; Thielemans, K; Schott, J M; Atkinson, D; Arridge, S R; Hutton, B F; Ourselin, S

    2016-07-07

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of (18)F-florbetapir using the Siemens Biograph mMR scanner.
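    The bootstrap idea described above can be sketched without the GPU machinery: resample the list-mode events with replacement and recompute the statistic of interest per realisation (toy data, not the authors' implementation):

```python
import numpy as np

def bootstrap_statistic(events, stat_fn, n_boot=100, seed=0):
    """Estimate mean and standard deviation of a statistic over bootstrap
    realisations of list-mode events (resampling with replacement)."""
    rng = np.random.default_rng(seed)
    n = len(events)
    stats = np.array([stat_fn(events[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return stats.mean(), stats.std(ddof=1)

# toy list-mode stream: one detector-bin index per event
events = np.random.default_rng(1).integers(0, 10, size=5000)
# statistic of interest: counts landing in bin 3
mean_counts, sd = bootstrap_statistic(
    events, lambda e: np.bincount(e, minlength=10)[3])
```

    In the real pipeline each realisation would feed a full reconstruction, so the spread of any downstream image statistic (e.g. a regional SUVR) can be estimated the same way.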

  11. Image processing for cameras with fiber bundle image relay.

    Science.gov (United States)

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  12. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  13. Dopamine transporter imaging in rapid eye movement sleep behavior disorder

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yu Kyeong; Yoon, In Young; Kim, Jong Min; Jeong, Seok Hoon; Kim, Ji Sun; Lee, Byung Chul; Lee, Won Woo; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2007-07-01

    The pathogenesis of rapid eye movement (REM) sleep behavior disorder (RBD) is still unknown. However, involvement of the dopaminergic system in RBD has been hypothesized because of its frequent association with degenerative movement disorders such as Parkinson's disease. The purpose of this study was to examine the extent and pattern of loss of dopamine transporter in RBD using FP-CIT SPECT. Fourteen patients with idiopathic RBD (mean age:665 yrs, M:F=10:3) participated in this study. Polysomnography confirmed loss of REM atonia and determined RBD severity by the amount of tonic/phasic muscle activity during REM sleep in all cases. For comparison with RBD, 14 patients with early idiopathic Parkinson's disease rated as Hoehn and Yahr stage 1 (IPD) and 12 healthy controls were also selected. All participants underwent single-photon emission computed tomography (SPECT) imaging 3 hours after injection of [123I]FP-CIT. Regions of interest were drawn on bilateral caudate and putamen, whole striatum and occipital cortex. Specific binding for dopamine transporters (DAT) was calculated using the region to occipital uptake ratio based on the transient equilibrium method. The overall mean of DAT density in the striatum was lower in the RBD group than in controls, and higher than in the IPD group. However, DAT density in most individual RBD patients was still within the normal range, and total striatal DAT density was not correlated with severity of RBD. Meanwhile, the caudate to putamen uptake ratio (C/P ratio) in the RBD group was non-significantly higher than in healthy controls. Nevertheless, the C/P ratio within the RBD group was inversely correlated with RBD severity. Our study suggested that nigrostriatal dopaminergic degeneration could be a part of the pathogenesis of RBD, but is not essential for the development of RBD. Further longitudinal evaluation of the presynaptic dopaminergic system in idiopathic RBD may provide a better understanding of RBD and associated neurodegenerative disease.
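    The region to occipital uptake ratio reduces to simple arithmetic; a sketch with illustrative uptake values (not data from the study), computing specific binding as (region - occipital)/occipital and the caudate-to-putamen ratio:

```python
def specific_binding(roi_uptake, occipital_uptake):
    """DAT-specific binding: (target - reference) / reference, with the
    occipital cortex as the non-specific reference region."""
    return (roi_uptake - occipital_uptake) / occipital_uptake

# illustrative uptake values in arbitrary units (not data from the study)
caudate, putamen, occipital = 9.0, 7.5, 3.0
bp_caudate = specific_binding(caudate, occipital)
bp_putamen = specific_binding(putamen, occipital)
cp_ratio = bp_caudate / bp_putamen    # caudate-to-putamen ratio
```

    A C/P ratio above 1 indicates relatively greater putaminal loss, the pattern typically seen early in nigrostriatal degeneration.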

  14. On some applications of diffusion processes for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Morfu, S., E-mail: smorfu@u-bourgogne.f [Laboratoire d' Electronique, Informatique et Image (LE2i), UMR Cnrs 5158, Aile des Sciences de l' Ingenieur, BP 47870, 21078 Dijon Cedex (France)

    2009-06-29

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes governed by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurred image. By contrast, anisotropic diffusion, described by the Perona and Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
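    The Perona and Malik scheme mentioned above can be sketched in a few lines; this minimal version uses an exponential edge-stopping function and periodic boundaries via np.roll (parameter values are illustrative, and the authors' combined Fisher/anisotropic algorithm is not reproduced here):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Anisotropic diffusion (Perona-Malik): smooth strongly in flat
    regions, weakly across large gradients, so edges are preserved."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # nearest-neighbour differences (periodic boundary via np.roll)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# noisy vertical step edge: diffusion should remove noise, keep the step
rng = np.random.default_rng(0)
step = np.zeros((32, 32))
step[:, 16:] = 1.0
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
```

    Because g decays with the local gradient, diffusion is suppressed across the step while the small noise gradients in the flat regions are smoothed away.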

  15. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  16. Digital image processing on a small computer system

    Science.gov (United States)

    Danielson, R.

    1981-01-01

    A minicomputer-based image processing facility provides a relatively low-cost entry point for education about image analysis applications in remote sensing. While a minicomputer has sufficient processing power to produce results quite rapidly for low volumes of small images, it does not have sufficient power to perform CPU- or I/O-bound tasks on large images. A system equipped with a display terminal is ideally suited for interactive tasks. Software procurement is a limiting factor for most end users, and software availability may well be the overriding consideration in selecting a particular hardware configuration. The hardware chosen should be selected to be compatible with the software and with concern for future expansion.

  17. Applications of Digital Image Processing 11

    Science.gov (United States)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
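    The Fourier-transform step can be illustrated with a standard FFT cross-correlation that recovers a uniform displacement between two single-exposure frames; this is a simplification of the superposition approach described above, shown for a synthetic particle field:

```python
import numpy as np

def displacement_fft(frame1, frame2):
    """Recover a uniform displacement between two frames from the peak of
    their FFT-based cross-correlation."""
    f1 = np.fft.fft2(frame1)
    f2 = np.fft.fft2(frame2)
    corr = np.real(np.fft.ifft2(np.conj(f1) * f2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak position to signed shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
frame1 = rng.random((64, 64))                   # random "particle" field
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))   # displaced by (3, 5) pixels
dy, dx = displacement_fft(frame1, frame2)
```

    Working with two separate single-exposure frames, rather than one multiple-exposure image, is what removes the directional ambiguity the abstract mentions: the correlation peak carries the sign of the shift.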

  18. NASA's Genesis and Rapid Intensification Processes (GRIP) Field Experiment

    Science.gov (United States)

    Braun, Scott A.; Kakar, Ramesh; Zipser, Edward; Heymsfield, Gerald; Albers, Cerese; Brown, Shannon; Durden, Stephen; Guimond, Stephen; Halverson, Jeffery; Heymsfield, Andrew; et al.

    2013-01-01

    In August–September 2010, NASA, NOAA, and the National Science Foundation (NSF) conducted separate but closely coordinated hurricane field campaigns, bringing to bear a combined seven aircraft with both new and mature observing technologies. NASA's Genesis and Rapid Intensification Processes (GRIP) experiment, the subject of this article, along with NOAA's Intensity Forecasting Experiment (IFEX) and NSF's Pre-Depression Investigation of Cloud-Systems in the Tropics (PREDICT) experiment, obtained unprecedented observations of the formation and intensification of tropical cyclones. The major goal of GRIP was to better understand the physical processes that control hurricane formation and intensity change, specifically the relative roles of environmental and inner-core processes. A key focus of GRIP was the application of new technologies to address this important scientific goal, including the first ever use of the unmanned Global Hawk aircraft for hurricane science operations. NASA and NOAA conducted coordinated flights to thoroughly sample the rapid intensification (RI) of Hurricanes Earl and Karl. The tri-agency aircraft teamed up to perform coordinated flights for the genesis of Hurricane Karl and Tropical Storm Matthew and the non-redevelopment of the remnants of Tropical Storm Gaston. The combined GRIP–IFEX–PREDICT datasets, along with remote sensing data from a variety of satellite platforms [Geostationary Operational Environmental Satellite (GOES), Tropical Rainfall Measuring Mission (TRMM), Aqua, Terra, CloudSat, and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)], will contribute to advancing understanding of hurricane formation and intensification. This article summarizes the GRIP experiment, the missions flown, and some preliminary findings.

  19. Thermodynamic properties of pulverized coal during rapid heating devolatilization processes

    Energy Technology Data Exchange (ETDEWEB)

    Proscia, W.M.; Freihaut, J.D. [United Technologies Research Center, E. Hartford, CT (United States); Rastogi, S.; Klinzing, G.E. [Univ. of Pittsburgh, PA (United States)]

    1994-07-01

    The thermodynamic properties of coal under conditions of rapid heating have been determined using a combination of UTRC facilities including a proprietary rapid heating rate differential thermal analyzer (RHR-DTA), a microbomb calorimeter (MBC), an entrained flow reactor (EFR), an elemental analyzer (EA), and a FT-IR. The total heat of devolatilization was measured for a HVA bituminous coal (PSOC 1451D, Pittsburgh No. 8) and a LV bituminous coal (PSOC 1516D, Lower Kittaning). For the HVA coal, the contributions of each of the following components to the overall heat of devolatilization were measured: the specific heat of coal/char during devolatilization, the heat of thermal decomposition of the coal, the specific heat capacity of tars, and the heat of vaporization of tars. Morphological characterization of coal and char samples was performed at the University of Pittsburgh using a PC-based image analysis system, BET apparatus, helium pycnometer, and mercury porosimeter. The bulk density, true density, CO{sub 2} surface area, pore volume distribution, and particle size distribution as a function of extent of reaction are reported for both the HVA and LV coal. Analyses of the data were performed to obtain the fractal dimension of the particles as well as estimates for the external surface area. The morphological data together with the thermodynamic data obtained in this investigation provide a complete database for a set of common, well characterized coal and char samples. This database can be used to improve the prediction of particle temperatures in coal devolatilization models. Such models are used both to obtain kinetic rates from fundamental studies and in predicting furnace performance with comprehensive coal combustion codes. Recommendations for heat capacity functions and heats of devolatilization for the HVA and LV coals are given. Results of sample particle temperature calculations using the recommended thermodynamic properties are provided.

  20. Perceptual processing of natural scenes at rapid rates: effects of complexity, content, and emotional arousal.

    Science.gov (United States)

    Löw, Andreas; Bradley, Margaret M; Lang, Peter J

    2013-12-01

    During rapid serial visual presentation (RSVP), the perceptual system is confronted with a rapidly changing array of sensory information demanding resolution. At rapid rates of presentation, previous studies have found an early (e.g., 150-280 ms) negativity over occipital sensors that is enhanced when emotional, as compared with neutral, pictures are viewed, suggesting facilitated perception. In the present study, we explored how picture composition and the presence of people in the image affect perceptual processing of pictures of natural scenes. Using RSVP, pictures that differed in perceptual composition (figure-ground or scenes), content (presence of people or not), and emotional content (emotionally arousing or neutral) were presented in a continuous stream for 330 ms each with no intertrial interval. In both subject and picture analyses, all three variables affected the amplitude of occipital negativity, with the greatest enhancement for figure-ground compositions (as compared with scenes), irrespective of content and emotional arousal, supporting an interpretation that ease of perceptual processing is associated with enhanced occipital negativity. Viewing emotional pictures prompted enhanced negativity only for pictures that depicted people, suggesting that specific features of emotionally arousing images are associated with facilitated perceptual processing, rather than all emotional content.

  1. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana

    2007-01-01

    Full Text Available It's often quoted that celebrity endorsement advertising has been recognized as "an ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable reactions from consumers, that is, a higher level of attention to advertising messages, better recall of the message and brand name, and more favorable evaluation and purchasing intentions for the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Therefore marketers, led by the belief that celebrities are effective ambassadors for building a positive brand or company image and improving competitive position, invest enormous amounts of money in signing contracts with them. However, this strategy doesn't guarantee success in every case, because it's necessary to take many factors into account. This paper summarizes the results of previous research in this field, along with recommendations for a more effective use of this kind of advertising.

  2. Rapid Texture Optimization of Three-Dimensional Urban Model Based on Oblique Images.

    Science.gov (United States)

    Zhang, Weilong; Li, Ming; Guo, Bingxuan; Li, Deren; Guo, Ge

    2017-04-20

    Seamless texture mapping is one of the key technologies for photorealistic 3D texture reconstruction. In this paper, a method of rapid texture optimization of 3D urban reconstruction based on oblique images is proposed, aimed at the texture fragments, seams, and color inconsistency that arise in urban 3D texture mapping based on low-altitude oblique images. First, we implement radiation correction on the experimental images with a radiation processing algorithm. Then, an efficient occlusion detection algorithm based on OpenGL is proposed according to the mapping relation between the terrain triangular mesh surface and the images, to implement occlusion detection of the visible texture on the triangular facets as well as create a list of visible images. Finally, a texture clustering algorithm based on Markov Random Fields is put forward, utilizing the inherent attributes of the images, and the energy function is minimized by graph cuts. The experimental results show that the method is capable of reducing texture fragments, seams, and color inconsistency in the reconstructed 3D texture model.

  3. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one can not achieve...... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  4. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one can not achieve...... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  5. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  6. Lung Cancer Detection Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mokhled S. AL-TARAWNEH

    2012-08-01

    Full Text Available Recently, image processing techniques have been widely used in several medical areas for image improvement in earlier detection and treatment stages, where the time factor is very important for discovering abnormality issues in target images, especially in various cancer tumours such as lung cancer, breast cancer, etc. Image quality and accuracy are the core factors of this research; image quality assessment as well as improvement depend on the enhancement stage, where low-level pre-processing techniques based on a Gabor filter within Gaussian rules are used. Following the segmentation principles, an enhanced region of the object of interest that is used as a basic foundation of feature extraction is obtained. Relying on general features, a normality comparison is made. In this research, the main detected features for accurate image comparison are pixel percentage and mask labelling.
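    The Gabor-based enhancement stage can be sketched generically; the kernel below is the textbook Gaussian-envelope Gabor (parameter values are illustrative, not those used in the study):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier of
    wavelength lam at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def filter2d(img, k):
    """Sliding-window correlation with edge padding; equivalent to
    convolution here because the psi=0 kernel is symmetric."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# vertical stripes whose wavelength matches the kernel give a strong response
img = np.cos(2 * np.pi * np.arange(32) / 6.0)[None, :] * np.ones((32, 1))
resp = filter2d(img, gabor_kernel())
```

    In an enhancement pipeline, a bank of such kernels at several orientations and wavelengths would be applied and the responses combined before segmentation.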

  7. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  8. The NPS Virtual Thermal Image processing model

    OpenAIRE

    Kenter, Yucel.

    2001-01-01

    A new virtual thermal image-processing model that has been developed at the Naval Postgraduate School is introduced in this thesis. This visualization program is based on an earlier work, the Visibility MRTD model, which is focused on predicting the minimum resolvable temperature difference (MRTD). The MRTD is a standard performance measure for forward-looking infrared (FLIR) imaging systems. It takes into account thermal imaging system modeling concerns, such as modulation transfer functions...

  9. Digital Image Processing in Private Industry.

    Science.gov (United States)

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  10. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
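The pixel-level pattern classes named in this abstract can be illustrated with binary morphology. A simplified sketch using 3x3 erosion (not the authors' code; it keeps only the core/edge distinction, while 'patch' and 'perforated' require the fuller morphological analysis):

```python
import numpy as np

def erode(binary):
    """3x3 binary erosion with zero (background) padding."""
    p = np.pad(binary, 1, constant_values=0)
    out = np.ones(binary.shape, dtype=bool)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out &= p[di:di + binary.shape[0], dj:dj + binary.shape[1]].astype(bool)
    return out

def classify(binary):
    """Label each foreground pixel: 2 = 'core' (fully interior),
    1 = 'edge' (foreground touching background); background stays 0."""
    binary = binary.astype(bool)
    core = erode(binary)
    labels = np.zeros(binary.shape, dtype=int)
    labels[binary & ~core] = 1
    labels[core] = 2
    return labels

land = np.zeros((7, 7), dtype=int)
land[1:6, 1:6] = 1               # a 5x5 forest patch on a binary land-cover map
labels = classify(land)
```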

  11. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry

  12. Decreasing pressure ulcer risk during hospital procedures: a rapid process improvement workshop.

    Science.gov (United States)

    Haugen, Vicki; Pechacek, Judy; Maher, Travis; Wilde, Joy; Kula, Larry; Powell, Julie

    2011-01-01

    A 300-bed acute care community hospital used a 2-day "Rapid Process Improvement Workshop" to identify factors contributing to facility-acquired pressure ulcers (PUs). The workshop included key stakeholders from all procedural areas providing inpatient services and used standard components of rapid process improvement: data analysis, process flow charting, factor identification, and action plan development. On day 1, the discovery process revealed increased PU risk related to prolonged immobility when transporting patients for procedures, during imaging studies, and during the perioperative period. On day 2, action plans were developed that included communication of PU risk or presence of an ulcer, measures to shorten procedure times when clinically appropriate, implementation of prevention techniques during procedures, and recommendations for mattress upgrades. In addition, educational programs about PU prevention were developed, schedules for presentations were established, and an online PowerPoint presentation was completed and placed in a learning management system module. Finally, our nursing department amended a hospital-wide handoff communication tool to include skin status and PU risk level. This tool is used in all patient handoff situations, including nonnursing departments such as radiology. Patients deemed at risk for ulcers were provided "Braden Risk" armbands to enhance interdepartmental awareness.

  13. Checking Fits With Digital Image Processing

    Science.gov (United States)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  14. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  15. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    Science.gov (United States)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  16. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance of shadow and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadow. It examines shadow-removal algorithms based on integration, on the illumination surface, and on texture, introducing their working principles and implementation methods, and shows through tests that they can process shadows effectively.

  17. Images from the Mind: BCI image reconstruction based on Rapid Serial Visual Presentations of polygon primitives

    Directory of Open Access Journals (Sweden)

    Luís F Seoane

    2015-04-01

    Full Text Available We provide a proof of concept for an EEG-based reconstruction of a visual image which is on a user's mind. Our approach is based on the Rapid Serial Visual Presentation (RSVP) of polygon primitives and Brain-Computer Interface (BCI) technology. In an experimental setup, subjects were presented bursts of polygons: some of them contributed to building a target image (because they matched the shape and/or color of the target) while some of them did not. The presentation of the contributing polygons triggered attention-related EEG patterns. These Event Related Potentials (ERPs) could be determined using BCI classification and could be matched to the stimuli that elicited them. These stimuli (i.e. the ERP-correlated polygons) were accumulated in the display until a satisfactory reconstruction of the target image was reached. As more polygons were accumulated, finer visual details were attained, resulting in more challenging classification tasks. In our experiments, we observe an average classification accuracy of around 75%. An in-depth investigation suggests that many of the misclassifications were not misinterpretations of the BCI concerning the users' intent, but rather were caused by ambiguous polygons that could contribute to reconstructing several different images. When we put our BCI image reconstruction in perspective with other RSVP BCI paradigms, there is large room for improvement in both speed and accuracy. These results invite us to be optimistic. They open a plethora of possibilities to explore non-invasive BCIs for image reconstruction in both healthy and impaired subjects and, accordingly, suggest interesting recreational and clinical applications.

  18. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different ...

  19. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher

    2017-01-01

    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  20. Early Skin Tumor Detection from Microscopic Images through Image Processing

    Directory of Open Access Journals (Sweden)

    AYESHA AMIR SIDDIQI

    2017-10-01

    Full Text Available This research was done to provide an appropriate detection technique for skin tumors. The work was done using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they form a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done on the cellular scale for images of skin. This research introduces a few checks for the early detection of skin tumors using microscopic images, after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient and appropriate for tumor detection: the algorithm applied provides promising results in less time, with accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user-friendly.

  1. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  2. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  3. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of the one-pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image's CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real signature test target, and conventional methods for the more linear part (displaying).
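The 2×2 minority-pixel principle behind the Corner-Point criterion can be sketched as follows (a hypothetical illustration; the direction names and the handling of ambiguous blocks are assumptions, not the paper's exact definition):

```python
import numpy as np

DIRECTIONS = {(0, 0): "NW", (0, 1): "NE", (1, 0): "SW", (1, 1): "SE"}

def corner_point(block):
    """Return the direction of the single minority pixel in a 2x2 binary
    block, or None if the block has no unique minority value."""
    vals, counts = np.unique(block, return_counts=True)
    if len(vals) != 2 or 1 not in counts:
        return None                      # uniform or 2-vs-2 block: no CP
    minority = vals[np.argmin(counts)]
    (i,), (j,) = np.where(block == minority)
    return DIRECTIONS[(i, j)]

def cp_map(image):
    """Corner-point transformation: CP direction for every 2x2 block."""
    h, w = image.shape
    return [[corner_point(image[i:i + 2, j:j + 2]) for j in range(w - 1)]
            for i in range(h - 1)]

demo = np.array([[1, 0],
                 [0, 0]])
cp = corner_point(demo)
```

Comparing such CP maps of a degraded image against the reference image's map gives a localized correct-resolution count in the spirit of the criterion.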

  4. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images and extracts its features with a new shearlet transform.

  5. Rapid determination of biogenic amines in cooked beef using hyperspectral imaging with sparse representation algorithm

    Science.gov (United States)

    Yang, Dong; Lu, Anxiang; Ren, Dong; Wang, Jihua

    2017-11-01

    This study explored the feasibility of rapid detection of biogenic amines (BAs) in cooked beef during the storage process using hyperspectral imaging combined with a sparse representation (SR) algorithm. The hyperspectral images of samples were collected in the two spectral ranges of 400-1000 nm and 1000-1800 nm separately. The spectral data were reduced in dimensionality by the SR and principal component analysis (PCA) algorithms, and then integrated with the least squares support vector machine (LS-SVM) to build SR-LS-SVM and PC-LS-SVM models for the prediction of BAs values in cooked beef. The results showed that the SR-LS-SVM model exhibited the best predictive ability, with a determination coefficient (RP2) of 0.943 and root mean square error (RMSEP) of 1.206 in the 400-1000 nm range of the prediction set. The SR and PCA algorithms were further combined to establish the best SR-PC-LS-SVM model for BAs prediction, which had a high RP2 of 0.969 and a low RMSEP of 1.039 in the 400-1000 nm region. A visual map of the BAs was generated using the best SR-PC-LS-SVM model with imaging process algorithms, which could be used to observe the changes of BAs in cooked beef more intuitively. The study demonstrated that hyperspectral imaging combined with sparse representation was able to effectively detect BAs values in cooked beef during storage, and the SR-PC-LS-SVM model has potential for rapid and accurate determination of freshness indexes in other meat and meat products.
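The modeling chain described here, dimensionality reduction followed by LS-SVM regression, can be sketched with a linear-kernel LS-SVM solved in closed form. This is a generic illustration on synthetic data, not the authors' pipeline; the regularisation constant and component count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 40 samples x 120 bands driven by 3 latent factors
scores = rng.normal(size=(40, 3))
loadings = rng.normal(size=(3, 120))
X = scores @ loadings + 0.05 * rng.normal(size=(40, 120))
y = scores @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=40)

# --- Dimensionality reduction: PCA via SVD, keep top-k component scores ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
Z = Xc @ Vt[:k].T                       # reduced features (40 x 5)

# --- LS-SVM regression, linear kernel, closed-form dual solution ---
gamma = 100.0                           # regularisation constant (assumed)
K = Z @ Z.T                             # kernel (Gram) matrix
n = y.size
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0                          # bias constraint row
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b                   # in-sample predictions
```

The same dual system applies unchanged with a nonlinear (e.g. RBF) kernel in place of `Z @ Z.T`.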

  6. Quantification of chromatin condensation level by image processing.

    Science.gov (United States)

    Irianto, Jerome; Lee, David A; Knight, Martin M

    2014-03-01

    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated, unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
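The Sobel-based quantification can be sketched as an edge-density measure: a hypothetical chromatin condensation parameter computed as the mean Sobel gradient magnitude over the nucleus area (the published CCP definition may differ in normalisation):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, k):
    """Same-size 3x3 convolution with edge padding."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def condensation_parameter(nucleus):
    """Edge density inside the nucleus: mean Sobel gradient magnitude
    per nucleus pixel. Condensed chromatin has sharp foci, hence more edges."""
    gx = conv2_same(nucleus, SOBEL_X)
    gy = conv2_same(nucleus, SOBEL_Y)
    mag = np.hypot(gx, gy)
    mask = nucleus > 0
    return mag[mask].sum() / mask.sum()

# Toy nuclei: a condensed nucleus has bright foci, a decondensed one is smooth
smooth = np.ones((40, 40)) * 0.5
condensed = smooth.copy()
condensed[10:13, 10:13] = 1.0    # bright heterochromatin-like focus
condensed[25:28, 30:33] = 1.0

ccp_smooth = condensation_parameter(smooth)
ccp_condensed = condensation_parameter(condensed)
```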

  7. Optical scatter imaging: a microscopic modality for the rapid morphological assay of living cells

    Science.gov (United States)

    Boustany, Nada N.

    2007-02-01

    Tumors derived from epithelial cells comprise the majority of human tumors and their growth results from the accumulation of multiple mutations affecting cellular processes critical for tissue homeostasis, including cell proliferation and cell death. To understand these processes and address the complexity of cancer cell function, multiple cellular responses to different experimental conditions and specific genetic mutations must be analyzed. Fundamental to this endeavor is the development of rapid cellular assays in genetically defined cells, and in particular, the development of optical imaging methods that allow dynamic observation and real-time monitoring of cellular processes. In this context, we are developing an optical scatter imaging technology that is intended to bridge the gap between light and electron microscopy by rapidly providing morphometric information about the relative size and shape of non-spherical organelles, with sub-wavelength resolution. Our goal is to complement current microscopy techniques used to study cells in-vitro, especially in long-term time-lapse studies of living cells, where exogenous labels can be toxic, and electron microscopy will destroy the sample. The optical measurements are based on Fourier spatial filtering in a standard microscope, and could ultimately be incorporated into existing high-throughput diagnostic platforms for cancer cell research and histopathology of neoplastic tissue arrays. Using an engineered epithelial cell model of tumor formation, we are currently studying how organelle structure and function are altered by defined genetic mutations affecting the propensity for cell death and oncogenic potential, and by environmental conditions promoting tumor growth. This talk will describe our optical scatter imaging technology and present results from our studies on apoptosis, and the function of BCL-2 family proteins.

  8. Intraoperative Molecular Imaging for Rapid Assessment of Tumor Margins

    Science.gov (United States)

    2011-09-01

    approach in animal models using the MRI-FMT imaging system (Task 5). A description of the primary accomplishments and ongoing efforts follows... MRI-guided fluorescence molecular tomography (FMT) of two fluorescent probes in... administration, mice were imaged for an hour at approximately two minutes per frame using an MR-coupled FMT system. The imaging system is a spectrometer

  9. Rapid susceptibility testing and microcolony analysis of Candida spp. cultured and imaged on porous aluminum oxide.

    Science.gov (United States)

    Ingham, Colin J; Boonstra, Sjoukje; Levels, Suzanne; de Lange, Marit; Meis, Jacques F; Schneeberger, Peter M

    2012-01-01

    Acquired resistance to antifungal agents now supports the introduction of susceptibility testing for species-drug combinations for which this was previously thought unnecessary. For pathogenic yeasts, conventional phenotypic testing needs at least 24 h. Culture on a porous aluminum oxide (PAO) support combined with microscopy offers a route to more rapid results. Microcolonies of Candida species grown on PAO were stained with the fluorogenic dyes Fun-1 and Calcofluor White and then imaged by fluorescence microscopy. Images were captured by a charge-coupled device camera and processed by publicly available software. By this method, the growth of yeasts could be detected and quantified within 2 h. Microcolony imaging was then used to assess the susceptibility of the yeasts to amphotericin B, anidulafungin and caspofungin (3.5 h culture), and voriconazole and itraconazole (7 h culture). Overall, the results showed good agreement with EUCAST (86.5% agreement; n = 170) and E-test (85.9% agreement; n = 170). The closest agreement to standard tests was found when testing susceptibility to amphotericin B and echinocandins (88.2 to 91.2%) and the least good for the triazoles (79.4 to 82.4%). Furthermore, large datasets on population variation could be rapidly obtained. An analysis of microcolonies revealed subtle effects of antimycotics on resistant strains and below the MIC of sensitive strains, particularly an increase in population heterogeneity and cell density-dependent effects of triazoles. Additionally, the method could be adapted to strain identification via germ tube extension. We suggest PAO culture is a rapid and versatile method that may be usefully adapted to clinical mycology and has research applications.

  10. Rapid susceptibility testing and microcolony analysis of Candida spp. cultured and imaged on porous aluminum oxide.

    Directory of Open Access Journals (Sweden)

    Colin J Ingham

    Full Text Available BACKGROUND: Acquired resistance to antifungal agents now supports the introduction of susceptibility testing for species-drug combinations for which this was previously thought unnecessary. For pathogenic yeasts, conventional phenotypic testing needs at least 24 h. Culture on a porous aluminum oxide (PAO) support combined with microscopy offers a route to more rapid results. METHODS: Microcolonies of Candida species grown on PAO were stained with the fluorogenic dyes Fun-1 and Calcofluor White and then imaged by fluorescence microscopy. Images were captured by a charge-coupled device camera and processed by publicly available software. By this method, the growth of yeasts could be detected and quantified within 2 h. Microcolony imaging was then used to assess the susceptibility of the yeasts to amphotericin B, anidulafungin and caspofungin (3.5 h culture), and voriconazole and itraconazole (7 h culture). SIGNIFICANCE: Overall, the results showed good agreement with EUCAST (86.5% agreement; n = 170) and E-test (85.9% agreement; n = 170). The closest agreement to standard tests was found when testing susceptibility to amphotericin B and echinocandins (88.2 to 91.2%) and the least good for the triazoles (79.4 to 82.4%). Furthermore, large datasets on population variation could be rapidly obtained. An analysis of microcolonies revealed subtle effects of antimycotics on resistant strains and below the MIC of sensitive strains, particularly an increase in population heterogeneity and cell density-dependent effects of triazoles. Additionally, the method could be adapted to strain identification via germ tube extension. We suggest PAO culture is a rapid and versatile method that may be usefully adapted to clinical mycology and has research applications.

  11. Rapid Acquisition Imaging Spectrograph (RAISE) Renewal Proposal Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The optical design of RAISE is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance...

  12. Muscle fiber diameter assessment in cleft lip using image processing.

    Science.gov (United States)

    Khan, M F J; Little, J; Abelli, L; Mossey, P A; Autelitano, L; Nag, T C; Rubini, M

    2017-10-04

    To pilot an investigation of muscle fiber diameter (MFD) on the medial and lateral sides of the cleft in 18 infants with cleft lip with or without cleft palate (CL/P) using image processing. Formalin-fixed paraffin-embedded (FFPE) tissue samples from the medial and lateral sides of the cleft were analyzed for MFD using an image-processing program (ImageJ). For within-case comparisons, a paired Student's t test was performed; for comparisons between classes, an unpaired t test was used. Image processing enabled rapid measurement of MFD, with the majority of fibers showing diameters between 6 and 11 μm. There was no significant difference in mean MFD between the medial and lateral sides, or between CL and CLP. However, we found a significant difference on the medial side (p = .032) between males and females. Image processing on FFPE tissues allowed easy quantification of MFD; the finding of a smaller MFD on the medial side in males suggests possible differences in the orbicularis oris (OO) muscle between the two sexes in CL that warrant replication with a larger number of cases. Moreover, this finding can aid subclinical phenotyping and, potentially, the restoration of the anatomy and function of the upper lip. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. All rights reserved.
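The within-case comparison relies on a paired Student's t test. A minimal sketch of the statistic (the MFD values below are invented for illustration, not the study's data):

```python
import numpy as np

def paired_t(a, b):
    """Paired Student's t statistic and degrees of freedom for two
    within-case measurement series (e.g. medial vs lateral MFD)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

medial = [8.1, 7.4, 9.0, 6.8, 7.9, 8.3]    # hypothetical MFD values (um)
lateral = [8.0, 7.6, 8.9, 7.1, 8.2, 8.4]
t_stat, dof = paired_t(medial, lateral)
```

The t statistic would then be compared against the Student's t distribution with `dof` degrees of freedom to obtain the p-value.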

  13. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper describes work on traffic analysis and control to date. It presents an approach to regulating traffic using image processing and MATLAB. Computed images are compared with reference images of the street in order to determine the traffic level percentage and to set the timing of the traffic signal accordingly, reducing stoppage at traffic lights. The concept addresses real-life scenarios in the streets by enriching traffic lights with image receivers such as HD cameras and image processors. The input is imported into MATLAB and used to calculate the traffic on the roads; the results are then used to adjust the traffic light timings on a particular street, offering, relative to other similar proposals, the added value of solving a real, large-scale instance.
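The reference-frame comparison described above can be sketched as a pixel-difference occupancy estimate mapped to a green-light duration (a toy illustration; the threshold and timing range are assumptions, not the paper's values):

```python
import numpy as np

def traffic_level(reference, current, threshold=0.2):
    """Fraction of road pixels that differ from the empty-road reference."""
    diff = np.abs(current.astype(float) - reference.astype(float))
    return float((diff > threshold).mean())

def green_time(level, t_min=10, t_max=60):
    """Linearly scale green-light duration (seconds) with the occupancy estimate."""
    return t_min + (t_max - t_min) * level

empty = np.zeros((60, 80))          # reference frame: empty road
frame = empty.copy()
frame[20:40, 10:50] = 1.0           # vehicles occupy part of the road

level = traffic_level(empty, frame)
duration = green_time(level)
```

A real system would first register the frames and mask out non-road regions before differencing.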

  14. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  15. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and disease detection. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cases are recognized from the extracted data. To evaluate the performance of the proposed system, microarray databases comprising breast cancer, myeloid leukemia, and lymphoma cases from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A brief review of digital image processing

    Science.gov (United States)

    Billingsley, F. C.

    1975-01-01

    The review is presented with particular reference to Skylab S-192 and Landsat MSS imagery. Attention is given to rectification (calibration) processing with emphasis on geometric correction of image distortions. Image enhancement techniques (e.g., the use of high pass digital filters to eliminate gross shading to allow emphasis of the fine detail) are described along with data analysis and system considerations (software philosophy).

  17. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types is the first step. In this PCB inspection system, the inspection algorithm focuses mainly on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, must be considered to ensure an image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of the PCB manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. To minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the cost of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs. In this paper, a method to identify defects in natural PCB images is presented, and the associated practical issues are addressed using software tools. Some of the major types of single-layer PCB defects are pattern cut, pinhole, pattern short, and nick. Defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects, even in low-quality images.

  18. JMorph: Software for performing rapid morphometric measurements on digital images of fossil assemblages

    Science.gov (United States)

    Lelièvre, Peter G.; Grey, Melissa

    2017-08-01

    Quantitative morphometric analyses of form are widely used in palaeontology, especially for taxonomic and evolutionary research. These analyses can involve several measurements performed on hundreds or even thousands of samples. Performing measurements of size and shape on large assemblages of macro- or microfossil samples is generally infeasible or impossible with traditional instruments such as vernier calipers. Instead, digital image processing software is required to perform measurements via suitable digital images of samples. Many software packages exist for morphometric analyses but there is not much available for the integral stage of data collection, particularly for the measurement of the outlines of samples. Some software exists to automatically detect the outline of a fossil sample from a digital image. However, automatic outline detection methods may perform inadequately when samples have incomplete outlines or images contain poor contrast between the sample and staging background. Hence, a manual digitization approach may be the only option. We are not aware of any software packages that are designed specifically for efficient digital measurement of fossil assemblages with numerous samples, especially for the purposes of manual outline analysis. Throughout several previous studies, we have developed a new software tool, JMorph, that is custom-built for that task. JMorph provides the means to perform many different types of measurements, which we describe in this manuscript. We focus on JMorph's ability to rapidly and accurately digitize the outlines of fossils. JMorph is freely available from the authors.

  19. Iterative elimination algorithm for thermal image processing

    Directory of Open Access Journals (Sweden)

    A. H. Alkali

    2014-08-01

    Full Text Available Segmentation is employed in everyday image processing in order to remove unwanted objects present in the image. There are scenarios where segmentation alone does not do the intended job automatically. In such cases, subjective means are required to eliminate the remnants, which is time consuming, especially when multiple images are involved, and is not feasible for real-time applications. The problem is compounded in thermal imaging, where foreground and background objects can have similar thermal distributions, making it impossible for straight segmentation to distinguish between the two. In this study, a real-time Iterative Elimination Algorithm (IEA) was developed, and it was shown that false foreground was removed in thermal images where segmentation failed to do so. The algorithm was tested on thermal images that were segmented using inter-variance thresholding. The thermal images contained human subjects as foreground, with some background objects having thermal distributions similar to the subject. Informed consent was obtained from the subject who voluntarily took part in the study. The IEA was tested only on thermal images and failed when a false background object was connected to the foreground after segmentation.

  20. Support Routines for In Situ Image Processing

    Science.gov (United States)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ mission data, understanding the appropriate image metadata fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look like it was taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  1. Technical Note: Rapid prototyping of 3D grid arrays for image guided therapy quality assurance

    Energy Technology Data Exchange (ETDEWEB)

    Kittle, David; Holshouser, Barbara; Slater, James M.; Guenther, Bob D.; Pitsianis, Nikos P.; Pearlstein, Robert D. [Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 (United States); Department of Radiology, Loma Linda University Medical Center, Loma Linda, California 92354 (United States); Department of Radiation Medicine, Loma Linda University, Loma Linda, California 92354 (United States); Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Department of Electrical and Computer Engineering and Department of Computer Science, Duke University, Durham, North Carolina 27708 (United States); Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 and Department of Surgery-Neurosurgery, Duke University and Medical Center, Durham, North Carolina 27710 (United States)

    2008-12-15

    Three-dimensional grid phantoms offer a number of advantages for measuring imaging-related spatial inaccuracies for image guided surgery and radiotherapy. The authors examined the use of rapid prototyping technology for directly fabricating 3D grid phantoms from CAD drawings. We tested three different fabrication process materials: photopolymer jet with acrylic resin (PJ/AR), selective laser sintering with polyamide (SLS/P), and fused deposition modeling with acrylonitrile butadiene styrene (FDM/ABS). The test objects consisted of rectangular arrays of control points formed by the intersections of posts and struts (2 mm rectangular cross section) spaced 8 mm apart in the x, y, and z directions. The PJ/AR phantom expanded after immersion in water, which resulted in permanent warping of the structure. The surface of the FDM/ABS grid exhibited a regular pattern of depressions and ridges from the extrusion process. SLS/P showed the best combination of build accuracy, surface finish, and stability. Based on these findings, a grid phantom for assessing machine-dependent and frame-induced MR spatial distortions was fabricated to be used for quality assurance in stereotactic neurosurgical and radiotherapy procedures. The spatial uniformity of the SLS/P grid control point array was determined by CT imaging (0.6 × 0.6 × 0.625 mm³ resolution) and found suitable for the application, with over 97.5% of the control points located within 0.3 mm of the position specified in the CAD drawing and none of the points off by more than 0.4 mm. Rapid prototyping is a flexible and cost-effective alternative for the development of customized grid phantoms for medical physics quality assurance.

  2. A representation for mammographic image processing.

    Science.gov (United States)

    Highnam, R; Brady, M; Shepstone, B

    1996-03-01

    Mammographic image analysis is typically performed using standard, general-purpose algorithms. We note the dangers of this approach and show that an alternative physics-model-based approach can be developed to calibrate the mammographic imaging process. This enables us to obtain, at each pixel, a quantitative measure of the breast tissue. The measure we use is h(int) and this represents the thickness of 'interesting' (non-fat) tissue between the pixel and the X-ray source. The thicknesses over the image constitute what we term the h(int) representation, and it can most usefully be regarded as a surface that conveys information about the anatomy of the breast. The representation allows image enhancement through removing the effects of degrading factors, and also effective image normalization since all changes in the image due to variations in the imaging conditions have been removed. Furthermore, the h(int) representation gives us a basis upon which to build object models and to reason about breast anatomy. We use this ability to choose features that are robust to breast compression and variations in breast composition. In this paper we describe the h(int) representation, show how it can be computed, and then illustrate how it can be applied to a variety of mammographic image processing tasks. The breast thickness turns out to be a key parameter in the computation of h(int), but it is not normally recorded. We show how the breast thickness can be estimated from an image, and examine the sensitivity of h(int) to this estimate. We then show how we can simulate any projective X-ray examination and can simulate the appearance of anatomical structures within the breast. We follow this with a comparison between the h(int) representation and conventional representations with respect to invariance to imaging conditions and the surrounding tissue. 
Initial results indicate that image analysis is far more robust when specific consideration is taken of the imaging process and

  3. Dictionary of computer vision and image processing

    CERN Document Server

    Fisher, Robert B; Dawson-Howe, Kenneth; Fitzgibbon, Andrew; Robertson, Craig; Trucco, Emanuele; Williams, Christopher K I

    2013-01-01

    Written by leading researchers, the 2nd Edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions with short examples or mathematical precision where necessary for clarity that ultimately makes it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, through to early career researchers to help build u

  4. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  5. Intraoperative Molecular Imaging for Rapid Assessment of Tumor Margins

    Science.gov (United States)

    2010-09-01

    determine the depth of embedded tumor fragments in the excise tissue or surgical cavity. Pilot animal data with the Licor IRDye800CW-2DG imaging agent in...and the tumor immediately removed with adjacent normal tissue. All tissue were soaked for 5 – 20 minutes in a solution of Licor IRDye800CW-2DG and... Licor IRDye800CW-2DG, showing a cross section of the tumor mass. These images show a significant amount of non-specific uptake of the probe in

  6. Analysis of Rapid Acquisition Processes to Fulfill Future Urgent Needs

    Science.gov (United States)

    2015-12-01

    UAVs: the DOD defines UAVs as "powered aerial vehicles that do not carry a human operator; use aerodynamic forces to provide lift; can be autonomously..." ... UAV Background ... Rapid Acquisition in UAV

  7. Monitoring of rapid sand filters using an acoustic imaging technique

    NARCIS (Netherlands)

    Allouche, N.; Simons, D.G.; Rietveld, L.C.

    2012-01-01

    A novel instrument is developed to acoustically image sand filters used for water treatment and monitor their performance. The instrument consists of an omnidirectional transmitter that generates a chirp with a frequency range between 10 and 110 kHz, and an array of hydrophones. The instrument was

  8. Rapid Tools Compensation in Sheet Metal Stamping Process

    OpenAIRE

    Iorio Lorenzo; Strano Matteo; Monno Michele

    2016-01-01

    The sudden growth of additive manufacturing is generating a renovated interest towards the field of rapid tooling. We propose a geometrical compensation method for rapid tools made by thermoset polyurethane. The method is based on the explicit FEM simulation coupled to a geometrical optimization algorithm for designing the stamping tools. The compensation algorithm is enhanced by considering the deviations between the stamped and designed components. The FEM model validation has been performe...

  9. Rapid global fitting of large fluorescence lifetime imaging microscopy datasets.

    Directory of Open Access Journals (Sweden)

    Sean C Warren

    Full Text Available Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis

  10. Rapid global fitting of large fluorescence lifetime imaging microscopy datasets.

    Science.gov (United States)

    Warren, Sean C; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda; Dunsby, Chris; French, Paul M W

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell
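The core of variable projection, as described above, is that for any candidate set of shared lifetimes the per-pixel amplitudes have a closed-form linear least-squares solution, so the nonlinear search ranges only over the lifetimes. A minimal sketch under idealized assumptions (noise-free decays, no instrument response or repetitive-excitation correction; this is not the authors' implementation):

```python
import numpy as np

def fit_amplitudes(decays, t, taus):
    """Solve per-pixel amplitudes linearly for fixed shared lifetimes
    (the inner step of variable projection).
    decays: (n_pixels, n_bins); t: (n_bins,); taus: length-k sequence."""
    basis = np.exp(-t[None, :] / np.asarray(taus)[:, None])   # (k, n_bins)
    amps, *_ = np.linalg.lstsq(basis.T, decays.T, rcond=None)
    return amps.T                                             # (n_pixels, k)

def global_residual(decays, t, taus):
    """Sum of squared residuals as a function of the lifetimes only;
    this is the objective the outer nonlinear search would minimize."""
    basis = np.exp(-t[None, :] / np.asarray(taus)[:, None])
    model = fit_amplitudes(decays, t, taus) @ basis
    return float(np.sum((decays - model) ** 2))

# Toy data: two pixels sharing lifetime components of 0.5 and 2.0 (a.u.)
t = np.linspace(0.0, 10.0, 256)
true_amps = np.array([[1.0, 0.2], [0.3, 0.8]])
decays = true_amps @ np.exp(-t[None, :] / np.array([[0.5], [2.0]]))
amps = fit_amplitudes(decays, t, [0.5, 2.0])   # recovers true_amps
```

Because the amplitudes are eliminated analytically, only the small set of lifetimes needs iterative optimization, which is what makes global fits over hundreds of images tractable.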

  11. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    Science.gov (United States)

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.

  12. Processing Images of Craters for Spacecraft Navigation

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
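Step 4 of the pipeline above, fitting an ellipse to a group of crater-rim edge points, can be sketched with an algebraic least-squares conic fit. This is a simplified, hypothetical stand-in for the flight algorithm's actual fitting and refinement steps:

```python
import numpy as np

def fit_conic(xs, ys):
    """Least-squares fit of a conic  a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to edge points; for crater rims the solution describes an ellipse."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    return coeffs                                   # (a, b, c, d, e)

# Edge points sampled on an ellipse with semi-axes 2 and 1: x^2/4 + y^2 = 1
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
xs, ys = 2.0 * np.cos(theta), np.sin(theta)
a, b, c, d, e = fit_conic(xs, ys)                   # ~ (0.25, 0, 1, 0, 0)
```

A production fitter would instead use a constrained formulation (e.g. an ellipse-specific direct fit) so that noisy or partial rim edges cannot produce a hyperbola, and would then refine the result in the image domain as the abstract describes.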

  13. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution focuses on different topics covered by the special issue titled `Hardware Implementation of Machine vision Systems' including FPGAs, GPUS, embedded systems, multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.

  14. Onboard Image Processing System for Hyperspectral Sensor.

    Science.gov (United States)

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume, high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and is implemented in the onboard circuitry that corrects the sensitivity and linearity of the Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
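Golomb-Rice coding, the entropy coder mentioned above, encodes a non-negative prediction residual as a unary quotient followed by k binary remainder bits. A minimal illustrative sketch (not the flight implementation):

```python
def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient (value >> k), a '0' terminator,
    then the k low-order remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    return bits + format(r, f"0{k}b") if k else bits

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                    # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

code = rice_encode(37, 3)                  # '11110101'
```

In an adaptive coder, k is chosen per context from running statistics of past residuals, so that small residuals from a good predictor cost only a few bits.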

  15. Simplified labeling process for medical image segmentation.

    Science.gov (United States)

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming, and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers, so that doctors do not need to waste time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithm achieves superior performance compared to previous methods, which validates the benefits of the proposed approach.

  16. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

    Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by examining the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explanatory possibilities of Tversky's notion of diagrammatic space are explored as an element necessary for understanding the genesis of graphic behaviour and for defining a new construct, graphic intelligence.

  17. Digital image processing of vascular angiograms

    Science.gov (United States)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  18. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor

    2016-01-01

    Full Text Available The current work tests a method of speckle pattern processing based on digital image correlation. Three of the most widely used formulas for the correlation coefficient are tested. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacement are used. The optimal size of the speckle pattern template used to determine the correlation, and hence the speckle pattern displacement, is also considered.
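One widely used correlation coefficient of the kind the abstract refers to is the zero-normalized cross-correlation (ZNCC). The sketch below shows it together with an integer-pixel template search over a shifted speckle pattern; the paper's exact formulas and template sizes may differ:

```python
import numpy as np

def zncc(template: np.ndarray, window: np.ndarray) -> float:
    """Zero-normalized cross-correlation of two equal-size patches;
    1.0 indicates a perfect match, values near 0 no correlation."""
    t = template - template.mean()
    w = window - window.mean()
    return float((t * w).sum() / np.sqrt((t * t).sum() * (w * w).sum()))

def find_displacement(ref, cur, y, x, size=16, search=8):
    """Locate the size x size template taken from `ref` at (y, x) inside
    `cur`, scanning integer offsets within +/- `search` pixels."""
    tpl = ref[y:y + size, x:x + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y + dy < 0 or x + dx < 0:
                continue                      # avoid wrap-around slicing
            win = cur[y + dy:y + dy + size, x + dx:x + dx + size]
            if win.shape != tpl.shape:
                continue
            c = zncc(tpl, win)
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift

# Synthetic speckle pattern rigidly shifted by (3, -2) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, (3, -2), axis=(0, 1))
shift = find_displacement(ref, cur, 24, 24)
```

Sub-pixel accuracy is typically obtained afterwards by interpolating the correlation surface around the integer peak, which is where the choice of correlation formula and template size studied in the paper matters most.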

  19. Is the rapid adaptation paradigm too rapid? Implications for face and object processing.

    Science.gov (United States)

    Nemrodov, Dan; Itier, Roxane J

    2012-07-16

    Rapid adaptation is an adaptation procedure in which adaptors and test stimuli are presented in rapid succession. The current study tested the validity of this method for early ERP components by investigating the specificity of the adaptation effect on the face-sensitive N170 ERP component across multiple test stimuli. Experiments 1 and 2 showed identical response patterns for house and upright face test stimuli using the same adaptor stimuli. The results were also identical to those reported in a previous study using inverted face test stimuli (Nemrodov and Itier, 2011). In Experiment 3 all possible adaptor-test combinations between upright face, house, chair and car stimuli were used and no interaction between adaptor and test category, expected in the case of test-specific adaptation, was found. These results demonstrate that the rapid adaptation paradigm does not produce category-specific adaptation effects around 170-200 ms following test stimulus onset, a necessary condition for the interpretation of adaptation results. These results suggest the rapid categorical adaptation paradigm does not work. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  1. Image Processing in Amateur Astro-Photography

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 2. Image Processing in Amateur Astro-Photography. Anurag Garg. Classroom Volume 15 Issue 2 February 2010 pp 170-175. Permanent link: http://www.ias.ac.in/article/fulltext/reso/015/02/0170-0175 ...

  2. Stochastic processes, estimation theory and image enhancement

    Science.gov (United States)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability were reviewed that are required to support the main topics. The appendices discuss the remaining mathematical background.

  3. Rapid HIFU autofocusing using the entire MR-ARFI image

    Energy Technology Data Exchange (ETDEWEB)

    Grissom, William A.; Kaye, Elena; Pauly, Kim Butts; Zur, Yuval; Yeo, Desmond; Medan, Yoav; Davis, Cynthia [Biomedical Engineering, Vanderbilt University, Nashville, Tennessee (United States); Electrical Engineering, Stanford University, Stanford, California (United States); Radiology, Stanford University, Stanford, California (United States); GE Healthcare, Haifa (Israel); GE Global Research, Niskayuna, New York (United States); Biomedical Engineering, Technion IIT, Haifa (Israel); GE Global Research, Niskayuna, New York (United States)

    2012-11-28

    Phase aberrations and attenuations caused by bone can defocus HIFU in the brain and organs behind the ribcage. To refocus the beam, MR-ARFI can be used to measure tissue displacements created by each element in the transducer, and optimize driving signal delays and amplitudes. We introduce a new MR-ARFI-based autofocusing method that requires many fewer image acquisitions than current methods. The method is validated in simulations of bone and brain HIFU transducers, and compared to a conventional method.

  4. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting, shadowed conditions, or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and still be processed into usable evidence. Visualization scientists have brought the processing of crime scene photos into the technology age through digital photographic image processing. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhanced photographic capability helps solve a major problem with crime scene photos: images that, taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  5. Rapid microwave-assisted synthesis of dextran-coated iron oxide nanoparticles for magnetic resonance imaging

    Science.gov (United States)

    Osborne, Elizabeth A.; Atkins, Tonya M.; Gilbert, Dustin A.; Kauzlarich, Susan M.; Liu, Kai; Louie, Angelique Y.

    2012-06-01

    Currently, magnetic iron oxide nanoparticles are the only nanosized magnetic resonance imaging (MRI) contrast agents approved for clinical use, yet commercial manufacturing of these agents has been limited or discontinued. Though there is still widespread demand for these particles both for clinical use and research, they are difficult to obtain commercially, and complicated syntheses make in-house preparation unfeasible for most biological research labs or clinics. To make commercial production viable and increase accessibility of these products, it is crucial to develop simple, rapid and reproducible preparations of biocompatible iron oxide nanoparticles. Here, we report a rapid, straightforward microwave-assisted synthesis of superparamagnetic dextran-coated iron oxide nanoparticles. The nanoparticles were produced in two hydrodynamic sizes with differing core morphologies by varying the synthetic method as either a two-step or single-step process. A striking benefit of these methods is the ability to obtain swift and consistent results without the necessity for air-, pH- or temperature-sensitive techniques; therefore, reaction times and complex manufacturing processes are greatly reduced as compared to conventional synthetic methods. This is a great benefit for cost-effective translation to commercial production. The nanoparticles are found to be superparamagnetic and exhibit properties consistent for use in MRI. In addition, the dextran coating imparts the water solubility and biocompatibility necessary for in vivo utilization.

  6. Rapid Tools Compensation in Sheet Metal Stamping Process

    Directory of Open Access Journals (Sweden)

    Iorio Lorenzo

    2016-01-01

    Full Text Available The sudden growth of additive manufacturing is generating renewed interest in the field of rapid tooling. We propose a geometrical compensation method for rapid tools made of thermoset polyurethane. The method is based on an explicit FEM simulation coupled to a geometrical optimization algorithm for designing the stamping tools. The compensation algorithm is enhanced by considering the deviations between the stamped and designed components. The FEM model has been validated by comparing its results with those of a DOE performed at different values of press force.
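
    The deviation-driven compensation idea can be sketched independently of the FEM details: displace the tool geometry against the measured part deviation and iterate. The 1-D profile and the springback stand-in below are hypothetical; the paper couples this kind of loop to an explicit FEM stamping simulation instead.

```python
import numpy as np

def compensate(tool, stamped, target, alpha=0.7):
    """One iteration of mirror-deviation tool compensation:
    move the tool surface opposite to the observed part deviation."""
    deviation = stamped - target
    return tool - alpha * deviation

# Hypothetical 1-D die profile; springback makes the stamped part fall
# short of the tool shape by 10% (stand-in for the FEM simulation).
x = np.linspace(0.0, 1.0, 50)
target = np.sin(np.pi * x)             # desired part geometry
springback = lambda tool: 0.9 * tool   # hypothetical process model

tool = target.copy()
for _ in range(20):
    stamped = springback(tool)
    tool = compensate(tool, stamped, target)
print(float(np.abs(springback(tool) - target).max()))  # deviation shrinks toward 0
```

    The relaxation factor alpha trades convergence speed against stability, which is why validating the loop against a DOE over press forces matters.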

  7. Rapid interferometric imaging of printed drug laden multilayer structures

    DEFF Research Database (Denmark)

    Sandler, Niklas; Kassamakov, Ivan; Ehlers, Henrik

    2014-01-01

    The developments in printing technologies allow fabrication of micron-size nano-layered delivery systems to personal specifications. In this study we fabricated layered polymer structures for drug delivery into a microfluidic channel and aimed to interferometrically assure their topography and adherence to each other. We present a scanning white light interferometer (SWLI) method for quantitative assurance of the topography of the embedded structure. We determined rapidly and in a non-destructive manner the thickness and roughness of the structures, and whether the printed layers containing polymers and/or active pharmaceutical ingredients (API) adhere to each other. This is crucial in order to have predetermined drug release profiles. We also demonstrate non-invasive measurement of a polymer structure in a microfluidic channel. It is shown that traceable interferometric 3D microscopy is a viable technique...

  8. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high

  9. Rapid identification of heterogeneous mixture components with hyperspectral coherent anti-Stokes Raman scattering imaging

    NARCIS (Netherlands)

    Garbacik, E.T.; Herek, Jennifer Lynn; Otto, Cornelis; Offerhaus, Herman L.

    2012-01-01

    For the rapid analysis of complicated heterogeneous mixtures, we have developed a method to acquire and intuitively display hyperspectral coherent anti-Stokes Raman scattering (CARS) images. The imaging is performed with a conventional optical setup based around an optical parametric oscillator.

  10. Subband/transform functions for image processing

    Science.gov (United States)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
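
    The block-transform/subband equivalence described above can be sketched with the order-2 case, where the four Walsh-Hadamard coefficients of each 2x2 block form a low-resolution subband plus three detail subbands. This is a minimal Python rendering of the idea, not the MATLAB functions themselves:

```python
import numpy as np

def wht2_subbands(img):
    """Order-2 Walsh-Hadamard block transform of an image with even
    dimensions, with coefficients permuted into four subbands."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    ll = (a + b + c + d) / 2.0   # low frequency: low-resolution image
    lh = (a - b + c - d) / 2.0   # horizontal differences (vertical edges)
    hl = (a + b - c - d) / 2.0   # vertical differences (horizontal edges)
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def wht2_inverse(ll, lh, hl, hh):
    """Invert the transform to recover the original image exactly."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

img = np.arange(16.0).reshape(4, 4)
bands = wht2_subbands(img)
print(np.allclose(wht2_inverse(*bands), img))  # True: perfect reconstruction
```

    Cascading `wht2_subbands` on the `ll` band (or on all four bands) gives the octave or uniform decompositions described in the abstract.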

  11. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the most emphasis, but the other detectors are also described: the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In a second step, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are summarized. The most important are the consistently good quality of the images and the possibilities of image processing.

  12. Rapid Delivery of Cyber Capabilities: Evaluation of the Requirement for a Rapid Cyber Acquisition Process

    Science.gov (United States)

    2012-06-01

    capabilities and rapid implementation of tactics—all aspects readily addressed using non-material solutions. Additionally, configuration and maintenance of...

  13. Driver drowsiness detection using ANN image processing

    Science.gov (United States)

    Vesselenyi, T.; Moca, S.; Rus, A.; Mitran, T.; Tătaru, B.

    2017-10-01

    The paper presents a study on the possibility of developing a drowsiness detection system for car drivers based on three types of methods: EEG signal processing, EOG signal processing, and driver image analysis. In previous works the authors have described their research on the first two methods. In this paper the authors study the possibility of detecting the drowsy or alert state of the driver from images taken during driving, by analyzing the state of the driver’s eyes: opened, half-opened and closed. For this purpose two kinds of artificial neural networks were employed: a network with a single hidden layer and an autoencoder network.

  14. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    OpenAIRE

    Gilani, Syed Irtiza Ali

    2008-01-01

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look- Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improveme...

  15. Illuminating magma shearing processes via synchrotron imaging

    Science.gov (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.

    2017-04-01

    Our understanding of geomaterial behaviour and processes has long fallen short due to inaccessibility into material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes to the use of seismic tomography to image magmatic bodies in the crust, and most recently, we have added synchrotron-based x-ray tomography to image the inside of material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  16. Advances in iterative multigrid PIV image processing

    Science.gov (United States)

    Scarano, F.; Riethmuller, M. L.

    2000-12-01

    An image-processing technique is proposed, which performs iterative interrogation of particle image velocimetry (PIV) recordings. The method is based on cross-correlation, enhancing the matching performances by means of a relative transformation between the interrogation areas. On the basis of an iterative prediction of the tracers motion, window offset and deformation are applied, accounting for the local deformation of the fluid continuum. In addition, progressive grid refinement is applied in order to maximise the spatial resolution. The performances of the method are analysed and compared with the conventional cross correlation with and without the effect of a window discrete offset. The assessment of performance through synthetic PIV images shows that a remarkable improvement can be obtained in terms of precision and dynamic range. Moreover, peak-locking effects do not affect the method in practice. The velocity gradient range accessed with the application of a relative window deformation (linear approximation) is significantly enlarged, as confirmed in the experimental results.
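
    The core of the iterative interrogation, cross-correlation with an accumulated integer window offset, can be sketched as follows. This is a simplified illustration (integer offsets only, with no window deformation or grid refinement) under the assumption of a uniform synthetic displacement field:

```python
import numpy as np

def correlate(win_a, win_b):
    """Integer displacement of win_b relative to win_a via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped FFT indices to signed shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

def piv_with_offset(img_a, img_b, y, x, size=32, n_iter=3):
    """Iterative interrogation: offset the second window by the predicted
    displacement so the residual displacement stays small."""
    dy, dx = 0, 0
    for _ in range(n_iter):
        win_a = img_a[y:y + size, x:x + size]
        win_b = img_b[y + dy:y + dy + size, x + dx:x + dx + size]
        rdy, rdx = correlate(win_a, win_b)
        dy, dx = int(dy + rdy), int(dx + rdx)   # accumulate the prediction
    return dy, dx

rng = np.random.default_rng(1)
img_a = rng.random((128, 128))
img_b = np.roll(img_a, shift=(5, -7), axis=(0, 1))  # uniform flow of (5, -7) px
print(piv_with_offset(img_a, img_b, y=48, x=48))    # (5, -7)
```

    The method in the paper goes further by deforming the windows according to the locally interpolated velocity gradient and refining the interrogation grid, which is what suppresses peak locking and extends the dynamic range.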

  17. Automatic image analysis of multicellular apoptosis process.

    Science.gov (United States)

    Ziraldo, Riccardo; Link, Nichole; Abrams, John; Ma, Lan

    2014-01-01

    Apoptotic programmed cell death (PCD) is a common and fundamental aspect of developmental maturation. Image processing techniques have been developed to detect apoptosis at the single-cell level in a single still image, while an efficient algorithm to automatically analyze the temporal progression of apoptosis in a large population of cells is unavailable. In this work, we have developed an ImageJ-based program that can quantitatively analyze time-lapse microscopy movies of live tissues undergoing apoptosis with a fluorescent cellular marker, and subsequently extract the temporospatial pattern of multicellular response. The protocol is applied to characterize apoptosis of Drosophila wing epithelium cells at eclosion. Using natural anatomic structures as reference, we identify dynamic patterns in the progression of apoptosis within the wing tissue, which not only confirms the previously observed collective cell behavior from a quantitative perspective for the first time, but also reveals a plausible role played by the anatomic structures in Drosophila apoptosis.

  18. Rapid imaging of free radicals in vivo using field cycled PEDRI.

    Science.gov (United States)

    Puwanich, P; Lurie, D J; Foster, M A

    1999-12-01

    Imaging of free radicals in vivo using an interleaved field-cycled proton-electron double-resonance imaging (FC-PEDRI) pulse sequence has recently been investigated. In this work, in order to reduce the EPR (electron paramagnetic resonance) irradiation power required and the imaging time, a centric reordered snapshot FC-PEDRI pulse sequence has been implemented. This is based on the FLASH pulse sequence with a very short repetition time and the use of centric reordering of the phase-encoding gradient, allowing the most significant free induction decay (FID) signals to be collected before the signal enhancement decays significantly. A new technique of signal phase-shift correction was required to eliminate ghost artefacts caused by the instability of the main magnetic field after field cycling. An FID amplitude correction scheme has also been implemented to reduce edge enhancement artefacts caused by the rapid change of magnetization population before reaching the steady state. Using the rapid pulse sequence, the time required for acquisition of a 64 x 64 pixel FC-PEDRI image was reduced to 6 s per image, compared with about 2.5 min with the conventional pulse sequence. The EPR irradiation power applied to the sample was reduced by a factor of approximately 64. Although the images obtained by the rapid pulse sequence have a lower signal-to-noise ratio than those obtained by a normal interleaved FC-PEDRI pulse sequence, the results show that rapid imaging of free radicals in vivo using snapshot FC-PEDRI is possible.
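
    Centric reordering itself is a simple index permutation: the centre of k-space is acquired first, while the enhancement is still large, and the phase-encode lines then alternate outward. A sketch of the ordering:

```python
def centric_order(n):
    """Centric phase-encoding order for n lines: k-space centre first,
    then alternating outward (0, 1, -1, 2, -2, ...)."""
    order = [0]
    for k in range(1, n // 2 + 1):
        order.append(k)
        if len(order) < n:
            order.append(-k)
    return order

print(centric_order(8))  # [0, 1, -1, 2, -2, 3, -3, 4]
```

    For the 64 x 64 acquisition described above, the first few entries, which dominate image contrast, are sampled within the first few repetition times after field cycling.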

  19. Improving the Acute Myocardial Infarction Rapid Rule Out process.

    Science.gov (United States)

    Hyden, Rachel; Fields, Willa

    2010-01-01

    Bedside staff nurses are in a unique position to identify implementation problems and ways to improve compliance with evidence-based practice guidelines. The goal of this performance improvement project was to improve compliance with an evidence-based Acute Myocardial Infarction Rapid Rule Out pathway. The purpose of the article is to demonstrate how a bedside staff nurse was able to decrease wait times and length of stay for patients with low-risk chest pain while applying evidence-based practice.

  20. Sorting Olive Batches for the Milling Process Using Image Processing.

    Science.gov (United States)

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-07-02

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained from the olive image histograms. Moreover, different image preprocessing methods have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results.
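
    The histogram-feature pipeline can be sketched with a nearest-class-mean rule standing in for the discriminant analysis used in the paper. The grey-level statistics below (ground olives rendered darker than tree olives) are purely hypothetical illustration data:

```python
import numpy as np

def histogram_features(img, bins=16):
    """Normalized grey-level histogram used as the feature vector."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def train_means(features, labels):
    """Per-class mean feature vectors (a simple discriminant)."""
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(feature, means):
    """Assign the class whose mean feature vector is nearest."""
    return min(means, key=lambda c: np.linalg.norm(feature - means[c]))

# Hypothetical data: 'ground' olive patches darker than 'tree' patches.
rng = np.random.default_rng(2)
tree = [rng.normal(170, 20, (32, 32)).clip(0, 255) for _ in range(10)]
ground = [rng.normal(90, 20, (32, 32)).clip(0, 255) for _ in range(10)]
feats = [histogram_features(i) for i in tree + ground]
labels = ['tree'] * 10 + ['ground'] * 10
means = train_means(feats, labels)

test_img = rng.normal(165, 20, (32, 32)).clip(0, 255)
print(classify(histogram_features(test_img), means))  # tree
```
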

  1. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

    Full Text Available These visualisation tools can often be used to generate model-specific saliency maps. For example, sensitivity analysis visualisation strategies have been proposed for classification models [3]. These approaches attempt to determine how much a pixel needs to be changed to modify a predicted classification label. An alternative visualisation strategy relies on layer-wise relevance propagation [2]. Here, a relevance score is assigned to each layer of a machine learning model and these relevance scores are propagated through...

  2. Implementation of rapid imaging system on the COMPASS tokamak.

    Czech Academy of Sciences Publication Activity Database

    Havránek, Aleš; Weinzettl, Vladimír; Fridrich, David; Cavalier, Jordan; Urban, Jakub; Komm, Michael

    2017-01-01

    Roč. 123, November (2017), s. 857-860 ISSN 0920-3796. [SOFT 2016: Symposium on Fusion Technology /29./. Prague, 05.09.2016-09.09.2016] R&D Projects: GA MŠk(CZ) LM2015045 Institutional support: RVO:61389021 Keywords : Camera * Data acquisition * Video processing * Tokamak Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.319, year: 2016 http://www.sciencedirect.com/science/article/pii/S092037961730354X

  3. Processing images with programming language Halide

    OpenAIRE

    DUKIČ, ROK

    2017-01-01

    The thesis contains a presentation of a recently created programming language Halide and its comparison to an already established image processing library OpenCV. We compare the execution times of the implementations with the same functionality and their length (in terms of number of lines). The implementations consist of morphological operations and template matching. Operations are implemented in four versions. The first version is made in C++ and only uses OpenCV’s objects. The second ...

  4. Imaging Spectroscopy Techniques for Rapid Assessment of Geologic and Cryospheric Science Data from future Satellite Sensors

    Science.gov (United States)

    Calvin, W. M.; Hill, R.

    2016-12-01

    Several efforts are currently underway to develop and launch the next generation of imaging spectrometer systems on satellite platforms for a wide range of Earth Observation goals. Systems that include the reflected solar wavelength range up to 2.5 μm will be capable of detailed mapping of the composition of the Earth's surface. Sensors under development include EnMAP, HISUI, PRISMA, HERO, and HyspIRI. These systems are expected to be able to provide global data for insights and constraints on fundamental geological processes, natural and anthropogenic hazards, water, energy and mineral resource assessments. Coupled with the development of these sensors is the challenge of bringing a multi-channel user community (from Landsat, MODIS, and ASTER) into the rich science return available from imaging spectrometer systems. Most data end users will never be spectroscopy experts so that making the derived science products accessible to a wide user community is imperative. Simple band parameterizations have been developed for the CRISM instrument at Mars, including mafic and alteration minerals, frost and volatile ice indices. These products enhance and augment the use of that data set by broader group of scientists. Summary products for terrestrial geologic and water resource applications would help build a wider user base for future satellite systems, and rapidly key spectral experts to important regions for detailed spectral mapping. Summary products take advantage of imaging spectroscopy's narrow spectral channels with band depth calculations in addition to band ratios that are commonly used by multi-channel systems (e.g. NDVI, NDWI, NDSI). We are testing summary products for Earth geologic and snow scenes over California using AVIRIS data at 18m/pixel. 
This has resulted in several algorithms for rapid mineral discrimination and mapping and data collects over the melting Sierra snowpack in spring 2016 are expected to generate algorithms for snow grain size and surface
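
    Such summary products reduce to simple per-pixel arithmetic on a few channels: normalized-difference ratios of the kind used by multi-channel sensors, and continuum-removed band depths that exploit the narrow channels of an imaging spectrometer. The channel centres and reflectance values below are hypothetical:

```python
def band_ratio(r_num, r_den):
    """Normalized-difference index, e.g. NDSI = (green - SWIR) / (green + SWIR)."""
    return (r_num - r_den) / (r_num + r_den)

def band_depth(r_left, r_center, r_right, w=0.5):
    """Continuum-removed band depth: 1 - R_center / R_continuum, with the
    continuum interpolated between two shoulder channels."""
    continuum = w * r_left + (1.0 - w) * r_right
    return 1.0 - r_center / continuum

# Hypothetical reflectances for a pixel with a 2.20-um clay absorption:
r_green, r_swir = 0.45, 0.30            # for an NDSI-style snow ratio
r_217, r_220, r_223 = 0.52, 0.40, 0.50  # shoulders and centre of the feature
print(round(band_ratio(r_green, r_swir), 3))      # 0.2
print(round(band_depth(r_217, r_220, r_223), 3))  # 0.216
```

    Band-depth maps of this kind are what let a non-specialist user see "where the mineral is" without touching the full spectral cube.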

  5. Digital image processing for information extraction.

    Science.gov (United States)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  6. Phase Superposition Processing for Ultrasonic Imaging

    Science.gov (United States)

    Tao, L.; Ma, X. R.; Tian, H.; Guo, Z. X.

    1996-06-01

    In order to improve the resolution of defect reconstruction for non-destructive evaluation, a new phase superposition processing (PSP) method has been developed on the basis of a synthetic aperture focusing technique (SAFT). The proposed method synthesizes the magnitudes of phase-superposed delayed signal groups. A satisfactory image can be obtained by a simple algorithm processing time domain radio frequency signals directly. In this paper, the theory of PSP is introduced and some simulation and experimental results illustrating the advantage of PSP are given.
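
    PSP builds on SAFT, whose core is a delay-and-sum over the element signals. The sketch below shows plain delay-and-sum SAFT (not the phase-superposition refinement itself) on an idealized point scatterer with impulse echoes; all geometry and sampling parameters are hypothetical:

```python
import numpy as np

def saft(signals, positions, pixel, c, fs):
    """Delay-and-sum SAFT amplitude at one image pixel.
    signals: (n_elements, n_samples) A-scans; positions: element x-coords (m);
    pixel: (x, z) image point (m); c: sound speed (m/s); fs: sampling rate (Hz)."""
    x_p, z_p = pixel
    total = 0.0
    for sig, x_e in zip(signals, positions):
        dist = np.hypot(x_p - x_e, z_p)   # one-way path element -> pixel
        t = 2.0 * dist / c                # pulse-echo round trip
        idx = int(round(t * fs))
        if idx < sig.size:
            total += sig[idx]             # coherent sum of delayed samples
    return total

# Synthetic test: point scatterer at (0, 10 mm), 5 elements, c = 1500 m/s.
c, fs = 1500.0, 50e6
positions = np.linspace(-5e-3, 5e-3, 5)
scatterer = (0.0, 10e-3)
signals = np.zeros((5, 2000))
for i, x_e in enumerate(positions):
    t = 2 * np.hypot(scatterer[0] - x_e, scatterer[1]) / c
    signals[i, int(round(t * fs))] = 1.0   # ideal impulse echo

print(saft(signals, positions, scatterer, c, fs))  # 5.0 at the true location
```

    PSP replaces this direct summation of samples with the synthesis of magnitudes of phase-superposed delayed signal groups, which is where the resolution gain reported in the paper comes from.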

  7. Drell-Yan process at forward rapidity at the LHC

    OpenAIRE

    Golec-Biernat, Krzysztof; Lewandowska, Emilia(Institute of Nuclear Physics Polish Academy of Sciences, Cracow, 31-342, Poland); Stasto, Anna M.

    2010-01-01

    We analyze the Drell-Yan lepton pair production at forward rapidity at the Large Hadron Collider. Using the dipole framework for the computation of the cross section we find a significant suppression in comparison to the collinear factorization formula due to saturation effects in the dipole cross section. We develop a twist expansion in powers of Q_s^2/M^2 where Q_s is the saturation scale and M the invariant mass of the produced lepton pair. For the nominal LHC energy the leading twist desc...

  8. Rapid Sterilization of Escherichia coli by Solution Plasma Process

    Science.gov (United States)

    Andreeva, Nina; Ishizaki, Takahiro; Baroch, Pavel; Saito, Nagahiro

    2012-12-01

    Solution plasma (SP), a discharge in the liquid phase, has the potential for rapid sterilization of water without chemical agents. The discharge showed strong sterilization performance against Escherichia coli bacteria. The decimal reduction time (D value) for E. coli in this system, with an electrode distance of 1.0 mm, was estimated to be approximately 1.0 min. Our discharge system in the liquid phase caused no physical damage to the E. coli and produced only a small increase in the temperature of the aqueous solution. The UV light generated by the discharge was an important factor in the sterilization of E. coli.
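
    The D value quoted above is the time for a tenfold (1-log10) reduction in viable count, conventionally estimated from the slope of the survival curve. A sketch with hypothetical survival data:

```python
import numpy as np

def d_value(times, counts):
    """Decimal reduction time from survival data: fit log10(N) = a - t / D,
    so D = -1 / slope of the least-squares line."""
    slope, _ = np.polyfit(times, np.log10(counts), 1)
    return -1.0 / slope

# Hypothetical survival curve: one decade of kill per minute of discharge.
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])           # minutes
counts = np.array([1e6, 10**5.5, 1e5, 10**4.5, 1e4])  # CFU/mL
print(round(d_value(times, counts), 2))  # 1.0
```
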

  9. Model control of image processing for telerobotics and biomedical instrumentation

    Science.gov (United States)

    Nguyen, An Huu

    1993-06-01

    This thesis has model control of image processing (MCIP) as its major theme. By this it is meant that there is a top-down model approach which already knows the structure of the image to be processed. This top-down image processing under model control is used further as visual feedback to control robots and as feedforward information for biomedical instrumentation. The software engineering of the bioengineering instrumentation image processing is defined in terms of the task and the tools available. Early bottom-up image processing such as thresholding occurs only within the top-down control regions of interest (ROI's) or operating windows. Moment computation is an important bottom-up procedure as well as pyramiding to attain rapid computation, among other considerations in attaining programming efficiencies. A distinction is made between initialization procedures and stripped down run time operations. Even more detailed engineering design considerations are considered with respect to the ellipsoidal modeling of objects. Here the major axis orientation is an important additional piece of information, beyond the centroid moments. Careful analysis of various sources of errors and considerable benchmarking characterized the detailed considerations of the software engineering of the image processing procedures. Image processing for robotic control involves a great deal of 3D calibration of the robot working environment (RWE). Of special interest is the idea of adapting the machine scanpath to the current task. It was important to pay careful attention to the hardware aspects of the control of the toy robots that were used to demonstrate the general methodology. It was necessary to precalibrate the open loop gains for all motors so that after initialization the visual feedback, which depends on MCIP, would be able to supply enough information quickly enough to the control algorithms to govern the robots under a variety of control configurations and task operations
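
    The moment computations mentioned above, centroid plus major-axis orientation from second-order central moments, can be sketched as follows on a synthetic elliptical blob (the geometry is hypothetical, not taken from the thesis):

```python
import numpy as np

def moments(img):
    """Centroid and major-axis orientation of a grey/binary region
    from raw and second-order central image moments."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum()
    mu02 = ((y - cy) ** 2 * img).sum()
    mu11 = ((x - cx) * (y - cy) * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # major-axis angle
    return (cx, cy), theta

# Synthetic ellipse-like blob tilted 45 degrees, centred at (32, 32).
y, x = np.mgrid[0:64, 0:64]
u, v = (x - 32) + (y - 32), (x - 32) - (y - 32)   # rotated coordinates
img = ((u / 20.0) ** 2 + (v / 8.0) ** 2 <= 1.0).astype(float)
(cx, cy), theta = moments(img)
print(round(cx, 1), round(cy, 1), round(np.degrees(theta), 1))  # 32.0 32.0 45.0
```

    Restricting such computations to model-controlled regions of interest, and pyramiding the images first, is what keeps them fast enough for visual feedback.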

  10. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O' Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA², by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  11. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA², by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can

  12. Rapid detection of parasite in muscle fibers of fishes using a portable microscope imaging technique (Conference Presentation)

    Science.gov (United States)

    Lee, Jayoung; Lee, Hoonsoo; Kim, Moon S.; Cho, Byoungkwan

    2017-05-01

    Fish are a widely consumed food worldwide. Recently, about 4% of fish in Asian waters have been found to be infected with Kudoa thyrsites, a parasite found within the muscle fibers of fish. Infected fish can cause food poisoning and should be sorted out before distribution and consumption. Although Kudoa thyrsites is visible to the naked eye, it can easily be overlooked because of its micro-scale size and its color, which is similar to that of fish tissue. In addition, visual inspection is labor-intensive work, resulting in loss of money and time. In this study, a portable microscopic camera was used to obtain images of raw fish slices. Optimized image-processing techniques applied to polarized transmittance images provided reliable performance. The results show that the portable microscopic imaging method can detect parasites rapidly and non-destructively, offering an alternative to manual inspection.
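    The abstract does not give the authors' algorithm; as a hedged illustration of the kind of segmentation step such an inspection pipeline might use, the sketch below thresholds a grayscale image and counts 4-connected dark regions as parasite candidates (all names and the toy image format are hypothetical):

```python
from collections import deque

def count_dark_blobs(image, threshold):
    """Count 4-connected regions of pixels darker than `threshold`.

    A toy stand-in for parasite candidate detection: dark spots in a
    brighter tissue background become connected components.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] < threshold and not seen[sy][sx]:
                blobs += 1                      # new component found
                queue = deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                    # BFS flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs
```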

  13. Rapid biocompatibility analysis of materials via in vivo fluorescence imaging of mouse models.

    Directory of Open Access Journals (Sweden)

    Kaitlin M Bratlie

    Full Text Available BACKGROUND: Many materials are unsuitable for medical use because of poor biocompatibility. Recently, advances in the high-throughput synthesis of biomaterials have significantly increased the number of potential biomaterials; however, current biocompatibility analysis methods are slow and require histological analysis. METHODOLOGY/PRINCIPAL FINDINGS: Here we develop rapid, non-invasive methods for in vivo quantification of the inflammatory response to implanted biomaterials. Materials were placed subcutaneously in an array format and monitored for host responses as per ISO 10993-6: 2001. Host cell activity in response to these materials was imaged kinetically, in vivo, using fluorescent whole-animal imaging. Data captured using whole-animal imaging displayed temporal trends in cellular recruitment of phagocytes to the biomaterials similar to those from histological analysis. CONCLUSIONS/SIGNIFICANCE: This agreement with histological analysis validates the technique as a novel, rapid approach for screening the biocompatibility of implanted materials. The technique opens the possibility of rapidly screening large libraries of polymers in vivo.

  14. Rapid Neutron Capture Process in Supernovae and Chemical ...

    Indian Academy of Sciences (India)

    process in the supernova envelope at a high neutron density and a temperature of 10⁹ degrees. ... Major advances have been made in calculating r-process .... Also, electron capture on free protons is limited by the small abundance of free protons. These problems are eased by higher density and higher temperature, ...

  15. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  16. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2011-12-01

    Full Text Available This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user easily modify images, and the Image Quality Adjustment Application enables the creation of series of pictures of differing quality. The Image Quality Assessment Application contains objective full-reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to subjectively compare the quality of distorted images with a reference image. The results of these subjective tests can be processed using the Results Processing Application. All applications provide a graphical user interface (GUI) for intuitive usage.
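    PSNR is the archetypal objective full-reference metric of the kind such an assessment application contains. A minimal Python sketch (illustrative only, not taken from the paper's MATLAB code; images are given as flat sequences of pixel values):

```python
import math

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images."""
    n = len(reference)
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / n
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

    A full-reference metric like this needs the undistorted original, which is exactly what the described reference/distorted application pairing provides.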

  17. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez

    2013-01-01

    Full Text Available The main objective of facial edema evaluation is providing the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained using image processing and a combination of devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms of facial edema images taken at different times. Finally, temperature changes are measured using a thermographic camera. Tests using controlled measurements of every variable are presented. The results allow the measurement system to be evaluated before its use in a real test with the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate the facial edema.
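    The blush measurement compares RGB histograms over time. A minimal sketch of such a comparison (the function names and the 8-bin choice are illustrative assumptions, not from the paper; pixels are 8-bit RGB triples):

```python
def channel_histogram(pixels, channel, bins=8):
    """Histogram of one RGB channel (0=R, 1=G, 2=B) over 8-bit pixels."""
    hist = [0] * bins
    for px in pixels:
        hist[px[channel] * bins // 256] += 1
    return hist

def histogram_intersection(h1, h2):
    """Normalized overlap of two histograms: 1.0 means identical shape,
    values near 0 indicate a large color shift between the two images."""
    total = sum(h1)
    return sum(min(a, b) for a, b in zip(h1, h2)) / total
```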

  18. Single-Trial Event-Related Potential Based Rapid Image Triage System

    Directory of Open Access Journals (Sweden)

    Ke Yu

    2011-06-01

    Full Text Available Searching for points of interest (POI) in large-volume imagery is a challenging problem with few good solutions. In this work, a neural engineering approach called rapid image triage (RIT), which can offer about a ten-fold speed-up in POI searching, is developed. It is essentially a cortically coupled computer vision technique, whereby the user is presented bursts of images at a speed of 6–15 images per second and neural signals called event-related potentials (ERPs) are used as the 'cue' that the user has seen an image of high relevance likelihood. Compared to past efforts, the implemented system has several unique features: (1) it applies overlapping frames in image chip preparation, to ensure rapid image triage performance; (2) a novel common spatial-temporal pattern (CSTP) algorithm that makes use of both the spatial and the temporal patterns of ERP topography is proposed for high-accuracy single-trial ERP detection; (3) a weighted version of the probabilistic support vector machine (SVM) is used to address the inherently unbalanced nature of single-trial ERP detection for RIT. The high accuracy, fast learning, and real-time capability of the developed system, shown on 20 subjects, demonstrate the feasibility of a brain-machine integrated rapid image triage system for fast detection of POI in large-volume imagery.
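    Feature (1), overlapping frames in image chip preparation, can be illustrated with a simple overlapped-tiling helper so that no POI is lost on a chip boundary (a hypothetical sketch; the chip size and overlap are free parameters, not values from the paper):

```python
def chip_origins(length, chip, overlap):
    """Top-left offsets for 1-D tiling with the given overlap; the last
    chip is shifted back so the whole extent is covered."""
    step = chip - overlap
    origins = list(range(0, length - chip + 1, step))
    if origins[-1] + chip < length:      # cover the remainder
        origins.append(length - chip)
    return origins

def chip_image(width, height, chip, overlap):
    """2-D chip origins (x, y) for an image of width x height."""
    return [(x, y)
            for y in chip_origins(height, chip, overlap)
            for x in chip_origins(width, chip, overlap)]
```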

  19. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images

    Science.gov (United States)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

    1986-01-01

    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  20. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2012-01-01

    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance improvements (delay, area and power dissipation) without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
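    One common way to realize a "sloppy" addition of this kind is to zero the low-order bits so that no carry chain is needed below a cutoff. A small behavioral sketch of that idea (an illustration of the general technique, not the authors' circuit):

```python
def sloppy_add(a, b, cut=4):
    """Approximate adder: the `cut` low-order bits of each operand are
    zeroed, so no carry propagates out of the truncated region.
    The absolute error is bounded by 2 * (2**cut - 1)."""
    mask = ~((1 << cut) - 1)
    return (a & mask) + (b & mask)
```

    The appeal for image processing is that the bounded low-order error often lands below the visual noise floor while the carry chain, and hence delay and power, shrinks.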

  1. Development of the SOFIA Image Processing Tool

    Science.gov (United States)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5-meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that can both extract and process data from the archive files was developed.

  2. Rapid, low dose X-ray diffractive imaging of the malaria parasite Plasmodium falciparum

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Michael W.M., E-mail: michael.jones@latrobe.edu.au [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Dearnley, Megan K. [ARC Centre of Excellence for Coherent X-Ray Science, Department of Biochemistry and Molecular Biology, Bio21 Institute, The University of Melbourne, Victoria 3010 (Australia); Riessen, Grant A. van [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Abbey, Brian [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Melbourne Centre for Nanofabrication, Victoria 3168 (Australia); Putkunz, Corey T. [ARC Centre of Excellence for Coherent X-Ray Science, School of Physics, The University of Melbourne, Victoria 3010 (Australia); Junker, Mark D. [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Vine, David J. [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); McNulty, Ian [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Centre for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nugent, Keith A. [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Peele, Andrew G. [ARC Centre of Excellence for Coherent X-Ray Science, Department of Physics, La Trobe University, Victoria 3086 (Australia); Australian Synchrotron, 800 Blackburn Road, Clayton 3168 (Australia); Tilley, Leann [ARC Centre of Excellence for Coherent X-Ray Science, Department of Biochemistry and Molecular Biology, Bio21 Institute, The University of Melbourne, Victoria 3010 (Australia)

    2014-08-01

    Phase-diverse X-ray coherent diffractive imaging (CDI) provides a route to high sensitivity and spatial resolution with moderate radiation dose. It also provides a robust solution to the well-known phase-problem, making on-line image reconstruction feasible. Here we apply phase-diverse CDI to a cellular sample, obtaining images of an erythrocyte infected by the sexual stage of the malaria parasite, Plasmodium falciparum, with a radiation dose significantly lower than the lowest dose previously reported for cellular imaging using CDI. The high sensitivity and resolution allow key biological features to be identified within intact cells, providing complementary information to optical and electron microscopy. This high throughput method could be used for fast tomographic imaging, or to generate multiple replicates in two-dimensions of hydrated biological systems without freezing or fixing. This work demonstrates that phase-diverse CDI is a valuable complementary imaging method for the biological sciences and ready for immediate application. - Highlights: • Phase-diverse coherent X-ray diffraction microscopy provides high-resolution and high-contrast images of intact biological samples. • Rapid nanoscale resolution imaging is demonstrated at orders of magnitude lower dose than previously possible. • Phase-diverse coherent X-ray diffraction microscopy is a robust technique for rapid, quantitative, and correlative X-ray phase imaging.

  3. HYMOSS signal processing for pushbroom spectral imaging

    Science.gov (United States)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics that compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane, which allows for offset and linear gain correction. The key on-focal-plane features that made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, which proved that non-uniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated these innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital-interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
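    The two-point calibration itself reduces to corrected = gain × (raw − offset), with per-channel gain and offset derived from readings at two known flux levels. A minimal numerical sketch (the function names and float arithmetic are illustrative; the program implements this per channel in hardware):

```python
def two_point_calibration(low_raw, high_raw, low_level, high_level):
    """Per-detector gain and offset from readings at two known levels."""
    gain = (high_level - low_level) / (high_raw - low_raw)
    offset = low_raw - low_level / gain
    return gain, offset

def correct(raw, gain, offset):
    """Apply the two-point non-uniformity correction to one reading."""
    return gain * (raw - offset)
```

    By construction the corrected response passes exactly through both calibration points, so any linear per-channel gain/offset mismatch is removed.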

  4. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  5. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    Science.gov (United States)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the point spread function (PSF) during the camera exposure window. The deconvolution process, involving iterative matrix calculations over pixels, is then performed on the GPU to reduce the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image-restoration result. The proposed method was evaluated using a Hopkinson bar loading system: in comparison to the blurry image, it successfully restored the image. Image-processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.
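    For reference, the Lucy-Richardson baseline the paper compares against iterates estimate ← estimate × (PSF ⋆ (data / (PSF ∗ estimate))). A 1-D sketch with a known symmetric PSF and circular convolution (illustrative only; the paper estimates the PSF from the loading dynamics and runs the iterations on the GPU):

```python
def conv_circ(signal, psf):
    """Circular convolution with a centred, odd-length PSF (for a
    symmetric PSF this equals correlation, which the update also needs)."""
    n, r = len(signal), len(psf) // 2
    return [sum(psf[k] * signal[(i + k - r) % n] for k in range(len(psf)))
            for i in range(n)]

def richardson_lucy(blurred, psf, iters=50, eps=1e-12):
    """Classic Richardson-Lucy deconvolution for a symmetric PSF."""
    estimate = list(blurred)                 # start from the data
    for _ in range(iters):
        ratio = [d / (b + eps)
                 for d, b in zip(blurred, conv_circ(estimate, psf))]
        correction = conv_circ(ratio, psf)   # symmetric PSF: conv == corr
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```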

  6. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  7. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  8. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLABÒ language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  9. Do the eyes scan dream images during rapid eye movement sleep? Evidence from the rapid eye movement sleep behaviour disorder model.

    Science.gov (United States)

    Leclair-Visonneau, Laurène; Oudiette, Delphine; Gaymard, Bertrand; Leu-Semenescu, Smaranda; Arnulf, Isabelle

    2010-06-01

    Rapid eye movements and complex visual dreams are salient features of human rapid eye movement sleep. However, it remains to be elucidated whether the eyes scan dream images, despite studies that have retrospectively compared the direction of rapid eye movements to the dream recall recorded after awakening the sleeper. We used the model of rapid eye movement sleep behaviour disorder (in which patients enact their dreams owing to the persistence of muscle tone) to determine directly whether the eyes move in the same directions as the head and limbs. In 56 patients with rapid eye movement sleep behaviour disorder and 17 healthy matched controls, eye movements were monitored by electrooculography in four directions (right, left, up and down), calibrated with a target and synchronized with video and sleep monitoring. The rapid eye movement sleep behaviour disorder-associated behaviours occurred 2.1 times more frequently during rapid eye movement sleep with rapid eye movements than without, and more often during or after rapid eye movements than before. Rapid eye movement density, index and complexity were similar in patients and controls. When rapid eye movements accompanied goal-oriented motor behaviour during rapid eye movement sleep behaviour disorder (e.g. grabbing a fictive object, hand greetings, climbing a ladder), which happened in 19 sequences, 82% were directed towards the action of the patient (same plane and direction). When restricted to the determinant rapid eye movements, the concordance increased to 90%. Rapid eye movements were absent in 38-42% of behaviours. This directional coherence between limb, head and eye movements during rapid eye movement sleep behaviour disorder suggests that, when present, rapid eye movements imitate the scanning of the dream scene. Since the rapid eye movements are similar in subjects with and without rapid eye movement sleep behaviour disorder, this concordance can be extended

  10. Using Image Processing to Determine Emphysema Severity

    Science.gov (United States)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung affected by the disease. These images clearly show whether a patient has emphysema but cannot, by visual scan alone, quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from patients representing a wide range of disease severity. In addition to analyzing the original CT data, the process converts the data to one- and two-bit images and then examines the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently used methods, which involve looking at percentages of radiodensities in the air passages of the lung.
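    The skewness examined here is the third standardized moment of the pixel-value distribution. A minimal sketch (an illustrative helper, not the authors' code; a negative skew would indicate a tail of low-attenuation, potentially emphysematous voxels):

```python
def skewness(values):
    """Sample skewness: third standardized moment of a value list."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n   # variance
    m3 = sum((v - mean) ** 3 for v in values) / n   # third central moment
    return m3 / m2 ** 1.5
```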

  11. Image processing to optimize wave energy converters

    Science.gov (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power to generate electricity through Wave Energy Converters (WECs), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper provides a novel way to estimate ocean wave frequency through image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequencies, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by application to simulated and real satellite images where the frequency is known.
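    The frequency-estimation idea can be miniaturized to one dimension: pick the maximum-energy frequency component of a sampled profile. The sketch below uses a plain DFT peak search as a stand-in, so it illustrates only the peak-picking step, not the paper's complex modulated lapped orthogonal transform filter bank:

```python
import cmath

def dominant_frequency(samples):
    """Index (cycles per record) of the strongest non-DC DFT bin,
    searched up to the Nyquist limit."""
    n = len(samples)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):           # skip DC, up to Nyquist
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2              # subband "energy"
        if power > best_power:
            best_k, best_power = k, power
    return best_k
```

    In the 2-D case the same peak search over horizontal and vertical subbands yields the two frequency components that the trigonometric scaling combines into the wave frequency along the WEC's direction.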

  12. Computerized image processing in the Reginald Denny beating trial

    Science.gov (United States)

    Morrison, Lawrence C.

    1997-02-01

    New image processing techniques may offer significant benefits to law enforcement officials but need to be legally admissible in court. Courts apply different tests for determining the admissibility of new scientific procedures, requiring their reliability to be established by expert testimony. The first test developed was whether the new procedure has gained general acceptance within the scientific community. In 1993 the U.S. Supreme Court loosened the requirements for admissibility of new scientific techniques, although the California Supreme Court later retained the general acceptance test. The proper standard for admission of such evidence is important to both the technical and the legal communities because of the conflict between the benefits of rapidly developing technology and the dangers of 'junk science.' The Reginald Denny beating case from the 1992 Los Angeles riots proved the value of computerized image processing in identifying persons committing crimes on videotape. The segmentation process was used to establish the presence of a tattoo on one defendant, which was key to his identification. Following the defendant's conviction, the California Court of Appeal approved the use of the evidence involving the segmentation process. This published opinion may be cited as legal precedent.

  13. Rapid digestion process for determination of trichinellae in meat

    Energy Technology Data Exchange (ETDEWEB)

    Giles, P.M.

    1975-07-01

    This patent relates to an accelerated digestion process for releasing trichinellae from meat (usually pork) as excysted and encysted worms in transparent fluid whereby they may be easily identified and/or counted visually or automatically. This improved digestion process for the determination of trichinellae in meat comprises placing the meat in a blender, adding a digestant consisting of one of the following ingredients, namely sodium hypochlorite, hydrochloric acid and pepsin, bromelin, trypsin, or dilute papain; and blending the meat and digestant for about one minute and then pouring the solution into a receptacle, allowing particulate to settle to the bottom, extracting samples from the bottom of said receptacle, and visually or automatically identifying and/or counting any trichinellae that may be present. (auth)

  14. Rapid Neutron Capture Process in Supernovae and Chemical ...

    Indian Academy of Sciences (India)

    We have studied the r-process path corresponding to temperatures ranging from 1.0 × 10⁹ K to 3.0 × 10⁹ K and neutron densities ranging from 10²⁰ cm⁻³ to 10³⁰ cm⁻³. With temperature and density conditions of 3.0 × 10⁹ K and 10²⁰ cm⁻³, a nucleus of mass 273, corresponding to atomic number 115, was theoretically found.

  15. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    2011-01-01

    The Green Bank Ultimate Pulsar Processing Instrument (GUPPI) at the NRAO, Green Bank, WV, finds its use in the spectrometers currently under development for the Green Bank Telescope (GBT). The single-dish GBT is used for pulsar searches and high-precision timing studies, which drive demanding data rates.

  16. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    Science.gov (United States)

    Bahr, Thomas

    2014-05-01

    The use of SAR data has become increasingly popular in recent years and across a wide array of industries. Access to SAR can be highly important and critical, especially for public safety: updating a GIS with contemporary information from SAR data makes it possible to deliver a reliable set of geospatial information to support civilian operations, e.g. search and rescue missions. SAR imaging offers the great advantage over its optical counterparts of not being affected by darkness, meteorological conditions such as clouds and fog, or the smoke and dust frequently associated with disaster zones. In this paper we present the operational processing of SAR data within a GIS environment for rapid disaster mapping. For this technique we integrated the SARscape modules for ENVI with ArcGIS®, eliminating the need to switch between software packages; the premier algorithms for SAR image analysis can thus be accessed directly from ArcGIS desktop and server environments, allowing SAR data to be processed and analyzed in almost real time and with minimal user interaction. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. The Bacchiglione River burst its banks on Nov. 2nd after two days of heavy rainfall throughout the northern Italian region. The community of Bovolenta, 22 km SSE of Padova, was covered by several meters of water; people were requested to stay in their homes, and several roads, highway sections, and railroads had to be closed. The extent of this flooding is documented by a series of Cosmo-SkyMed acquisitions with a GSD of 2.5 m (StripMap mode). Cosmo-SkyMed is a constellation of four Earth observation satellites whose very frequent coverage enables monitoring at very high temporal resolution. This data is processed in ArcGIS using a single-sensor, multi-mode, multi-temporal approach consisting of 3 steps: (1) The single images are filtered with a Gamma DE-MAP filter. 
(2) The filtered images are geocoded using a reference

  17. Platform for distributed image processing and image retrieval

    Science.gov (United States)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  18. Deformable Mirror Light Modulators For Image Processing

    Science.gov (United States)

    Boysel, R. Mark; Florence, James M.; Wu, Wen-Rong

    1990-02-01

    The operational characteristics of deformable mirror device (DMD) spatial light modulators for image processing applications are presented. The two DMD pixel structures of primary interest are the torsion hinged pixel for amplitude modulation and the flexure hinged or piston element pixel for phase modulation. The optical response characteristics of these structures are described. Experimental results detailing the performance of the pixel structures and addressing architectures are presented and are compared with the analytical results. Special emphasis is placed on the specification, from the experimental data, of the basic device performance parameters of the different modulator types. These parameters include modulation range (contrast ratio and phase modulation depth), individual pixel response time, and full array address time. The performance characteristics are listed for comparison with those of other light modulators (LCLV, LCTV, and MOSLM) for applications in the input plane and Fourier plane of a conventional coherent optical image processing system. The strengths and weaknesses of the existing DMD modulators are assessed and the potential for performance improvements is outlined.

  19. Click-Chemistry-Mediated Rapid Microbubble Capture for Acute Thrombus Ultrasound Molecular Imaging.

    Science.gov (United States)

    Wang, Tuantuan; Yuan, Chuxiao; Dai, Bingyang; Liu, Yang; Li, Mingxi; Feng, Zhenqiang; Jiang, Qing; Xu, Zhihong; Zhao, Ningwei; Gu, Ning; Yang, Fang

    2017-07-18

    Bioorthogonal coupling chemistry has been studied as a potentially advantageous approach for molecular imaging because it offers rapid, efficient, and strong binding, which might also benefit stability, production, and chemical conjugation. The inverse-electron-demand Diels-Alder reaction between a 1,2,4,5-tetrazine and trans-cyclooctene (TCO) is an example of a highly selective and rapid bioorthogonal coupling reaction that has been used successfully to prepare targeted molecular imaging probes. Here we report a fast, reliable, and highly sensitive approach, based on a two-step pretargeting bioorthogonal approach, to achieving activated-platelet-specific CD62p-targeted thrombus ultrasound molecular imaging. Tetrazine-modified microbubbles (tetra-MBs) could be uniquely and rapidly captured by subsequent click chemistry of thrombus tagged with a trans-cyclooctene-pretreated CD62p antibody. Moreover, such tetra-MBs showed great long-term stability under physiological conditions, thus offering the ability to monitor thrombus changes in real time. We demonstrated for the first time that a bioorthogonal targeting molecular ultrasound imaging strategy based on tetra-MBs could be a simple but powerful tool for rapid diagnosis of acute thrombosis. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  1. Systems and methods for rapid processing and storage of data

    Science.gov (United States)

    Stalzer, Mark A.

    2017-01-24

    Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.

  2. Rapid eye movement sleep behavior disorder in Parkinson's disease: magnetic resonance imaging study.

    Science.gov (United States)

    Ford, Andrew H; Duncan, Gordon W; Firbank, Michael J; Yarnall, Alison J; Khoo, Tien K; Burn, David J; O'Brien, John T

    2013-06-01

    Rapid eye movement sleep behavior disorder has poor prognostic implications for Parkinson's disease. The authors recruited 124 patients with early Parkinson's disease to compare clinical and neuroimaging findings based on the presence of this sleep disorder. The presence of rapid eye movement sleep behavior disorder was assessed with the Mayo Sleep Questionnaire. Magnetic resonance imaging sequences were obtained for voxel-based morphometry and diffusion tensor imaging. Patients with sleep disorder had more advanced disease, but groups had similar clinical characteristics and cognitive performance. Those with sleep disorder had areas of reduced cortical grey matter volume and white matter changes compared with those who did not have sleep disorder. However, differences were slight and were not significant when the analyses were adjusted for multiple comparisons. Rapid eye movement sleep behavior disorder was associated with subtle changes in white matter integrity and grey matter volume in patients with early Parkinson's disease. Copyright © 2013 Movement Disorder Society.

  3. Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers

    OpenAIRE

    Black, Noel F.; McJames, Scott; Kadrmas, Dan J

    2009-01-01

    Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer 62Cu-PTSM (blood flow) + 62Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging 18F-FDG plus one or two of these shorter-lived secondary trac...

  4. Rapid and continuous analyte processing in droplet microfluidic devices

    Energy Technology Data Exchange (ETDEWEB)

    Strey, Helmut; Kimmerling, Robert; Bakowski, Tomasz

    2017-04-18

    The compositions and methods described herein are designed to introduce functionalized microparticles into droplets that can be manipulated in microfluidic devices by fields, including electric (dielectrophoretic) or magnetic fields, and extracted by splitting a droplet to separate the portion of the droplet that contains the majority of the microparticles from the part that is largely devoid of the microparticles. Within the device, channels are variously configured at Y- or T junctions that facilitate continuous, serial isolation and dilution of analytes in solution. The devices can be limited in the sense that they can be designed to output purified analytes that are then further analyzed in separate machines or they can include additional channels through which purified analytes can be further processed and analyzed.

  5. Interannual Change Detection of Mediterranean Seagrasses Using RapidEye Image Time Series

    Directory of Open Access Journals (Sweden)

    Dimosthenis Traganos

    2018-02-01

    Recent research studies have highlighted the decrease in the coverage of Mediterranean seagrasses due mainly to anthropogenic activities. The lack of data on the distribution of these significant aquatic plants complicates the quantification of their decreasing tendency. While Mediterranean seagrasses are declining, satellite remote sensing technology is growing at an unprecedented pace, resulting in a wealth of spaceborne image time series. Here, we exploit recent advances in high spatial resolution sensors and machine learning to study Mediterranean seagrasses. We process a multispectral RapidEye time series between 2011 and 2016 to detect interannual seagrass dynamics in 888 submerged hectares of the Thermaikos Gulf, NW Aegean Sea, Greece (eastern Mediterranean Sea). We assess the extent change of two Mediterranean seagrass species, the dominant Posidonia oceanica and Cymodocea nodosa, following atmospheric and analytical water column correction, as well as machine learning classification, using Random Forests, of the RapidEye time series. Prior corrections are necessary to untangle the initially weak signal of the submerged seagrass habitats from satellite imagery. The central results of this study show that P. oceanica seagrass area has declined by 4.1%, with a trend of −11.2 ha/yr, while C. nodosa seagrass area has increased by 17.7%, with a trend of +18 ha/yr, throughout the 5-year study period. Trends of change in the spatial distribution of seagrasses at the Thermaikos Gulf site are in line with reported trends in the Mediterranean. Our presented methodology could be a time- and cost-effective method toward the quantitative ecological assessment of seagrass dynamics elsewhere in the future. From small meadows to whole coastlines, knowledge of aquatic plant dynamics could resolve decline or growth trends and accurately highlight key units for future restoration, management, and conservation.
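The reported extent changes and ha/yr trends can be derived from a series of annual area estimates by an ordinary least-squares fit. The sketch below illustrates the arithmetic on hypothetical annual P. oceanica areas; the numbers are invented for illustration and are not the study's data.

```python
def area_trend(years, areas_ha):
    """Least-squares slope (ha/yr) and total percent change of mapped area."""
    n = len(years)
    my = sum(years) / n
    ma = sum(areas_ha) / n
    slope = sum((y - my) * (a - ma) for y, a in zip(years, areas_ha)) / \
            sum((y - my) ** 2 for y in years)
    pct = 100.0 * (areas_ha[-1] - areas_ha[0]) / areas_ha[0]
    return slope, pct

# Hypothetical annual classified seagrass areas (ha) over a declining window.
years = [2011, 2012, 2013, 2014, 2015, 2016]
areas = [500.0, 490.0, 481.0, 470.0, 462.0, 450.0]
slope, pct = area_trend(years, areas)
```

A least-squares slope is less sensitive to a single noisy annual classification than the simple first-to-last difference, which is why trend and total percent change are reported separately.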

  6. Imaging thiol redox status in murine tumors in vivo with rapid-scan electron paramagnetic resonance

    Science.gov (United States)

    Epel, Boris; Sundramoorthy, Subramanian V.; Krzykawska-Serda, Martyna; Maggio, Matthew C.; Tseytlin, Mark; Eaton, Gareth R.; Eaton, Sandra S.; Rosen, Gerald M.; Kao, Joseph P. Y.; Halpern, Howard J.

    2017-03-01

    Thiol redox status is an important physiologic parameter that affects the success or failure of cancer treatment. Rapid scan electron paramagnetic resonance (RS EPR) is a novel technique that has shown higher signal-to-noise ratio than conventional continuous-wave EPR in in vitro studies. Here we used RS EPR to acquire rapid three-dimensional images of the thiol redox status of tumors in living mice. This work presents, for the first time, in vivo RS EPR images of the kinetics of the reaction of 2H,15N-substituted disulfide-linked dinitroxide (PxSSPx) spin probe with intracellular glutathione. The cleavage rate is proportional to the intracellular glutathione concentration. Feasibility was demonstrated in an FSa fibrosarcoma tumor model in C3H mice. Similar to other in vivo and cell model studies, decreasing intracellular glutathione concentration by treating mice with L-buthionine sulfoximine (BSO) markedly altered the kinetic images.

  7. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  8. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    National Research Council Canada - National Science Library

    J. Manikandan; C.S. Celin; V.M. Gayathri

    2015-01-01

    ...), research fields, crime investigation fields and military fields. In this paper, we proposed a document image processing technique, for establishing electronic loan approval process (E-LAP) [2...

  9. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization; however, as these methodologies have not been standardized, the results of different studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of the hand in posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is being evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
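The DQE is commonly computed from the measured MTF and NPS. A minimal sketch, assuming the standard relation DQE(f) = MTF²(f) / (q · NNPS(f)), where NNPS is the NPS normalized by the squared mean signal and q is the incident photon fluence of the RQA5 beam; the sample MTF/NNPS values and fluence below are hypothetical.

```python
def dqe(mtf, nnps, q):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)) at each tabulated frequency.

    mtf  : modulation transfer function values
    nnps : normalized noise power spectrum (NPS / mean signal squared), mm^2
    q    : incident photon fluence (photons per mm^2) for the beam quality
    """
    return [m * m / (q * w) for m, w in zip(mtf, nnps)]

# Illustrative (hypothetical) values at a few spatial frequencies (cycles/mm).
mtf = [1.00, 0.80, 0.55, 0.30]
nnps = [4.0e-6, 3.5e-6, 3.0e-6, 2.8e-6]
q = 3.0e5  # photons per mm^2

values = dqe(mtf, nnps, q)  # DQE falls off with frequency as MTF drops
```

Because post-processing such as MUSICA rescales both signal and noise, the normalization of the NPS by the mean signal is what makes comparisons across processing settings meaningful.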

  10. Intelligent elevator management system using image processing

    Science.gov (United States)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems, and thus to an increased need for an effective control system to manage them. This paper introduces an effective method to control the movement of elevators by considering various cases wherein the location of each person is found and the elevators are controlled based on conditions such as load and proximity. The method continuously monitors the weight limit of each elevator while also using image processing to determine the number of persons waiting for an elevator on each floor. The Canny edge detection technique is used to find the number of persons waiting for an elevator. The algorithm thus takes many cases into account and dispatches the correct elevator to serve the persons waiting on different floors.
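Counting waiting persons from a camera image reduces to counting connected foreground regions once edges have been detected and thresholded. The sketch below is a simplified stand-in for the Canny-based pipeline: it counts 4-connected blobs in an already-binarized toy mask (the mask itself is hypothetical).

```python
def count_blobs(mask):
    """Count 4-connected foreground regions in a binary grid.

    A simplified stand-in for the Canny-based person counting: after edge
    detection and thresholding, each connected region is taken as one
    waiting person.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # iterative flood fill over the region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Toy binarized camera frame: three separate silhouettes.
floor = [
    [1, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
waiting = count_blobs(floor)
```

In a real deployment the blob count per floor would feed the dispatch logic alongside each car's load reading.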

  11. Simulink Component Recognition Using Image Processing

    Directory of Open Access Journals (Sweden)

    Ramya R

    2015-02-01

    In the early stages of engineering design, pen-and-paper sketches are often used to quickly convey concepts and ideas. Free-form drawing is often preferable to using computer interfaces due to its ease of use, fluidity, and lack of constraints. The objective of this project is to create a trainable recognizer for sketched Simulink components that classifies the individual components in an input block diagram. The recognized components are placed on a new Simulink model window, after which operations can be performed on them. Noise in the input image is removed by a median filter, segmentation is performed by the K-means clustering algorithm, and recognition of the individual Simulink components in the input block diagram is done by Euclidean distance. The project aims to devise an efficient way to segment a control system block diagram into individual components for recognition.
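The Euclidean-distance recognition step can be sketched as a nearest-template classifier. The feature vectors and component labels below are hypothetical stand-ins for features extracted from the median-filtered, K-means-segmented sketch.

```python
import math

def classify(feature, templates):
    """Return the label of the template nearest in Euclidean distance.

    templates maps a component label to its (hypothetical) feature vector,
    e.g. shape statistics measured on one segmented block of the diagram.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(feature, templates[label]))

# Hypothetical 3-D features: aspect ratio, fill ratio, corner count / 10.
templates = {
    "Gain":       [1.0, 0.55, 0.3],
    "Sum":        [1.0, 0.80, 0.0],
    "Integrator": [1.4, 0.65, 0.4],
}
label = classify([1.35, 0.63, 0.42], templates)
```

Training the recognizer then amounts to averaging the features of labeled example sketches into these template vectors.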

  12. Rapid e-Learning Tools Selection Process for Cognitive and Psychomotor Learning Objectives

    Science.gov (United States)

    Ku, David Tawei; Huang, Yung-Hsin

    2012-01-01

    This study developed a decision making process for the selection of rapid e-learning tools that could match different learning domains. With the development of the Internet, the speed of information updates has become faster than ever. E-learning has rapidly become the mainstream for corporate training and academic instruction. In order to reduce…

  13. After Nearly A Decade Of Rapid Growth, Use And Complexity Of Imaging Declined, 2008-14.

    Science.gov (United States)

    Levin, David C; Parker, Laurence; Palit, Charles D; Rao, Vijay M

    2017-04-01

    Imaging is an important cost driver in health care, and its use grew rapidly in the early 2000s. Several studies toward the end of the decade suggested that a leveling off was beginning to occur. In this study we examined more recent data to determine whether the slowdown had continued. Our data sources were the nationwide Medicare Part B databases for the period 2001-14. We calculated utilization rates per 1,000 enrollees for all advanced imaging modalities. We also calculated professional component relative value unit (RVU) rates per 1,000 beneficiaries for all imaging modalities, as RVU values provide a measure of complexity of imaging services and may in some ways be a better reflection of the amount of work involved in imaging. We found that utilization rates and RVU rates grew substantially until 2008 and 2009, respectively, and then began to drop. The downward trend in both rates persisted through 2014. Federal policies appear to have achieved the desired effect of ending the rapid growth of imaging that had been seen in earlier years. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Knowledge-based approach to medical image processing monitoring

    Science.gov (United States)

    Chameroy, Virginie; Aubry, Florent; Di Paola, Robert

    1995-05-01

    The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. We have designed a knowledge-based interactive quantification support system (IQSS) to help the medical user in the use and evaluation of medical image processing and in the development of specific protocols. As the user proceeds according to a heuristic and intuitive approach, our system is meant to work in a similar way. At the basis of the reasoning of our monitoring system are the semantic features of an image and of image processing. These semantic features describe intrinsic properties; they are not symbolic descriptions of the image content. Obtaining them requires modeling of the medical image and of image processing procedures. A semantic interpretation function gives rules for obtaining the values of the semantic features extracted from these models. Commonsense compatibility rules then yield compatibility criteria based on a partial order (a subsumption relationship) on images and image processing, enabling a comparison between the data available to be processed and appropriate image processing procedures. This knowledge-based approach makes IQSS modular, flexible, and consequently well adapted to aid in the development and utilization of image processing methods for multidimensional and multimodality medical image quantification.

  15. Image processing of 2D resistivity data for imaging faults

    Science.gov (United States)

    Nguyen, F.; Garambois, S.; Jongmans, D.; Pirard, E.; Loke, M. H.

    2005-07-01

    A methodology to automatically locate limits or boundaries between different geological bodies in 2D electrical tomography is proposed, using a crest-line extraction process in gradient images. The method is applied to several synthetic models and to field data sets acquired at three experimental sites during the European project PALEOSIS, where trenches were dug. The results presented in this work are valid for electrical tomography data collected with a Wenner-alpha array and computed with an L1-norm (blocky) inversion as the optimization method. For the synthetic cases, three geometric contexts are modelled: a vertical fault and a dipping fault juxtaposing two different geological formations, and a step-like structure. A superficial layer can cover each geological structure. In these three situations, the method locates the synthetic faults and layer boundaries and determines fault displacement, but with several limitations. The estimated fault positions correlate exactly with the synthetic ones if a conductive (or no) superficial layer overlies the studied structure. When a resistive layer with a thickness of 6 m covers the model, faults are positioned with a maximum error of 1 m. Moreover, when a resistive and/or thick top layer is present, the resolution of the fault displacement estimation decreases significantly (error up to 150%). The tests with the synthetic models for surveys using the Wenner-alpha array indicate that the proposed methodology is best suited to vertical and horizontal contacts. Application of the methodology to real data sets shows that a lateral resistivity contrast of 1:5-1:10 leads to exact fault location. A fault contact with a resistivity contrast of 1:0.75, overlaid by a resistive layer with a thickness of 1 m, gives a location error ranging from 1 to 3 m. Moreover, no result is obtained for a contact with very low contrast (~1:0.85) overlaid by a resistive soil. 
The method shows poor results when vertical gradients are greater than
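Reduced to one dimension, the crest-line idea amounts to locating local maxima of the gradient magnitude along a resistivity section. The sketch below uses a hypothetical profile with a roughly 1:5 contrast; it illustrates the principle only, not the paper's 2D crest-line implementation.

```python
def crest_positions(profile):
    """Boundaries as local maxima of the absolute central-difference gradient."""
    # Central-difference gradient magnitude; grad[i] corresponds to profile[i+1].
    grad = [abs(profile[i + 1] - profile[i - 1]) / 2.0
            for i in range(1, len(profile) - 1)]
    # Keep interior samples that dominate both neighbors (1D crest points).
    return [i + 1 for i in range(1, len(grad) - 1)
            if grad[i] > grad[i - 1] and grad[i] >= grad[i + 1]]

# Hypothetical lateral resistivity profile (ohm.m) across a fault (~1:5 contrast).
profile = [100, 100, 100, 95, 60, 25, 20, 20, 20]
faults = crest_positions(profile)
```

In 2D the same non-maximum test is applied along the gradient direction of the inverted resistivity image, so a crest traces out a curve marking the inferred contact.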

  16. Evaluation of rapid volume changes of substrate-adherent cells by conventional microscopy 3D imaging.

    Science.gov (United States)

    Boudreault, F; Grygorczyk, R

    2004-09-01

    Precise measurement of rapid volume changes of substrate-adherent cells is essential to understand many aspects of cell physiology, yet techniques to evaluate volume changes with sufficient precision and high temporal resolution are limited. Here, we describe a novel imaging method that surveys the rapid morphology modifications of living, substrate-adherent cells based on phase-contrast, digital video microscopy. Cells grown on a glass substrate are mounted in a custom-designed, side-viewing chamber and subjected to hypotonic swelling. Side-view images of the rapidly swelling cell and, at the end of the assay, an image of the same cell viewed from a perpendicular direction through the substrate are acquired. Based on these images, off-line reconstruction of 3D cell morphology is performed, which precisely measures cell volume, height, and surface at different points during cell volume changes. Volume evaluations are comparable to those obtained by confocal laser scanning microscopy (DeltaVolume). The absence of any need for cell staining or intense illumination to monitor cell volume makes this system a promising new tool to investigate the fundamentals of cell volume physiology.

  17. Rapid Sequential in Situ Multiplexing with DNA Exchange Imaging in Neuronal Cells and Tissues.

    Science.gov (United States)

    Wang, Yu; Woehrstein, Johannes B; Donoghue, Noah; Dai, Mingjie; Avendaño, Maier S; Schackmann, Ron C J; Zoeller, Jason J; Wang, Shan Shan H; Tillberg, Paul W; Park, Demian; Lapan, Sylvain W; Boyden, Edward S; Brugge, Joan S; Kaeser, Pascal S; Church, George M; Agasti, Sarit S; Jungmann, Ralf; Yin, Peng

    2017-10-11

    To decipher the molecular mechanisms of biological function, it is critical to map the molecular composition of individual cells, or, even more importantly, tissue samples in the context of their biological environment in situ. Immunofluorescence (IF) provides specific labeling for molecular profiling. However, conventional IF methods have finite multiplexing capabilities due to spectral overlap of the fluorophores. Various sequential imaging methods have been developed to circumvent this spectral limit but are not widely adopted due to the common limitation of requiring multiple rounds of slow (typically over 2 h at room temperature to overnight at 4 °C in practice) immunostaining. We present here a practical and robust method, which we call DNA Exchange Imaging (DEI), for rapid in situ spectrally unlimited multiplexing. This technique overcomes speed restrictions by allowing for single-round immunostaining with DNA-barcoded antibodies, followed by rapid (less than 10 min) buffer exchange of fluorophore-bearing DNA imager strands. The programmability of DEI allows us to apply it to diverse microscopy platforms (with Exchange Confocal, Exchange-SIM, Exchange-STED, and Exchange-PAINT demonstrated here) at multiple desired resolution scales (from ∼300 nm down to sub-20 nm). We optimized and validated the use of DEI in complex biological samples, including primary neuron cultures and tissue sections. These results collectively suggest DNA exchange as a versatile, practical platform for rapid, highly multiplexed in situ imaging, potentially enabling new applications ranging from basic science, to drug discovery, and to clinical pathology.

  18. Hyperspectral image representation and processing with binary partition trees

    OpenAIRE

    Valero Valbuena, Silvia

    2012-01-01

    Extraordinary doctorate award, 2011-2012 academic year, in the field of ICT Engineering. The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral Image Representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarc...

  19. Quantitative immunocytochemistry using an image analyzer. I. Hardware evaluation, image processing, and data analysis.

    Science.gov (United States)

    Mize, R R; Holdefer, R N; Nabors, L B

    1988-11-01

    In this review we describe how video-based image analysis systems are used to measure immunocytochemically labeled tissue. The general principles underlying hardware and software procedures are emphasized. First, the characteristics of image analyzers are described, including the densitometric measure, spatial resolution, gray scale resolution, dynamic range, and acquisition and processing speed. The errors produced by these instruments are described and methods for correcting or reducing the errors are discussed. Methods for evaluating image analyzers are also presented, including spatial resolution, photometric transfer function, short- and long-term temporal variability, and measurement error. The procedures used to measure immunocytochemically labeled cells and fibers are then described. Immunoreactive profiles are imaged and enhanced using an edge sharpening operator and then extracted using segmentation, a procedure which captures all labeled profiles above a threshold gray level. Binary operators, including erosion and dilation, are applied to separate objects and to remove artifacts. The software then automatically measures the geometry and optical density of the extracted profiles. The procedures are rapid and efficient methods for measuring simultaneously the position, geometry, and labeling intensity of immunocytochemically labeled tissue, including cells, fibers, and whole fields. A companion paper describes non-biological standards we have developed to estimate antigen concentration from the optical density produced by antibody labeling (Nabors et al., 1988).
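    The pipeline described above (threshold segmentation, binary erosion and dilation to separate objects and remove artifacts, then geometric and densitometric measurement) can be sketched in miniature. This is a pure-Python toy on a small gray-level grid, not the authors' system; the 4-neighbour structuring element and the threshold value are illustrative assumptions.

```python
def threshold(img, level):
    """Segmentation: extract all pixels at or above a gray-level threshold."""
    return [[1 if v >= level else 0 for v in row] for row in img]

def erode(mask):
    """Keep a pixel only if all 4-neighbours are in-bounds and set.
    Removes isolated artifact pixels (and border-touching pixels)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and all(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                out[y][x] = 1
    return out

def dilate(mask):
    """Set a pixel if it or any 4-neighbour is set; restores eroded object size."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy, dx in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                out[y][x] = 1
    return out

def measure(img, mask):
    """Area (pixel count) and mean gray level of the segmented profile.
    Assumes the mask is non-empty."""
    values = [img[y][x]
              for y in range(len(img))
              for x in range(len(img[0])) if mask[y][x]]
    return len(values), sum(values) / len(values)
```

    Erosion followed by dilation (a morphological opening) removes isolated artifact pixels while largely preserving larger labeled profiles, which is the cleanup role the review assigns to these binary operators.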

  20. Spot restoration for GPR image post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  1. The potential for early and rapid pathogen detection within poultry processing through hyperspectral microscopy

    Science.gov (United States)

    The acquisition of hyperspectral microscopic images containing both spatial and spectral data has shown potential for the early and rapid optical classification of foodborne pathogens. A hyperspectral microscope with a metal halide light source and acousto-optical tunable filter (AOTF) collects 89 ...

  2. Rapid mapping of digital integrated circuit logic gates via multi-spectral backside imaging

    CERN Document Server

    Adato, Ronen; Zangeneh, Mahmoud; Zhou, Boyou; Joshi, Ajay; Goldberg, Bennett; Unlu, M Selim

    2016-01-01

    Modern semiconductor integrated circuits are increasingly fabricated at untrusted third party foundries. There now exist myriad security threats of malicious tampering at the hardware level and hence a clear and pressing need for new tools that enable rapid, robust and low-cost validation of circuit layouts. Optical backside imaging offers an attractive platform, but its limited resolution and throughput cannot cope with the nanoscale sizes of modern circuitry and the need to image over a large area. We propose and demonstrate a multi-spectral imaging approach to overcome these obstacles by identifying key circuit elements on the basis of their spectral response. This obviates the need to directly image the nanoscale components that define them, thereby relaxing resolution and spatial sampling requirements by 1 and 2 - 4 orders of magnitude respectively. Our results directly address critical security needs in the integrated circuit supply chain and highlight the potential of spectroscopic techniques to addres...

  3. Using multimodal imaging techniques to monitor limb ischemia: a rapid noninvasive method for assessing extremity wounds

    Science.gov (United States)

    Luthra, Rajiv; Caruso, Joseph D.; Radowsky, Jason S.; Rodriguez, Maricela; Forsberg, Jonathan; Elster, Eric A.; Crane, Nicole J.

    2013-03-01

    Over 70% of military casualties resulting from the current conflicts sustain major extremity injuries. Of these the majority are caused by blasts from improvised explosive devices. The resulting injuries include traumatic amputations, open fractures, crush injuries, and acute vascular disruption. Critical tissue ischemia—the point at which ischemic tissues lose the capacity to recover—is therefore a major concern, as lack of blood flow to tissues rapidly leads to tissue deoxygenation and necrosis. If left undetected or unaddressed, a potentially salvageable limb may require more extensive debridement or, more commonly, amputation. Predicting wound outcome during the initial management of blast wounds remains a significant challenge, as wounds continue to "evolve" during the debridement process and our ability to assess wound viability remains subjectively based. Better means of identifying critical ischemia are needed. We developed a swine limb ischemia model in which two imaging modalities were combined to produce an objective and quantitative assessment of wound perfusion and tissue viability. By using 3 Charge-Coupled Device (3CCD) and Infrared (IR) cameras, both surface tissue oxygenation as well as overall limb perfusion could be depicted. We observed that changes in mean 3CCD and IR values at peak ischemia and during reperfusion correlate well with clinically observed indicators for limb function and vitality. After correcting for baseline mean R-B values, the 3CCD values correlate with surface tissue oxygenation and the IR values with changes in perfusion. This study aims to not only increase fundamental understanding of the processes involved with limb ischemia and reperfusion, but also to develop tools to monitor overall limb perfusion and tissue oxygenation in a clinical setting. A rapid and objective diagnostic for extent of ischemic damage and overall limb viability could provide surgeons with a more accurate indication of tissue viability. This may

  4. On-site Rapid Diagnosis of Intracranial Hematoma using Portable Multi-slice Microwave Imaging System

    Science.gov (United States)

    Mobashsher, Ahmed Toaha; Abbosh, A. M.

    2016-11-01

    Rapid, on-the-spot diagnostic and monitoring systems are vital for the survival of patients with intracranial hematoma, as their conditions drastically deteriorate with time. To address the limited accessibility, high costs and static structure of currently used MRI and CT scanners, a portable non-invasive multi-slice microwave imaging system is presented for accurate 3D localization of hematoma inside the human head. This diagnostic system provides fast data acquisition and imaging compared to the existing systems by means of a compact array of low-profile, unidirectional antennas with wideband operation. The 3D printed low-cost and portable system can be installed in an ambulance for rapid on-site diagnosis by paramedics. In this paper, the multi-slice head imaging system’s operating principle is numerically analysed and experimentally validated on realistic head phantoms. Quantitative analyses demonstrate that the multi-slice head imaging system is able to generate better quality reconstructed images, providing 70% higher average signal to clutter ratio, 25% enhanced maximum signal to clutter ratio and around 60% hematoma target localization compared to the previous head imaging systems. Nevertheless, numerical and experimental results demonstrate that previously reported 2D imaging systems are vulnerable to localization error, which is overcome in the presented multi-slice 3D imaging system. The non-ionizing system, which uses safe levels of very low microwave power, is also tested on human subjects. Results of realistic phantom and subjects demonstrate the feasibility of the system in future preclinical trials.
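    The signal-to-clutter ratio used above as an image-quality figure can be computed directly from a reconstructed image once the target region is known. A minimal sketch; the peak-over-mean-clutter definition and the 10·log10 dB scaling are assumptions here, since SCR definitions vary between imaging papers.

```python
import math

def signal_to_clutter_db(image, target_region):
    """SCR in dB: peak intensity inside the known target (hematoma) region
    over the mean intensity of all pixels outside it."""
    inside, outside = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            (inside if (y, x) in target_region else outside).append(v)
    return 10.0 * math.log10(max(inside) / (sum(outside) / len(outside)))
```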

  5. Determination of rice canopy growth based on high resolution satellite images: a case study using RapidEye imagery in Korea

    Directory of Open Access Journals (Sweden)

    Mijeong Kim

    2016-10-01

    Processing to correct atmospheric effects and classify all constituent pixels in a remote sensing image is required before the image is used to monitor plant growth. The raw image contains artifacts due to atmospheric conditions at the time of acquisition. This study sought to distinguish the canopy growth of paddy rice using RapidEye (BlackBridge, Berlin, Germany) satellite data and to investigate practical image correction and classification methods. The RapidEye images were taken over experimental fields of paddy rice at Chonnam National University (CNU), Gwangju, and at TaeAn, Choongcheongnam-do, Korea. The CNU RapidEye images were used to evaluate the atmospheric correction methods. Atmospheric correction of the RapidEye images was performed using three different methods: QUick Atmospheric Correction (QUAC), Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), and Atmospheric and Topographic Correction (ATCOR). To minimize errors in utilizing observed growth and yield estimation of paddy rice, the paddy fields were classified using a supervised classification method and normalized difference vegetation index (NDVI) thresholds, using the NDVI time-series features of the paddy fields. The results of the atmospheric correction using ATCOR on the satellite images were favorable and corresponded to those from reference UAV images. Meanwhile, the classification method using the NDVI threshold accurately classified the same pixels from each of the time-series images. We have demonstrated that the image correction and classification methods investigated here should be applicable to high resolution satellite images used in monitoring other crop growth conditions.
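    The NDVI-threshold classification step above reduces to a per-pixel normalized difference of the near-infrared and red bands. A minimal sketch; the 0.4 threshold and the assignment of which RapidEye bands serve as NIR and red are illustrative assumptions, not values from the paper.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel:
    (NIR - RED) / (NIR + RED), guarded against a zero denominator."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify_paddy(nir_band, red_band, threshold=0.4):
    """Label each pixel 1 (vegetated paddy) or 0 by thresholding NDVI."""
    return [
        [1 if ndvi(n, r) >= threshold else 0 for n, r in zip(nrow, rrow)]
        for nrow, rrow in zip(nir_band, red_band)
    ]
```

    Repeating this over a time series of corrected images yields the NDVI time-series features the study uses to separate paddy fields from other cover types.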

  6. A rapid method for creating qualitative images indicative of thick oil emulsion on the ocean's surface from imaging spectrometer data

    Science.gov (United States)

    Kokaly, Raymond F.; Hoefen, Todd M.; Livo, K. Eric; Swayze, Gregg A.; Leifer, Ira; McCubbin, Ian B.; Eastwood, Michael L.; Green, Robert O.; Lundeen, Sarah R.; Sarture, Charles M.; Steele, Denis; Ryan, Thomas; Bradley, Eliza S.; Roberts, Dar A.; ,

    2010-01-01

    This report describes a method to create color-composite images indicative of thick oil:water emulsions on the surface of clear, deep ocean water by using normalized difference ratios derived from remotely sensed data collected by an imaging spectrometer. The spectral bands used in the normalized difference ratios are located in wavelength regions where the spectra of thick oil:water emulsions on the ocean's surface have a distinct shape compared to clear water and clouds. In contrast to quantitative analyses, which require rigorous conversion to reflectance, the method described is easily computed and can be applied rapidly to radiance data or data that have been atmospherically corrected or ground-calibrated to reflectance. Examples are shown of the method applied to Airborne Visible/Infrared Imaging Spectrometer data collected May 17 and May 19, 2010, over the oil spill from the Deepwater Horizon offshore oil drilling platform in the Gulf of Mexico.
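    The normalized difference ratio at the core of the method above is applied per pixel to pairs of spectral bands and, because it is a ratio, works on radiance as well as reflectance data. A minimal sketch; the idea of stacking several such ratios into the channels of a color composite follows the report, but the specific band pairings used here are illustrative.

```python
def normalized_difference(b1, b2):
    """(b1 - b2) / (b1 + b2) for one pixel; scale-invariant, so rigorous
    conversion to reflectance is not required."""
    s = b1 + b2
    return (b1 - b2) / s if s else 0.0

def ratio_composite(image, pairs):
    """Build a multi-channel composite from normalized difference ratios.
    `image` is rows of pixels, each pixel a list of band values;
    `pairs` lists (index_a, index_b) band-index tuples, one per channel."""
    return [
        [[normalized_difference(px[a], px[b]) for a, b in pairs] for px in row]
        for row in image
    ]
```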

  7. Rapid imaging of free radicals in vivo using hybrid FISP field-cycled PEDRI

    Science.gov (United States)

    Youngdee, Wiwat; Lurie, David J.; Foster, Margaret A.

    2002-04-01

    A new pulse sequence for rapid imaging of free radicals is presented which combines snapshot imaging methods and conventional field-cycled proton electron double resonance imaging (FC-PEDRI). The new sequence allows the number of EPR irradiation periods to be optimized to obtain an acceptable SNR and spatial resolution of free radical distribution in the final image while reducing the RF power deposition and increasing the temporal resolution. Centric reordered phase encoding has been employed to counter the problem of rapid decay of the Overhauser-enhanced signal. A phase-correction scheme has also been used to correct problems arising from instability of the magnetic field following field-cycling. In vivo experiments were carried out using triaryl methyl free radical contrast agent, injected at a dose of 0.214 mmol kg-1 body weight in anaesthetized adult male Sprague-Dawley rats. Transaxial images through the abdomen were collected using 1, 2, 4 and 8 EPR irradiation periods. Using 4 EPR irradiation periods it was possible to generate free radical distributions of acceptable SNR and resolution. The EPR power deposition is reduced by a factor of 16 and the acquisition time is reduced by a factor of 4 compared to an acquisition using the conventional FC-PEDRI pulse sequence.
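    The centric reordered phase encoding mentioned above can be illustrated by the ordering itself: k-space lines are acquired centre-out, so the earliest acquisitions, which carry the strongest Overhauser-enhanced signal before it decays, encode the low spatial frequencies that dominate image contrast. A minimal sketch, not the authors' implementation:

```python
def centric_order(n_lines):
    """Phase-encode indices ordered centre-out: 0, 1, -1, 2, -2, ...
    so the centre of k-space is sampled while signal enhancement is highest."""
    order = [0]
    for k in range(1, n_lines // 2 + 1):
        order.append(k)
        order.append(-k)
    return order[:n_lines]
```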

  8. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  9. Sub-image data processing in Astro-WISE

    NARCIS (Netherlands)

    Mwebaze, Johnson; Boxhoorn, Danny; McFarland, John; Valentijn, Edwin A.

    Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is the processing of the entire image or set of images even when the sole source of

  10. A rapid method for counting nucleated erythrocytes on stained blood smears by digital image analysis

    Science.gov (United States)

    Gering, E.; Atkinson, C.T.

    2004-01-01

    Measures of parasitemia by intraerythrocytic hematozoan parasites are normally expressed as the number of infected erythrocytes per n erythrocytes and are notoriously tedious and time consuming to measure. We describe a protocol for generating rapid counts of nucleated erythrocytes from digital micrographs of thin blood smears that can be used to estimate intensity of hematozoan infections in nonmammalian vertebrate hosts. This method takes advantage of the bold contrast and relatively uniform size and morphology of erythrocyte nuclei on Giemsa-stained blood smears and uses ImageJ, a java-based image analysis program developed at the U.S. National Institutes of Health and available on the internet, to recognize and count these nuclei. This technique makes feasible rapid and accurate counts of total erythrocytes in large numbers of microscope fields, which can be used in the calculation of peripheral parasitemias in low-intensity infections.
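    The counting step above, which ImageJ performs via particle analysis on the thresholded nuclei, amounts to counting connected components in a binary mask. A pure-Python flood-fill sketch of that core operation; real use would add the size and circularity filters a stained smear needs, which are omitted here.

```python
def count_nuclei(mask):
    """Count 4-connected foreground components in a binary mask,
    each component standing in for one erythrocyte nucleus."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                      # new component found
                stack = [(y, x)]
                seen[y][x] = True
                while stack:                    # flood-fill the component
                    cy, cx = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```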

  11. From acoustic segmentation to language processing: evidence from optical imaging

    Directory of Open Access Journals (Sweden)

    Hellmuth Obrig

    2010-06-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use ‘anchors’ to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information have been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors ‘guide’ the lateralization process. Methodologically, fMRI provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.

  12. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  13. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Image processing is one of the leading technologies in computer applications. It is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision both make use of image processing techniques. Image processing systems are used in environments as varied as medicine, computer-aided design (CAD), research, crime investigation, and the military. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has traditionally been tedious; the E-LAP system attempts to reduce its complexity. Customers log in, fill in the loan application form online with all details, and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via E-LAP to the requesting customer with the list of documents required for loan approval [3]. The customer can then upload scanned copies of all required documents. All interaction between customer and bank takes place through the E-LAP system.

  14. Implicit affectivity and rapid processing of affective body language: An fMRI study.

    Science.gov (United States)

    Suslow, Thomas; Ihme, Klas; Quirin, Markus; Lichev, Vladimir; Rosenberg, Nicole; Bauer, Jochen; Bomberg, Luise; Kersting, Anette; Hoffmann, Karl-Titus; Lobsien, Donald

    2015-10-01

    Previous research has revealed affect-congruity effects for the recognition of affects from faces. Little is known about the impact of affect on the perception of body language. The aim of the present study was to investigate the relationship of implicit (versus explicit) affectivity with the recognition of briefly presented affective body expressions. Implicit affectivity, which can be measured using indirect assessment methods, has been found to be more predictive of spontaneous physiological reactions than explicit (self-reported) affect. Thirty-four healthy women had to label the expression of body postures (angry, fearful, happy, or neutral) presented for 66 ms and masked by a neutral body posture in a forced-choice format while undergoing functional magnetic resonance imaging (fMRI). Participants' implicit affectivity was assessed using the Implicit Positive and Negative Affect Test. Measures of explicit state and trait affectivity were also administered. Analysis of the fMRI data was focused on a subcortical network involved in the rapid perception of affective body expressions. Only implicit negative affect (but not explicit affect) was correlated with correct labeling performance for angry body posture. As expected, implicit negative affect was positively associated with activation of the subcortical network in response to fearful and angry expression (compared to neutral expression). Responses of the caudate nucleus to affective body expression were especially associated with its recognition. It appears that processes of rapid recognition of affects from body postures could be facilitated by an individual's implicit negative affect. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  15. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    Science.gov (United States)

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Interactive image processing for mobile devices

    Science.gov (United States)

    Shaw, Rodney

    2009-01-01

    As the number of consumer digital images escalates by tens of billions each year, an increasing proportion of these images are being acquired using the latest generations of sophisticated mobile devices. The characteristics of the cameras embedded in these devices now yield image-quality outcomes that approach those of the parallel generations of conventional digital cameras, and all aspects of the management and optimization of these vast new image-populations become of utmost importance in providing ultimate consumer satisfaction. However this satisfaction is still limited by the fact that a substantial proportion of all images are perceived to have inadequate image quality, and a lesser proportion of these to be completely unacceptable (for sharing, archiving, printing, etc.). In past years at this same conference, the author has described various aspects of a consumer digital-image interface based entirely on an intuitive image-choice-only operation. Demonstrations have been given of this facility in operation, essentially allowing critical-path navigation through approximately a million possible image-quality states within a matter of seconds. This was made possible by the definition of a set of orthogonal image vectors, and defining all excursions in terms of a fixed linear visual-pixel model, independent of the image attribute. During recent months this methodology has been extended to yield specific user-interactive image-quality solutions in the form of custom software, which at less than 100kb is readily embedded in the latest generations of unlocked portable devices. This has also necessitated the design of new user-interfaces and controls, as well as streamlined and more intuitive versions of the user quality-choice hierarchy. The technical challenges and details will be described for these modified versions of the enhancement methodology, and initial practical experience with typical images will be described.

  17. Multiscale image processing and antiscatter grids in digital radiography.

    Science.gov (United States)

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  18. Image processing and enhancement provided by commercial dental software programs

    National Research Council Canada - National Science Library

    Lehmann, T M; Troeltsch, E; Spitzer, K

    2002-01-01

    To identify and analyse methods/algorithms for image processing provided by various commercial software programs used in direct digital dental imaging and to map them onto a standardized nomenclature...

  19. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  20. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  1. Evaluation of Novel Process Indicators for Rapid Monitoring of Hydrogen Peroxide Decontamination Processes.

    Science.gov (United States)

    McLeod, N P; Clifford, M; Sutton, J M

    2017-01-01

    measurement can be achieved within a few minutes of vapour-phase hydrogen peroxide cycle completion, compared with a minimum of 7 days for the evaluation of biological indicator growth, this offers a potentially valuable tool for rapid vapour-phase hydrogen peroxide bio-decontamination cycle development and subsequent re-qualification.LAY ABSTRACT: Pharmaceutical product manufacture is performed in controlled cleanroom and closed chamber environments (isolators) to reduce the risk of contamination. These environments undergo regular decontamination to control microbial contamination levels, using a range of methods, one of which is to vaporize hydrogen peroxide (a chemical disinfectant) into a gas or an aerosol and disperse it throughout the environment, killing any microorganisms present. Biological indicators, which consist of a small steel coupon carrying a population of bacterial spores that are more resistant to hydrogen peroxide than are most microorganisms, are placed within the environment, and then tested for growth following treatment to ensure the process was effective. Confirmation of growth/no growth (and therefore hydrogen peroxide cycle efficacy) can take up to 7 days, which significantly increases time and cost of developing and confirming cycle efficacy. This study tests whether a new technology which uses a robust enzyme, thermostable adenylate kinase, could be used to predict biological indicator growth. The study shows this method can be used to confirm hydrogen peroxide cycle efficacy, by predicting whether the BI is killed at a specific time point or not and results are obtained in a few minutes rather than 7 days. This potentially offers significant time and cost benefits. © PDA, Inc. 2017.

  2. Method development for verification of complete ancient statues by image processing

    OpenAIRE

    Natthariya Laopracha; Umaporn Saisangjan; Rapeeporn Chamchong

    2015-01-01

    Ancient statues are cultural heritages that should be preserved and maintained. Nevertheless, such invaluable statues may be targeted by vandalism or burglary. To help guard these statues using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts. This paper proposed an effective feature extraction method for detecting images of damaged statues or statues with missing parts based on the Hi...

  3. Framework for hyperspectral image processing and quantification for cancer detection during animal tumor surgery

    Science.gov (United States)

    Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W.; Chen, Zhuo Georgia; Fei, Baowei

    2015-12-01

    Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. However, HSI data often need to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We proposed a framework for hyperspectral image processing and quantification, which comprises image preprocessing, glare removal, feature extraction, and ultimately image classification. The framework has been tested on images from mice with head and neck cancer, using spectra from 450- to 900-nm wavelength. The image analysis computed Fourier coefficients, normalized reflectance, mean, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to a modest number of features present, but the potential for fast image classification is high. This HSI approach may have potential application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.
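The per-pixel spectral features named in the abstract (normalized reflectance, mean, spectral derivatives, Fourier coefficients) can be sketched in a few lines of NumPy. This is an illustrative reading of the abstract, not the authors' code; the function and array names are ours, and the cube is synthetic.

```python
import numpy as np

def spectral_features(cube):
    """Per-pixel spectral features from an HSI cube of shape (H, W, bands):
    normalized reflectance, band mean, first spectral derivative, and the
    magnitudes of the low-order Fourier coefficients of each spectrum."""
    h, w, b = cube.shape
    spectra = cube.reshape(-1, b).astype(float)
    # Normalized reflectance: scale each spectrum to unit area
    norm = spectra / (spectra.sum(axis=1, keepdims=True) + 1e-12)
    mean = spectra.mean(axis=1)                         # mean reflectance per pixel
    deriv = np.diff(norm, axis=1)                       # first spectral derivative
    fourier = np.abs(np.fft.rfft(norm, axis=1))[:, :8]  # low-order Fourier magnitudes
    return norm, mean, deriv, fourier

# Example on a synthetic 4x4 cube with 50 bands
cube = np.random.rand(4, 4, 50)
norm, mean, deriv, fourier = spectral_features(cube)
```

Each pixel's feature vectors could then be fed to any classifier that separates tumor from normal spectra.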

  5. Investigating the Feasibility of Rapid MRI for Image-Guided Motion Management in Lung Cancer Radiotherapy

    Directory of Open Access Journals (Sweden)

    Amit Sawant

    2014-01-01

    Full Text Available Cycle-to-cycle variations in respiratory motion can cause significant geometric and dosimetric errors in the administration of lung cancer radiation therapy. A common limitation of the current strategies for motion management is that they assume a constant, reproducible respiratory cycle. In this work, we investigate the feasibility of using rapid MRI for providing long-term imaging of the thorax in order to better capture cycle-to-cycle variations. Two non-small-cell lung cancer patients were imaged (free-breathing, no extrinsic contrast, 1.5 T scanner). A balanced steady-state free-precession (b-SSFP) sequence was used to acquire cine-2D and cine-3D (4D) images. In the case of Patient 1 (right midlobe lesion, ~40 mm diameter), tumor motion was well correlated with diaphragmatic motion. In the case of Patient 2 (left upper-lobe lesion, ~60 mm diameter), tumor motion was poorly correlated with diaphragmatic motion. Furthermore, the motion of the tumor centroid was poorly correlated with the motion of individual points on the tumor boundary, indicating significant rotation and/or deformation. These studies indicate that the image quality and acquisition speed of cine-2D MRI were adequate for motion monitoring. However, significant improvements are required to achieve comparable speeds for truly 4D MRI. Despite several challenges, rapid MRI offers a feasible and attractive tool for noninvasive, long-term motion monitoring.

  6. Hyperspectral Imaging as a Rapid Quality Control Method for Herbal Tea Blends

    Directory of Open Access Journals (Sweden)

    Majolie Djokam

    2017-03-01

    Full Text Available In South Africa, indigenous herbal teas are enjoyed due to their distinct taste and aroma. The acclaimed health benefits of herbal teas include the management of chronic diseases such as hypertension and diabetes. Quality control of herbal teas has become important due to the availability of different brands of varying quality and the production of tea blends. The potential of hyperspectral imaging as a rapid quality control method for herbal tea blends from rooibos (Aspalathus linearis), honeybush (Cyclopia intermedia), buchu (Agathosma betulina) and cancerbush (Sutherlandia frutescens) was investigated. Hyperspectral images of raw materials and intact tea bags were acquired using a sisuChema shortwave infrared (SWIR) hyperspectral pushbroom imaging system (920–2514 nm). Principal component analysis (PCA) plots showed clear discrimination between raw materials. Partial least squares discriminant analysis (PLS-DA) models correctly predicted the raw material constituents of each blend and accurately determined the relative proportions. The results were corroborated independently using ultra-high performance liquid chromatography coupled to mass spectrometry (UHPLC-MS). This study demonstrated the application of hyperspectral imaging coupled with chemometric modelling as a reliable, rapid and non-destructive quality control method for authenticating herbal tea blends and for determining relative proportions in a tea bag.
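The PCA discrimination step described above can be illustrated with plain NumPy: mean-center the spectra, take the SVD, and project onto the leading principal components. This is a generic sketch of PCA on synthetic "raw material" spectra, not the chemometric pipeline actually used in the study.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered spectra X (samples x bands) onto the
    leading principal components obtained from the SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = principal axes
    return Xc @ Vt[:n_components].T

# Two synthetic "raw materials" with different spectral shapes
rng = np.random.default_rng(0)
bands = np.linspace(0, 1, 100)
tea_a = np.sin(2 * np.pi * bands) + 0.05 * rng.standard_normal((20, 100))
tea_b = np.cos(2 * np.pi * bands) + 0.05 * rng.standard_normal((20, 100))
scores = pca_scores(np.vstack([tea_a, tea_b]))
# The two materials separate along the first principal component
sep = scores[:20, 0].mean() - scores[20:, 0].mean()
```

In a scores plot, the two groups would appear as well-separated clusters, mirroring the "clear discrimination between raw materials" reported in the abstract.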

  7. Histopathological Image Analysis Using Image Processing Techniques: An Overview

    OpenAIRE

    A. D. Belsare; M.M. Mushrif

    2012-01-01

    This paper reviews computer-assisted histopathology image analysis for cancer detection and classification. Histopathology refers to the examination of invasive or less invasive biopsy samples by a pathologist under the microscope for locating, analyzing and classifying most diseases, such as cancer. The analysis of histopathological images is done manually by the pathologist to detect disease, which leads to subjective diagnosis of the sample and varies with the level of expertise of the examine...

  8. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on estimates of the MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
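The NPS measurement from "white" (flat-field) images mentioned above is conventionally estimated by averaging periodograms of mean-subtracted regions of interest. The sketch below is a simplified version of that idea in NumPy, not the IEC 62220-1 procedure itself (which additionally prescribes detrending, ROI overlap, and axis extraction); the ROI size and pixel pitch are illustrative.

```python
import numpy as np

def noise_power_spectrum(flat, roi=64, px=0.1):
    """Estimate the 2D NPS of a flat-field image by averaging periodograms
    of mean-subtracted, non-overlapping ROIs (simplified flat-field method).

    px is the pixel pitch in mm; the NPS then has units of signal^2 * mm^2.
    """
    h, w = flat.shape
    spectra = []
    for i in range(0, h - roi + 1, roi):
        for j in range(0, w - roi + 1, roi):
            block = flat[i:i + roi, j:j + roi].astype(float)
            block = block - block.mean()            # remove the local DC term
            spectra.append(np.abs(np.fft.fft2(block)) ** 2)
    # Scale by pixel area over ROI sample count, per the usual NPS definition
    return np.mean(spectra, axis=0) * (px * px) / (roi * roi)

# White-noise flat field: the NPS should be flat, with mean ~ variance * pixel area
rng = np.random.default_rng(1)
flat = rng.normal(100.0, 2.0, size=(256, 256))
nps = noise_power_spectrum(flat)
```

For uncorrelated noise of variance 4 and a 0.1 mm pitch, the estimate is flat at roughly 0.04 signal²·mm² across all spatial frequencies.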

  9. Viking image processing. [digital stereo imagery and computer mosaicking]

    Science.gov (United States)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  10. Image processing and analysis with graphs theory and practice

    CERN Document Server

    Lézoray, Olivier

    2012-01-01

    Covering the theoretical aspects of image processing and analysis through the use of graphs in the representation and analysis of objects, Image Processing and Analysis with Graphs: Theory and Practice also demonstrates how these concepts are indispensable for the design of cutting-edge solutions for real-world applications. Explores new applications in computational photography, image and video processing, computer graphics, recognition, medical and biomedical imaging. With the explosive growth in image production, in everything from digital photographs to medical scans, there has been a drast

  11. FunImageJ: a Lisp framework for scientific image processing.

    Science.gov (United States)

    Harrington, Kyle I S; Rueden, Curtis T; Eliceiri, Kevin W

    2017-11-02

    FunImageJ is a Lisp framework for scientific image processing built upon the ImageJ software ecosystem. The framework provides a natural functional style of programming, while accounting for the performance requirements necessary for the big data processing commonly encountered in biological image analysis. Freely available as a plugin for Fiji (http://fiji.sc/#download). Installation and use instructions are available at http://imagej.net/FunImageJ. kharrington@uidaho.edu. Supplementary data are available at Bioinformatics online.

  12. The use of semiconductor processes for the design and characterization of a rapid thermal processor

    Science.gov (United States)

    Hodul, David; Metha, Sandeep

    1989-02-01

    While a variety of tools are used for the design of semiconductor equipment, processes themselves are often the most informative, providing a convenient and direct method for improving both the hardware and the process. Experiments have been done to evaluate the performance of a commercial rapid thermal processing (RTP) instrument. The effects of chamber geometry and optics as well as the time/temperature recipe were probed using rapid thermal oxidation (RTO), sintering of tungsten silicide and molybdenum silicide, and partial activation of implanted ions. We show how sheet resistance and film thickness maps can be used to determine the dynamic and static temperature uniformity.

  13. Adaptive multiparameter control: application to a Rapid Thermal Processing process; Commande Adaptative Multivariable: Application a un Procede de Traitement Thermique Rapide

    Energy Technology Data Exchange (ETDEWEB)

    Morales Mago, S.J.

    1995-12-20

    In this work, the problem of temperature uniformity control in rapid thermal processing is addressed by means of multivariable adaptive control. Rapid Thermal Processing (RTP) is a set of techniques proposed for semiconductor fabrication processes such as annealing, oxidation, chemical vapour deposition and others. Product quality depends on two main issues: precise trajectory following and spatial temperature uniformity. RTP is a fabrication technique that requires a sophisticated real-time multivariable control system to achieve acceptable results. Modelling of the thermal behaviour of the process leads to very complex mathematical models. These are the reasons why adaptive control techniques were chosen. A multivariable linear discrete-time model of the highly non-linear process is identified on-line, using an identification scheme which includes supervisory actions. This identified model, combined with a multivariable predictive control law, makes the controller robust to system variations. The control laws are obtained by minimization of a quadratic cost function or by pole placement. In some of these control laws, a partial state reference model was included. This reference model incorporates an appropriate tracking capability into the control law. Experimental results of the application of the multivariable adaptive control laws on an RTP system are presented. (author)

  14. Survey on Neural Networks Used for Medical Image Processing.

    Science.gov (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  15. ReagentTF: a rapid and versatile optical clearing method for biological imaging (Conference Presentation)

    Science.gov (United States)

    Yu, Tingting; Zhu, Jingtan; Li, Yusha; Qi, Yisong; Xu, Jianyi; Gong, Hui; Luo, Qingming; Zhu, Dan

    2017-02-01

    The emergence of various optical clearing methods provides great potential for imaging deep inside tissues when combined with multiple-labelling and microscopic imaging techniques. They were generally developed for specific imaging demands and thus present some non-negligible limitations, such as long incubation times, tissue deformation, fluorescence quenching, and incompatibility with immunostaining or lipophilic tracers. In this study, we developed a rapid and versatile clearing method, termed ReagentTF, for deep imaging of various fluorescent samples. This method can not only efficiently clear embryos, neonatal whole brains and adult thick brain sections by simple immersion in aqueous mixtures with minimal volume change, but can also preserve the fluorescence of various fluorescent proteins while remaining compatible with immunostaining and lipophilic neuronal dyes. We demonstrate the effectiveness of this method in reconstructing the cell distributions of mouse hippocampus, visualizing the neural projection from CA1 (Cornu Ammonis 1) to HDB (nucleus of the horizontal limb of the diagonal band), and observing the growth of the forelimb plexus in whole-mount embryos. These results suggest that ReagentTF is useful for large-volume imaging and will be an option for the deep imaging of biological tissues.

  16. Image Processing Based Signature Verification Technique to Reduce Fraud in Financial Institutions

    Directory of Open Access Journals (Sweden)

    Hussein Walid

    2016-01-01

    Full Text Available Handwritten signatures are broadly utilized for personal verification in financial institutions, which creates the need for a robust automatic signature verification tool. This tool aims to reduce fraud in all related financial transaction sectors. This paper proposes an online, robust, and automatic signature verification technique using recent advances in image processing and machine learning. Once the image of a handwritten signature for a customer is captured, several pre-processing steps are performed on it, including filtration and detection of the signature edges. Afterwards, a feature extraction process is applied on the image to extract Speeded Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT) features. Finally, a verification process is developed and applied to compare the extracted image features with those stored in the database for the specified customer. Results indicate the high accuracy, simplicity, and rapidity of the developed technique, which are the main criteria by which a signature verification tool is judged in banking and other financial institutions.
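The overall verify-against-template pipeline can be sketched without any external library. Note the hedge: the paper uses SURF/SIFT keypoint features, which require a library such as OpenCV (`cv2.SIFT_create` and a descriptor matcher); the crude grid-of-edge-energy descriptor below is only a stand-in so the pipeline shape (preprocess, edge-detect, extract features, compare with stored template) is visible.

```python
import numpy as np

def sobel_edges(img):
    """Edge magnitude via Sobel kernels (stand-in for the paper's
    filtration/edge-detection preprocessing)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def signature_descriptor(img, grid=4):
    """Crude global descriptor: edge energy per grid cell, L2-normalized.
    (The paper extracts SURF/SIFT keypoint descriptors instead.)"""
    edges = sobel_edges(img.astype(float))
    h, w = edges.shape
    cells = [edges[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].sum()
             for i in range(grid) for j in range(grid)]
    v = np.array(cells)
    return v / (np.linalg.norm(v) + 1e-12)

def verify(candidate, template, threshold=0.9):
    """Accept when the cosine similarity of the descriptors exceeds threshold."""
    return float(candidate @ template) >= threshold

rng = np.random.default_rng(2)
genuine = rng.random((32, 32))                 # placeholder signature image
stored = signature_descriptor(genuine)         # enrolled template
accepted = verify(signature_descriptor(genuine), stored)
```

A real system would match keypoint descriptors per customer and tune the acceptance threshold against forgery samples.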

  17. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Application of image processing technology in yarn hairiness detection

    OpenAIRE

    Zhang, Guohong; Binjie XIN

    2016-01-01

    Digital image processing technology is one of the new methods for yarn detection, which can realize digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image processing technology based method is...

  20. Optimizing signal and image processing applications using Intel libraries

    Science.gov (United States)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  1. Multimodal imaging documentation of rapid evolution of retinal changes in handheld laser-induced maculopathy.

    Science.gov (United States)

    Dhrami-Gavazi, Elona; Lee, Winston; Balaratnasingam, Chandrakumar; Kayserman, Larisa; Yannuzzi, Lawrence A; Freund, K Bailey

    2015-01-01

    To use multimodal imaging to document the relatively rapid clinical evolution of handheld laser-induced maculopathy (HLIM), and to demonstrate that inadvertent ocular injury can result from devices mislabeled with respect to their power specifications. The clinical course of a 17-year-old male who sustained self-inflicted central macular damage from a 20-25 s direct stare at a red-spectrum handheld laser pointer ordered from an internet retailer is provided. A retrospective review of multimodal imaging that includes fundus photography, fluorescein angiography, MultiColor reflectance, eye-tracked spectral domain optical coherence tomography (SD-OCT), fundus autofluorescence, and microperimetry is used to describe the evolving clinical manifestations of HLIM in the first 3 months. Curvilinear bands of dense hyperreflectivity extending from the outer retina and following the Henle fibers were seen on SD-OCT immediately after injury. This characteristic appearance had largely resolved by 2 weeks. There was significant non-uniformity in the morphological characteristics of HLIM lesions between autofluorescence and reflectance images. The pattern of lesion evolution was also significantly different between imaging modalities. Analysis of the laser device showed its wavelength to be correctly listed, but its measured power was 102.5-105 mW, far exceeding the labeled specification. As in other cases of laser-induced maculopathy, the characteristic SD-OCT finding can undergo rapid resolution in the span of several days. In the absence of this finding, other multimodal imaging clues and a careful history may aid in recognizing this diagnosis. A greater awareness regarding inaccurate labeling on some of these devices could help reduce the frequency of this preventable entity.

  2. Brain activity-based image classification from rapid serial visual presentation.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Vankov, Andrey; Ramirez, Rey R; Makeig, Scott

    2008-10-01

    We report the design and performance of a brain-computer interface (BCI) system for real-time single-trial binary classification of viewed images based on participant-specific dynamic brain response signatures in high-density (128-channel) electroencephalographic (EEG) data acquired during a rapid serial visual presentation (RSVP) task. Image clips were selected from a broad area image and presented in rapid succession (12/s) in 4.1-s bursts. Participants indicated by subsequent button press whether or not each burst of images included a target airplane feature. Image clip creation and search path selection were designed to maximize user comfort and maintain user awareness of spatial context. Independent component analysis (ICA) was used to extract a set of independent source time-courses and their minimally-redundant low-dimensional informative features in the time and time-frequency amplitude domains from 128-channel EEG data recorded during clip burst presentations in a training session. The naive Bayes fusion of two Fisher discriminant classifiers, computed from the 100 most discriminative time and time-frequency features, respectively, was used to estimate the likelihood that each clip contained a target feature. This estimator was applied online in a subsequent test session. Across eight training/test session pairs from seven participants, median area under the receiver operator characteristic curve, by tenfold cross validation, was 0.97 for within-session and 0.87 for between-session estimates, and was nearly as high (0.83) for targets presented in bursts that participants mistakenly reported to include no target features.
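The classifier stage described above (Fisher discriminants on two feature domains, fused by naive Bayes) can be sketched in NumPy. This is our illustrative reconstruction under stated assumptions, not the authors' BCI code: the data are synthetic Gaussians, the two loops stand in for the "time" and "time-frequency" feature domains, and the fusion simply sums per-classifier class log-likelihoods of the 1-D projection scores.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher discriminant direction w proportional to Sw^-1 (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)                 # within-class scatter
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)

def gaussian_loglik(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

def naive_bayes_fusion(scores, params):
    """Fuse 1-D classifier scores by summing class log-likelihoods,
    assuming independence between classifiers (the naive Bayes step)."""
    ll_target = sum(gaussian_loglik(s, p["mu1"], p["var"])
                    for s, p in zip(scores, params))
    ll_nontarget = sum(gaussian_loglik(s, p["mu0"], p["var"])
                       for s, p in zip(scores, params))
    return ll_target - ll_nontarget                  # > 0 favors "target"

# Two synthetic feature domains standing in for time and time-frequency features
rng = np.random.default_rng(3)
params, probe_scores = [], []
for dim in (10, 12):
    X0 = rng.standard_normal((200, dim))             # non-target trials
    X1 = rng.standard_normal((200, dim)) + 0.8       # target trials, shifted
    w = fisher_direction(X0, X1)
    s0, s1 = X0 @ w, X1 @ w
    params.append({"mu0": s0.mean(), "mu1": s1.mean(),
                   "var": 0.5 * (s0.var() + s1.var())})
    probe_scores.append(X1.mean(axis=0) @ w)         # score of an average target

evidence = naive_bayes_fusion(probe_scores, params)  # positive: likely target
```

The fused evidence plays the role of the per-clip target-likelihood estimate applied online in the test sessions.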

  3. Rapid intracerebroventricular delivery of Cu-DOTA-etanercept after peripheral administration demonstrated by PET imaging

    Directory of Open Access Journals (Sweden)

    Chen Xiaoyuan

    2009-02-01

    Full Text Available Abstract Background The cytokines interleukin-1 and tumor necrosis factor (TNF), and the cytokine blocker interleukin-1 receptor antagonist, have all been demonstrated to enter the cerebrospinal fluid (CSF) following peripheral administration. Recent reports of rapid clinical improvement in patients with Alzheimer's disease and related forms of dementia following perispinal administration of etanercept, a TNF antagonist, suggest that etanercept also has the ability to reach the brain CSF. To investigate, etanercept was labeled with a positron emitter to enable visualization of its intracranial distribution following peripheral administration by PET in an animal model. Findings Radiolabeling of etanercept with the PET emitter 64Cu was performed by DOTA (1,4,7,10-tetraazadodecane-N,N',N",N"'-tetraacetic acid) conjugation of etanercept, followed by column purification and 64Cu labeling. MicroPET imaging revealed accumulation of 64Cu-DOTA-etanercept within the lateral and third cerebral ventricles within minutes of peripheral perispinal administration in a normal rat anesthetized with isoflurane, with concentration within the choroid plexus and into the CSF. Conclusion Synthesis of 64Cu-DOTA-etanercept enabled visualization of its intracranial distribution by microPET imaging. MicroPET imaging documented rapid accumulation of 64Cu-DOTA-etanercept within the choroid plexus and the cerebrospinal fluid within the cerebral ventricles of a living rat after peripheral administration. Further study of the effects of etanercept and TNF at the level of the choroid plexus may yield valuable insights into the pathogenesis of Alzheimer's disease.

  4. Sliding mean edge estimation. [in digital image processing]

    Science.gov (United States)

    Ford, G. E.

    1978-01-01

    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.
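Under an i.i.d. Gaussian noise model, the maximum-likelihood location of a step edge within a line-scan interval is the split point that minimizes the total squared deviation around the left and right sample means, which can be computed with sliding means. The sketch below is a generic 1-D illustration of that principle, not the paper's algorithm; the scan data are synthetic.

```python
import numpy as np

def ml_edge_location(scan):
    """Maximum-likelihood step-edge location in a 1-D line scan under
    i.i.d. Gaussian noise: choose the split minimizing the total squared
    error around the left and right sample means."""
    n = len(scan)
    best_k, best_sse = None, np.inf
    for k in range(1, n):                # candidate edge between samples k-1 and k
        left, right = scan[:k], scan[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# A noisy step from level 10 to level 50 at index 40
rng = np.random.default_rng(4)
scan = np.concatenate([np.full(40, 10.0), np.full(60, 50.0)]) + rng.normal(0, 1.0, 100)
edge = ml_edge_location(scan)
```

Comparing the best split's residual to the no-edge residual gives a likelihood-ratio test for whether an edge exists in the interval at all, matching the detect-then-locate structure described in the abstract.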

  5. Experiences with digital processing of images at INPE

    Science.gov (United States)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  6. A color image processing pipeline for digital microscope

    Science.gov (United States)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in biology, medicine, and other fields. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for the digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern for the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have a high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the color image processing pipeline is designed and optimized with the purpose of obtaining high-quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements in the medicine and biology fields very well. The major steps of the proposed color imaging pipeline include: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction, and gamma correction.
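A few of the pipeline stages listed above can be sketched in NumPy to show how they compose. This is a minimal illustration, not the paper's pipeline: defect-pixel removal, noise reduction, and tone scale are omitted, and the black level, white-balance gains, and bit depth are invented example values.

```python
import numpy as np

def process_raw(rgb, black=64, wb=(1.8, 1.0, 1.5),
                ccm=None, gamma=2.2, white=1023):
    """Minimal sketch of a few color-pipeline stages: black-level
    adjustment, white balance, RGB color correction, and gamma
    correction, applied to a (H, W, 3) raw array."""
    x = (rgb.astype(float) - black).clip(0) / (white - black)  # black level + normalize
    x = x * np.array(wb)                                       # per-channel white balance
    if ccm is None:
        ccm = np.eye(3)                                        # identity color matrix
    x = x @ ccm.T                                              # RGB color correction
    x = x.clip(0.0, 1.0) ** (1.0 / gamma)                      # gamma correction
    return (x * 255).round().astype(np.uint8)

# A tiny uniform "raw" frame from a hypothetical 10-bit sensor
raw = np.full((2, 2, 3), 512, dtype=np.uint16)
out = process_raw(raw)
```

In a real camera pipeline the color correction matrix and white-balance gains come from sensor calibration rather than fixed constants.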

  7. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    Science.gov (United States)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  8. Rapid and Quantitative Assessment of Cancer Treatment Response Using In Vivo Bioluminescence Imaging

    Directory of Open Access Journals (Sweden)

    Alnawaz Rehemtulla

    2000-01-01

    Full Text Available Current assessment of orthotopic tumor models in animals utilizes survival as the primary therapeutic end point. In vivo bioluminescence imaging (BLI) is a sensitive imaging modality that is rapid and accessible, and may comprise an ideal tool for evaluating antineoplastic therapies [1]. Using human tumor cell lines constitutively expressing luciferase, the kinetics of tumor growth and response to therapy have been assessed in intraperitoneal [2], subcutaneous, and intravascular [3] cancer models. However, use of this approach for evaluating orthotopic tumor models has not been demonstrated. In this report, the ability of BLI to noninvasively quantitate the growth and therapy-induced cell kill of orthotopic rat brain tumors derived from 9L gliosarcoma cells genetically engineered to stably express firefly luciferase (9LLuc) was investigated. Intracerebral tumor burden was monitored over time by quantitation of photon emission and tumor volume using a cryogenically cooled CCD camera and magnetic resonance imaging (MRI), respectively. There was excellent correlation (r=0.91) between detected photons and tumor volume. A quantitative comparison of tumor cell kill determined from serial MRI volume measurements and BLI photon counts following 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) treatment revealed that both imaging modalities yielded statistically similar cell kill values (P=.951). These results provide direct validation of BLI imaging as a powerful and quantitative tool for the assessment of antineoplastic therapies in living animals.

  9. Computationally rapid method of estimating signal-to-noise ratio for phased array image reconstructions.

    Science.gov (United States)

    Wiens, Curtis N; Kisch, Shawn J; Willig-Onwuachi, Jacob D; McKenzie, Charles A

    2011-10-01

    Measuring signal-to-noise ratio (SNR) for parallel MRI reconstructions is difficult due to spatially dependent noise amplification. Existing approaches for measuring parallel MRI SNR are limited because they are not applicable to all reconstructions, require significant computation time, or rely on repeated image acquisitions. A new SNR estimation approach is proposed, a hybrid of the repeated image acquisitions method detailed in the National Electrical Manufacturers Association (NEMA) standard and the Monte Carlo based pseudo-multiple replica method, in which the difference between images reconstructed from the unaltered acquired data and that same data reconstructed after the addition of calibrated pseudo-noise is used to estimate the noise in the parallel MRI image reconstruction. This new noise estimation method can be used to rapidly compute the pixel-wise SNR of the image generated from any parallel MRI reconstruction of a single acquisition. SNR maps calculated with the new method are validated against existing SNR calculation techniques. Copyright © 2011 Wiley-Liss, Inc.
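The pseudo-multiple replica idea can be sketched as follows: add calibrated pseudo-noise to the acquired data, reconstruct, and measure the spread of the difference against the baseline reconstruction. The abstract does not give the actual parallel MRI reconstruction, so the toy linear `reconstruct` function and all parameter values below are placeholders for illustration only.

```python
import random
import statistics

def reconstruct(data):
    # Stand-in for any (possibly nonlinear) parallel MRI reconstruction;
    # here just a toy linear pipeline so the sketch is runnable.
    return [2.0 * v for v in data]

def pseudo_replica_snr(acquired, noise_std, n_replicas=200, seed=0):
    """Estimate pixel-wise SNR by adding calibrated pseudo-noise to the
    acquired data, reconstructing, and measuring the spread of the
    difference images (the hybrid NEMA / pseudo-multiple-replica idea)."""
    rng = random.Random(seed)
    baseline = reconstruct(acquired)
    diffs = [[] for _ in acquired]
    for _ in range(n_replicas):
        noisy = [v + rng.gauss(0.0, noise_std) for v in acquired]
        replica = reconstruct(noisy)
        for i, (r, b) in enumerate(zip(replica, baseline)):
            diffs[i].append(r - b)
    noise_map = [statistics.stdev(d) for d in diffs]  # per-pixel noise
    return [abs(b) / n for b, n in zip(baseline, noise_map)]

snr = pseudo_replica_snr([10.0, 5.0, 1.0], noise_std=0.1)
```

Because the difference images isolate the propagated pseudo-noise, a single acquisition suffices; brighter pixels end up with proportionally higher SNR under the same noise level.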

  10. Design of a tomato packing system by image processing and optimization processing

    Science.gov (United States)

    Li, K.; Kumazaki, T.; Saigusa, M.

    2016-02-01

    In recent years, with the development of environmental control systems in plant factories, tomato production has rapidly increased in Japan. However, with the decline in the availability of agricultural labor, there is a need to automate grading, sorting and packing operations. In this research, we designed an automatic packing program that estimates tomato weight by image processing and packs the tomatoes in an optimized configuration. Weight was estimated from pixel area properties after L*a*b* color model conversion, noise rejection, hole filling and boundary preprocessing. The packing optimization was designed with a 0-1 knapsack algorithm for dynamic combinatorial optimization.
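The packing step can be sketched as a standard 0-1 knapsack dynamic program. The abstract gives no implementation details, so the function below (with item value equal to item weight, so the packed weight approaches the box capacity without exceeding it) is an illustrative sketch, not the authors' code; the example weights and capacity are invented.

```python
def pack_box(weights_g, capacity_g):
    """0-1 knapsack (value == weight): pick a subset of tomatoes whose
    total estimated weight is maximal without exceeding the box capacity.
    Returns (total_weight, chosen_indices)."""
    n = len(weights_g)
    # dp[c] = best total weight achievable with capacity c
    dp = [0] * (capacity_g + 1)
    choice = [[False] * (capacity_g + 1) for _ in range(n)]
    for i, w in enumerate(weights_g):
        for c in range(capacity_g, w - 1, -1):  # descending: each item used once
            if dp[c - w] + w > dp[c]:
                dp[c] = dp[c - w] + w
                choice[i][c] = True
    # Backtrack to recover which tomatoes were packed
    chosen, c = [], capacity_g
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            chosen.append(i)
            c -= weights_g[i]
    return dp[capacity_g], sorted(chosen)

total, items = pack_box([180, 150, 120, 90, 60], capacity_g=400)  # total == 390
```

With these example weights the best feasible load is 390 g (e.g. tomatoes of 180, 120 and 90 g), just under the 400 g capacity.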

  11. [Rapid 2D-3D medical image registration based on CUDA].

    Science.gov (United States)

    Li, Lingzhi; Zou, Beiji

    2014-08-01

    The medical image registration between preoperative three-dimensional (3D) scan data and an intraoperative two-dimensional (2D) image is a key technology in surgical navigation. Most previous methods need to generate 2D digitally reconstructed radiograph (DRR) images from the 3D scan volume data and then compare them using a conventional image similarity function. This procedure involves a large amount of calculation, making real-time processing difficult to achieve. In this paper, using mixed geometric and image-intensity features, we propose a new similarity measure function for fast 2D-3D registration of preoperative CT and intraoperative X-ray images. The algorithm is easy to implement, its computation time is short, and the resulting registration accuracy meets clinical requirements. In addition, the entire calculation is well suited to highly parallel numerical computation; implementing it with CUDA hardware acceleration satisfies the requirement of real-time application in surgery.
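The abstract does not specify the proposed mixed similarity function, so as a stand-in, here is normalized cross-correlation (NCC), one common intensity-based similarity used when comparing a DRR against an intraoperative X-ray image; the 4-pixel example images are invented for illustration.

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images
    (flattened pixel lists). Returns a value in [-1, 1]; higher means
    the DRR and the X-ray image agree better."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

identical = ncc([1, 2, 3, 4], [1, 2, 3, 4])  # perfectly aligned -> 1.0
inverted  = ncc([1, 2, 3, 4], [4, 3, 2, 1])  # anti-correlated  -> -1.0
```

In a registration loop this score is evaluated for each candidate pose, which is why the per-evaluation cost (and hence GPU parallelization) matters so much.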

  12. Rapid and accurate tumor-target bio-imaging through specific in vivo biosynthesis of a fluorescent europium complex.

    Science.gov (United States)

    Ye, Jing; Wang, Jianling; Li, Qiwei; Dong, Xiawei; Ge, Wei; Chen, Yun; Jiang, Xuerui; Liu, Hongde; Jiang, Hui; Wang, Xuemei

    2016-04-01

    A new and facile method for rapidly and accurately achieving tumor-targeted fluorescence images has been explored using a specifically biosynthesized europium (Eu) complex in vivo and in vitro. We demonstrated that a fluorescent Eu complex can be biosynthesized through a spontaneous molecular process in cancerous cells and tumors, but not in normal cells and tissues. In addition, proteomics analyses show that some metabolic pathways, especially those for NADPH production and glutamine metabolism, are remarkably affected during the biosynthesis process, in which molecular precursors of europium ions are reduced to fluorescent europium complexes inside cancerous cells or tumor tissues. These results prove that the specific self-biosynthesis of a fluorescent Eu complex by cancer cells or tumor tissues can provide a new strategy for accurate diagnosis and treatment in the early stages of cancers, and is thus beneficial for realizing precise surgical intervention based on cheap and readily available agents.

  13. Breast image pre-processing for mammographic tissue segmentation.

    Science.gov (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Using quantum filters to process images of diffuse axonal injury

    Science.gov (United States)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to a diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage on microscopic scale of brain tissue and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damages related to DAI. Said images can be processed with quantum filters, which accomplish high resolutions of dendritic and axonal structures both in normal and pathological state. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Image quantum processing of DAI images is made using computer algebra, specifically Maple. Quantum filter plugins construction is proposed as a future research line, which can incorporated to the ImageJ software package, making its use simpler for medical personnel.

  15. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  16. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  17. Rapid and non-destructive assessment of polyunsaturated fatty acids contents in Salmon using near-infrared hyperspectral imaging

    Science.gov (United States)

    Tao, Feifei; Mba, Ogan; Liu, Li; Ngadi, Michael

    2017-04-01

    Polyunsaturated fatty acids (PUFAs) are important nutrients present in Salmon. However, current methods for quantifying the fatty acids (FAs) contents in foods are generally based on gas chromatography (GC) technique, which is time-consuming, laborious and destructive to the tested samples. Therefore, the capability of near-infrared (NIR) hyperspectral imaging to predict the PUFAs contents of C20:2 n-6, C20:3 n-6, C20:5 n-3, C22:5 n-3 and C22:6 n-3 in Salmon fillets in a rapid and non-destructive way was investigated in this work. Mean reflectance spectra were first extracted from the region of interests (ROIs), and then the spectral pre-processing methods of 2nd derivative and Savitzky-Golay (SG) smoothing were performed on the original spectra. Based on the original and the pre-processed spectra, PLSR technique was employed to develop the quantitative models for predicting each PUFA content in Salmon fillets. The results showed that for all the studied PUFAs, the quantitative models developed using the pre-processed reflectance spectra by "2nd derivative + SG smoothing" could improve their modeling results. Good prediction results were achieved with RP and RMSEP of 0.91 and 0.75 mg/g dry weight, 0.86 and 1.44 mg/g dry weight, 0.82 and 3.01 mg/g dry weight for C20:3 n-6, C22:5 n-3 and C20:5 n-3, respectively after pre-processing by "2nd derivative + SG smoothing". The work demonstrated that NIR hyperspectral imaging could be a useful tool for rapid and non-destructive determination of the PUFA contents in fish fillets.
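The "2nd derivative + SG smoothing" pre-processing can be illustrated on a toy spectrum. As a simplification, the sketch below uses a central second difference followed by a boxcar (moving-average) smoother in place of a true Savitzky-Golay polynomial filter; the quadratic test spectrum is invented so the expected result is known.

```python
def second_derivative(spectrum):
    """Central second difference: a simple discrete 2nd derivative."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

def moving_average(x, window=5):
    """Boxcar smoothing, a simplified stand-in for Savitzky-Golay."""
    half = window // 2
    return [sum(x[i - half:i + half + 1]) / window
            for i in range(half, len(x) - half)]

# A quadratic baseline 0.5*t^2 has a constant 2nd derivative of 1.0,
# so the pre-processed signal should be flat.
spec = [0.5 * t * t for t in range(20)]
deriv = moving_average(second_derivative(spec))
```

In practice the derivative step removes additive baseline shifts (a constant or linear offset vanishes), which is why it improved the PLSR models here; the smoothing step suppresses the noise that differentiation amplifies.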

  18. Image Processing on Morphological Traits of Grape Germplasm

    OpenAIRE

    Shiraishi, Mikio; Shiraishi, Shinichi; Kurushima, Takashi

    1994-01-01

    An image processing method for grape plants was developed to make the description of morphological traits more accurate and efficient. A plant image was taken with a still video camera and displayed through digital-to-analog conversion. A high-quality image was obtained at a horizontal resolution of 500 TV lines, resolving in particular the degree of density of prostrate hairs between mature leaf veins (lower surface). The analog image was stored on an optical disk to preserve semipermanentl...

  19. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  20. [A novel image processing and analysis system for medical images based on IDL language].

    Science.gov (United States)

    Tang, Min

    2009-08-01

    Medical image processing and analysis system, which is of great value in medical research and clinical diagnosis, has been a focal field in recent years. Interactive data language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines, therefore, it has become an ideal software for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. The methodology is proposed to design a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and it has the advantages of extensive applicability, friendly interaction, convenient extension and favorable transplantation.

  1. Pyramidal Image-Processing Code For Hexagonal Grid

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1990-01-01

    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.

  2. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance real-time image processing hardware platform, is systematized. Image flow example programs for running the MaxSparc system are studied and analyzed, and the state of the art in Datacube system utilization is reviewed. For the next phase, an advanced real-time image processing platform for robot vision applications is to be developed. (author). 19 refs., 71 figs., 11 tabs.

  3. Vacuum Switches Arc Images Pre-processing Based on MATLAB

    Directory of Open Access Journals (Sweden)

    Huajun Dong

    2015-01-01

    Full Text Available In order to filter out noise from vacuum switch arc (VSA) images, enhance the characteristic details of the VSA images, and improve their visual quality, the VSA images were pre-processed in MATLAB: noise removal, edge detection, pseudo-color and false-color processing, and morphological processing. Furthermore, the morphological characteristics of the VSA images were extracted, including gray-value isopleths, arc area, and perimeter.

  4. IPL Processing of the Viking Orbiter Images of Mars

    Science.gov (United States)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

    1977-01-01

    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  5. Monitoring Car Drivers' Condition Using Image Processing

    Science.gov (United States)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by drowsiness of drivers. The system consists of the following three subsystems: an image capturing system with a pulsed infrared CCD camera, a system for detecting the blinking waveform from the images using a neural network with which we can extract images of face and eye areas, and a system for measuring drivers' consciousness by analyzing the waveform with a fuzzy inference technique and other methods. The third subsystem first extracts three factors from the waveform and then analyzes them with a statistical method, whereas our previous system used only one factor. Our experiments showed that the three-factor method used this time was more effective for measuring drivers' consciousness than the one-factor method described in our previous paper. Moreover, the method is more suitable for fitting the parameters of the system to each individual driver.

  6. Interactive Digital Image Processing Investigation. Phase II.

    Science.gov (United States)

    1980-04-01

    Fragment of the ITRES program documentation (sections 7.7.2 ITRES Control Flow and 7.7.3 Program Subroutine Description). Control flow: CALL ACUSTS to accumulate statistics for the total image; DO for every field: CALL ACUSTS to accumulate stats for the field; ENDDO; then calculate total image statistics. Subroutine ACUSTS. Purpose: accumulates field statistics. Usage: CALL ACUSTS (BUF... Subroutine DSPMAPP is also described.

  7. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Contents: Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  8. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The image processing platform for visual guidance is then introduced. Because the AGV guidance images contain considerable noise, they are first smoothed with a statistical sorting filter. Images sampled by the AGV have different optimal threshold segmentation points, so the method of two-dimensional maximum-entropy image segmentation is used to solve this problem. We extract the foreground in the target band by computing contour areas and obtain the centre line with a least-squares fitting algorithm. With the help of the image and physical coordinate systems, we can obtain the guidance information.
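Two-dimensional maximum-entropy segmentation extends Kapur's classic one-dimensional maximum-entropy threshold. The simpler 1-D variant, sketched below on an invented bimodal histogram, conveys the idea: choose the threshold that maximizes the summed entropies of the background and foreground gray-level distributions.

```python
from math import log

def max_entropy_threshold(hist):
    """Kapur's maximum-entropy threshold (1-D variant of the 2-D
    maximum-entropy segmentation): choose t maximizing the summed
    entropies of the background (bins < t) and foreground (bins >= t)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(p[:t])         # background probability mass
        w1 = 1.0 - w0           # foreground probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(pi / w0 * log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Toy bimodal histogram: dark guide line vs bright floor
hist = [40, 30, 5, 1, 0, 0, 1, 6, 28, 39]
t = max_entropy_threshold(hist)
```

For a clearly bimodal histogram like this, the chosen threshold falls in the valley between the two modes, separating the guide line from the floor without a manually tuned threshold.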

  9. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    ... green colorations of the maize leaves at maturity was used. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classify different levels of maturity. Keywords: Maize, Maturity, CCD Camera, Image Processing, Artificial Neural Network ...

  10. [Filing and processing systems of ultrasonic images in personal computers].

    Science.gov (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V

    1994-01-01

    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  11. Digital image processing for two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Young; Lim, Jae Yun [Cheju National University, Cheju (Korea, Republic of); No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1992-07-01

    A photographic method to measure the key parameters of two-phase flow is realized by using a digital image processing technique. Images of 256 x 256 pixels with 8-bit gray levels are used to generate the image data, which are processed to obtain the two-phase flow parameters. It is observed that the key parameters can be identified by treating the data obtained with the digital image processing technique.
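With a binarized 8-bit image, one key two-phase flow parameter, the area void fraction, reduces to a pixel count. The fixed gray-level threshold and the toy frame below are assumptions for illustration; the abstract does not state how the phases were discriminated.

```python
def void_fraction(image, threshold=128):
    """Area void fraction of a two-phase flow image: the fraction of
    pixels whose 8-bit gray level marks the gas phase (here taken to be
    pixels above a fixed threshold, an assumption for illustration)."""
    gas = sum(1 for row in image for px in row if px > threshold)
    total = sum(len(row) for row in image)
    return gas / total

# 4x4 toy frame: 4 bright "bubble" pixels out of 16
frame = [
    [ 20,  20, 200, 200],
    [ 20,  20, 200, 200],
    [ 20,  20,  20,  20],
    [ 20,  20,  20,  20],
]
alpha = void_fraction(frame)  # 4/16 = 0.25
```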

  12. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang, Aiguo; Luo, Chagen; Zhou, Chao

    2010-01-01

    International audience; Cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvements over the past decade. Digital image processing has been applied to detect pavement cracks for its advantages of carrying a large amount of information and enabling automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...

  13. IMAGE PROCESSING METHOD TO MEASURE SUGARCANE LEAF AREA

    OpenAIRE

    Sanjay B Patil; Dr Shrikant K Bodhe

    2011-01-01

    In order to increase the average sugarcane yield per acre at minimum cost, farmers are adopting precision farming techniques. This paper covers the measurement of sugarcane leaf area based on an image processing method, which is useful for monitoring plant growth, analyzing fertilizer deficiency and environmental stress, and measuring disease severity. In the image processing method, leaf area is calculated through a pixel-count statistic. Unit pixels in the same digital images represent the same size...
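The pixel-count statistic reduces to counting segmented leaf pixels and scaling by a camera calibration factor (area per pixel, which is constant across a given image since unit pixels represent the same physical size). The binary mask and calibration value below are invented for illustration.

```python
def leaf_area_cm2(mask, cm2_per_pixel):
    """Leaf area from a binary segmentation mask: count leaf pixels (1s)
    and scale by the camera calibration (cm^2 per pixel, assumed known
    from imaging a reference object of known size)."""
    leaf_pixels = sum(row.count(1) for row in mask)
    return leaf_pixels * cm2_per_pixel

mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
area = leaf_area_cm2(mask, cm2_per_pixel=0.01)  # 8 leaf pixels -> 0.08 cm^2
```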

  14. Future trends in image processing software and hardware

    Science.gov (United States)

    Green, W. B.

    1979-01-01

    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  15. Optimization of an on-board imaging system for extremely rapid radiation therapy

    Science.gov (United States)

    Cherry Kemmerling, Erica M.; Wu, Meng; Yang, He; Maxim, Peter G.; Loo, Billy W.; Fahrig, Rebecca

    2015-01-01

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken on the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by how much their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration
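Comparing a realistic registration against the gold standard by pointwise deformation-field distance can be sketched as follows. The mean Euclidean error below is one simple summary statistic; the paper's exact metric is not fully specified in the abstract, and the two-voxel example fields are invented.

```python
from math import sqrt

def mean_field_error(gold, test):
    """Mean pointwise Euclidean distance between two deformation fields,
    given as lists of (dx, dy, dz) displacement vectors at matched voxels.
    Smaller means the test registration is closer to the gold standard."""
    dists = [sqrt(sum((g - t) ** 2 for g, t in zip(gv, tv)))
             for gv, tv in zip(gold, test)]
    return sum(dists) / len(dists)

gold = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
test = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
err = mean_field_error(gold, test)  # (0 + 2) / 2 = 1.0
```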

  16. MR Diffusion Tensor Imaging Detects Rapid Microstructural Changes in Amygdala and Hippocampus Following Fear Conditioning in Mice

    Science.gov (United States)

    Ding, Abby Y.; Li, Qi; Zhou, Iris Y.; Ma, Samantha J.; Tong, Gehua; McAlonan, Grainne M.; Wu, Ed X.

    2013-01-01

    Background Following fear conditioning (FC), ex vivo evidence suggests that early dynamics of cellular and molecular plasticity in amygdala and hippocampal circuits mediate responses to fear. Such altered dynamics in fear circuits are thought to be etiologically related to anxiety disorders including posttraumatic stress disorder (PTSD). Consistent with this, neuroimaging studies of individuals with established PTSD in the months after trauma have revealed changes in brain regions responsible for processing fear. However, whether early changes in fear circuits can be captured in vivo is not known. Methods We hypothesized that in vivo magnetic resonance diffusion tensor imaging (DTI) would be sensitive to rapid microstructural changes elicited by FC in an experimental mouse PTSD model. We employed a repeated measures paired design to compare in vivo DTI measurements before, one hour after, and one day after FC in exposed mice (n = 18). Results Using voxel-wise repeated measures analysis, fractional anisotropy (FA) significantly increased then decreased in amygdala, decreased then increased in hippocampus, and was increasing in cingulum and adjacent gray matter at one hour and one day post-FC, respectively. These findings demonstrate that DTI is sensitive to early changes in brain microstructure following FC, and that FC elicits distinct, rapid in vivo responses in amygdala and hippocampus. Conclusions Our results indicate that DTI can detect rapid microstructural changes in brain regions known to mediate fear conditioning in vivo. DTI indices could be explored as a translational tool to capture potential early biological changes in individuals at risk for developing PTSD. PMID:23382811

  17. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization.

    Directory of Open Access Journals (Sweden)

    Daniel Kress

    Full Text Available Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

  18. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    Science.gov (United States)

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  19. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization.

    Science.gov (United States)

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

  20. Rapid non-contrast magnetic resonance imaging for post appendectomy intra-abdominal abscess in children

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Megan H. [Washington University School of Medicine in St. Louis, Mallinckrodt Institute of Radiology, St. Louis, MO (United States); Eutsler, Eric P.; Khanna, Geetika [Washington University School of Medicine in St. Louis, Pediatric Radiology, Mallinckrodt Institute of Radiology, St. Louis, MO (United States); Sheybani, Elizabeth F. [Mercy Hospital St. Louis, Department of Radiology, St. Louis, MO (United States)

    2017-07-15

    Acute appendicitis, especially if perforated at presentation, is often complicated by postoperative abscess formation. The detection of a postoperative abscess relies primarily on imaging. This has traditionally been done with contrast-enhanced computed tomography. Non-contrast magnetic resonance imaging (MRI) has the potential to accurately detect intra-abdominal abscesses, especially with the use of diffusion-weighted imaging (DWI). To evaluate our single-center experience with a rapid non-contrast MRI protocol evaluating post-appendectomy abscesses in children with persistent postsurgical symptoms. In this retrospective, institutional review board-approved study, all patients underwent a clinically indicated non-contrast 1.5- or 3-Tesla abdomen/pelvis MRI consisting of single-shot fast spin echo, inversion recovery and DWI sequences. All MRI studies were reviewed by two blinded pediatric radiologists to identify the presence of a drainable fluid collection. Each fluid collection was further characterized as accessible or not accessible for percutaneous or transrectal drainage. Imaging findings were compared to clinical outcome. Seven of the 15 patients had a clinically significant fluid collection, and 5 of these patients were treated with percutaneous drain placement or exploratory laparotomy. The other patients had a phlegmon or a clinically insignificant fluid collection and were discharged home within 48 h. Rapid non-contrast MRI utilizing fluid-sensitive and DWI sequences can be used to identify drainable fluid collections in post-appendectomy patients. This protocol can be used to triage patients between conservative management vs. abscess drainage without oral/intravenous contrast or exposure to ionizing radiation. (orig.)

  1. Large scale parallel document image processing

    NARCIS (Netherlands)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K

    2008-01-01

    Building a system that allows searching a very large database of document images requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience in dealing with large data sets due to an increasing number of measurement instruments. The

  2. 8th International Image Processing and Communications Conference

    CERN Document Server

    2017-01-01

    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 8th International Image Processing and Communications Conference (IP&C 2016) held in Bydgoszcz, Poland, September 7-9, 2016. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered.

  3. Implementing full backtracking facilities for Prolog-based image processing

    Science.gov (United States)

    Jones, Andrew C.; Batchelor, Bruce G.

    1995-10-01

    PIP (Prolog image processing) is a system currently under development at UWCC, designed to support interactive image processing using the Prolog programming language. In this paper we discuss Prolog-based image processing paradigms and present a meta-interpreter developed by the first author, designed to support an approach to image processing in PIP which is more in the spirit of Prolog than was previously possible. This meta-interpreter allows backtracking over image processing operations in a manner transparent to the programmer. Currently, for space-efficiency, the programmer needs to indicate the operations over which the system may backtrack in a program; however, a number of extensions to the present work, including a more intelligent approach intended to obviate this need, are mentioned at the end of this paper; the present meta-interpreter will provide a basis for investigating these in the future.

  4. 6th International Image Processing and Communications Conference

    CERN Document Server

    2015-01-01

    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 6th International Image Processing and Communications Conference (IP&C 2014) held in Bydgoszcz, 10-12 September 2014. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered.

  5. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    Science.gov (United States)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI)-driven, public-domain, Java-based software package for general image processing, traditionally used mainly in the life sciences. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy-specific image display environment and tools for astronomy-specific image calibration and data reduction. Although AIJ maintains the general-purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research-grade image calibration and analysis tools with a GUI-driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
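The time-series differential photometry that AIJ streamlines reduces, at its core, to dividing the target star's flux by an ensemble of comparison stars so that shared atmospheric variations cancel. A minimal sketch of that idea (not AIJ's actual implementation; all names and numbers are illustrative):

```python
import numpy as np

def differential_light_curve(target_flux, comparison_fluxes):
    """Relative photometry: divide the target by the summed comparison
    ensemble, then normalize by the median so the baseline sits near 1."""
    ensemble = np.sum(comparison_fluxes, axis=0)
    ratio = target_flux / ensemble
    return ratio / np.median(ratio)

# toy example: a star with a 1% transit dip under a shared sky trend
t = np.linspace(0.0, 1.0, 100)
trend = 1.0 + 0.05 * np.sin(2 * np.pi * t)      # transparency variation
target = 1000.0 * trend
target[40:60] *= 0.99                            # 1% transit
comps = np.vstack([500.0 * trend, 800.0 * trend])  # comparisons share the trend
lc = differential_light_curve(target, comps)     # trend cancels, dip remains
```

Because the atmospheric trend multiplies target and comparison stars alike, it divides out exactly, leaving only the 1% dip.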

  6. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data. Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique. Research type – general review.

  7. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images, up to gigapixels, can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by examples of macros in the ImageJ scripting language to demonstrate its use in concrete situations.
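The core idea behind such a plugin, processing a gigapixel slide tile by tile so that only one block is ever resident in memory, can be sketched as follows (a hypothetical illustration in Python/NumPy, not the plugin's actual Java code):

```python
import numpy as np

def process_in_tiles(image, tile, fn):
    """Apply fn independently to each tile x tile block and reassemble.
    Only one block is materialized per step, mimicking how a whole-slide
    image larger than RAM must be handled."""
    out = np.empty_like(image)
    h, w = image.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = fn(block)
    return out

# toy "slide": invert each tile of a small grayscale image
img = np.arange(100, dtype=np.float64).reshape(10, 10)
inverted = process_in_tiles(img, 4, lambda b: 99.0 - b)
```

A real implementation would additionally overlap tiles so that operations with spatial support (filters, segmentation) behave correctly at tile borders.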

  8. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the unprocessed input image. Network models were trained to produce, from the unprocessed input image, an output whose quality was close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. Beyond Phonology: Visual Processes Predict Alphanumeric and Nonalphanumeric Rapid Naming in Poor Early Readers

    Science.gov (United States)

    Kruk, Richard S.; Luther Ruban, Cassia

    2018-01-01

    Visual processes in Grade 1 were examined for their predictive influences in nonalphanumeric and alphanumeric rapid naming (RAN) in 51 poor early and 69 typical readers. In a lagged design, children were followed longitudinally from Grade 1 to Grade 3 over 5 testing occasions. RAN outcomes in early Grade 2 were predicted by speeded and nonspeeded…

  10. Furnace for rapid thermal processing with optical switching film disposed between heater and reflector

    NARCIS (Netherlands)

    Roozeboom, F.; Duine, P.A.; Sluis, P. van der

    2000-01-01

    A furnace (1) for Rapid Thermal Processing of a wafer (7), characterized in that the wafer (7) is heated by lamps (9), and the heat radiation is reflected by an optical switching device (15,17) which is in the reflecting state during the heating stage. During the cooling stage of the wafer (7), the

  11. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    Science.gov (United States)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
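The pointwise logarithm's effect on multiplicative speckle, turning it into additive, signal-independent noise, can be verified numerically (a synthetic sketch; the gamma speckle model and all parameters are illustrative assumptions, not the film's optics):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(10.0, 200.0, size=200_000)               # clean intensities
speckle = rng.gamma(shape=4.0, scale=0.25, size=signal.size)  # unit-mean speckle
noisy = signal * speckle                                      # multiplicative noise

lo, hi = signal < 50, signal > 160

# Before the transform, the noise amplitude scales with the signal:
lin_lo = (noisy - signal)[lo].std()
lin_hi = (noisy - signal)[hi].std()

# After the log, the noise is additive with constant variance, because
# log(noisy) - log(signal) = log(speckle) is independent of the signal:
log_lo = (np.log(noisy) - np.log(signal))[lo].std()
log_hi = (np.log(noisy) - np.log(signal))[hi].std()
```

The linear-domain residual spread grows several-fold between dim and bright regions, while the log-domain residual spread is the same in both, which is exactly the property that simplifies subsequent restoration or correlation.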

  12. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted from a combination of two information sources: the first is a statistical model adopted to mine underlying information, and the second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of image visualization and quantitative measures.

  13. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
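Of the corrections listed, flatfield correction is simple enough to sketch: dividing the raw image by the normalized illumination pattern recorded in a blank (flat) frame removes uneven illumination while preserving overall brightness. A hypothetical illustration (dark-frame subtraction, which a real pipeline would typically include, is omitted):

```python
import numpy as np

def flatfield_correct(raw, flat):
    """Divide out the illumination pattern recorded in a flat frame,
    rescaled so overall image brightness is preserved."""
    gain = flat / flat.mean()
    return raw / gain

# synthetic example: a uniform specimen under a bright central hotspot
yy, xx = np.mgrid[0:64, 0:64]
illum = 1.0 + 0.4 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 400.0)
specimen = np.full((64, 64), 100.0)
raw = specimen * illum
flat = illum.copy()           # flat frame: blank slide, same illumination
corrected = flatfield_correct(raw, flat)   # hotspot removed, field uniform
```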

  14. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
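The basic ingredient, scoring image blocks by Shannon entropy so that registration effort concentrates on texture-rich regions, can be sketched as follows (the bin count and block size are arbitrary choices, not the paper's parameters):

```python
import numpy as np

def block_entropy(img, block):
    """Shannon entropy of each block x block tile. High-entropy tiles carry
    the texture that feature matching needs; flat tiles can be skipped."""
    h, w = img.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block]
            hist, _ = np.histogram(tile, bins=32, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            scores[(y, x)] = float(-(p * np.log2(p)).sum())
    return scores

# toy image: one textured quadrant, three flat quadrants
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:32, :32] = rng.integers(0, 256, (32, 32))
scores = block_entropy(img, 32)   # only (0, 0) scores high
```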

  15. Gaussian process interpolation for uncertainty estimation in image registration.

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
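The key mechanism, a Gaussian process posterior whose variance is near zero at base-grid samples and grows between them, can be illustrated in one dimension (a generic RBF-kernel GP, not the authors' registration model; the hyperparameters are arbitrary):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length=1.0, noise=1e-8):
    """GP regression with an RBF kernel: the posterior mean interpolates
    the samples, and the posterior variance quantifies interpolation
    uncertainty, growing with distance from the base grid."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    Kss = k(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.maximum(np.diag(cov), 0.0)  # clamp roundoff negatives

x = np.array([0.0, 1.0, 2.0, 3.0])   # base grid positions
y = np.sin(x)                         # sampled intensities
xs = np.array([1.0, 1.5])             # an on-grid and a between-grid point
mean, var = gp_posterior(x, y, xs)    # var[1] >> var[0]
```

The variance at the resampled point is exactly the quantity the registration model marginalizes over.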

  16. Digital processing of stereoscopic image pairs.

    Science.gov (United States)

    Levine, M. D.

    1973-01-01

    The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.
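The comparison of corresponding points across the stereo pair can be sketched with minimal sum-of-absolute-differences block matching on one scanline, after which range follows from the standard relation Z = f·B/d (the paper's adaptive heuristics are not reproduced; the camera numbers are invented):

```python
import numpy as np

def disparity_1d(left, right, patch=3, max_d=10):
    """Minimal SAD block matching along one scanline: for each left pixel,
    find the horizontal offset into the right line that best matches.
    That offset is the stereo disparity."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for i in range(patch, n - patch):
        ref = left[i - patch:i + patch + 1]
        best, best_d = np.inf, 0
        for d in range(0, min(max_d, i - patch) + 1):
            cand = right[i - d - patch:i - d + patch + 1]
            cost = np.abs(ref - cand).sum()
            if cost < best:
                best, best_d = cost, d
        disp[i] = best_d
    return disp

# synthetic pair: each left pixel reappears 4 columns to the left in the right line
rng = np.random.default_rng(2)
left = rng.random(80)
right = np.roll(left, -4)
disp = disparity_1d(left, right)

# range from disparity: Z = f * B / d (hypothetical focal length and baseline)
focal_px, baseline_m = 500.0, 0.12
depth_m = focal_px * baseline_m / disp[7:77]
```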

  17. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T{sub 1} relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40 %. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite the improvement in acquisition speed through line-sharing, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly, equispaced points. The essence of the method is that, unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of two-dimensional analysis of the phase correlation matrix, a low complexity subspace identification of the phase
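Phase correlation itself, the translation estimator mentioned above, is compact enough to sketch: the normalized cross-power spectrum of two shifted images inverse-transforms to a delta function at the shift (integer-pixel version only; the thesis' subspace refinement is not shown):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation taking image b to image a from
    the peak of the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12            # keep phase only
    corr = np.fft.ifft2(F).real       # delta at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -7), axis=(0, 1))  # known shift (5, -7)
dy, dx = phase_correlate(shifted, img)        # recovers (5, -7)
```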

  18. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large...... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...

  19. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, one of the intermediate levels in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. Detection proceeds through image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
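Of the segmentation tools mentioned, region growing is the simplest to sketch: starting from a seed pixel, absorb 4-connected neighbours whose intensity stays within a tolerance of the seed (a generic illustration, not the authors' pipeline; the tolerance and geometry are invented):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, absorbing 4-connected neighbours
    whose intensity is within tol of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    base = int(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - base) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# toy CT slice: a dark 7x7 "nodule" on a bright background
img = np.full((20, 20), 200, dtype=np.uint8)
img[5:12, 5:12] = 40
mask = region_grow(img, (8, 8), tol=10)   # segments exactly the nodule
```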

  20. Rapid diagnosis and intraoperative margin assessment of human lung cancer with fluorescence lifetime imaging microscopy

    Directory of Open Access Journals (Sweden)

    Mengyan Wang

    2017-12-01

    Full Text Available A method of rapidly differentiating lung tumor from healthy tissue is greatly needed for both diagnosis and intraoperative margin assessment. We assessed the ability of fluorescence lifetime imaging microscopy (FLIM) to differentiate human lung cancer from normal tissue using autofluorescence, and also elucidated the underlying mechanism in tissue and cell studies. A 15-patient testing group was used to compare FLIM results with traditional histopathological diagnosis. Based on the endogenous fluorescence lifetimes of the testing group, a criterion line was proposed to distinguish normal and cancerous tissues. Then, by blinded examination of 41 sections from a validation group of 16 other patients, the sensitivity and specificity of FLIM were determined. Cellular metabolism was studied with specific perturbations of oxidative phosphorylation and glycolysis in cell studies. The fluorescence lifetime of cancerous lung tissue is consistently lower than that of normal tissue, due to decreases in both reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) lifetimes. A criterion line at a lifetime of 1920 ps can be given for differentiating human lung cancer and normal tissues. The sensitivity and specificity of FLIM for lung cancer diagnosis were determined to be 92.9% and 92.3%. These findings suggest that NADH and FAD can be used to rapidly diagnose lung cancer. FLIM is a rapid, accurate and highly sensitive technique for judgment during lung cancer surgery, and it has potential for earlier cancer detection.

  1. Rapid diagnosis and intraoperative margin assessment of human lung cancer with fluorescence lifetime imaging microscopy.

    Science.gov (United States)

    Wang, Mengyan; Tang, Feng; Pan, Xiaobo; Yao, Longfang; Wang, Xinyi; Jing, Yueyue; Ma, Jiong; Wang, Guifang; Mi, Lan

    2017-12-01

    A method of rapidly differentiating lung tumor from healthy tissue is greatly needed for both diagnosis and intraoperative margin assessment. We assessed the ability of fluorescence lifetime imaging microscopy (FLIM) to differentiate human lung cancer from normal tissue using autofluorescence, and also elucidated the underlying mechanism in tissue and cell studies. A 15-patient testing group was used to compare FLIM results with traditional histopathological diagnosis. Based on the endogenous fluorescence lifetimes of the testing group, a criterion line was proposed to distinguish normal and cancerous tissues. Then, by blinded examination of 41 sections from a validation group of 16 other patients, the sensitivity and specificity of FLIM were determined. Cellular metabolism was studied with specific perturbations of oxidative phosphorylation and glycolysis in cell studies. The fluorescence lifetime of cancerous lung tissue is consistently lower than that of normal tissue, due to decreases in both reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) lifetimes. A criterion line at a lifetime of 1920 ps can be given for differentiating human lung cancer and normal tissues. The sensitivity and specificity of FLIM for lung cancer diagnosis were determined to be 92.9% and 92.3%. These findings suggest that NADH and FAD can be used to rapidly diagnose lung cancer. FLIM is a rapid, accurate and highly sensitive technique for judgment during lung cancer surgery, and it has potential for earlier cancer detection.
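Applying the reported criterion line amounts to a one-line threshold: tissue with a mean autofluorescence lifetime below 1920 ps is called cancerous. A sketch (the example lifetimes are invented):

```python
import numpy as np

CRITERION_PS = 1920  # criterion line reported in the study, in picoseconds

def classify_flim(lifetime_ps):
    """Label measurements by the reported criterion: cancerous tissue shows
    consistently shorter autofluorescence lifetimes than normal tissue."""
    return np.where(lifetime_ps < CRITERION_PS, "cancerous", "normal")

# hypothetical mean lifetimes of four tissue sections
sections = np.array([1750.0, 2100.0, 1900.0, 2400.0])
labels = classify_flim(sections)
```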

  2. Color error in the digital camera image capture process.

    Science.gov (United States)

    Penczek, John; Boynton, Paul A; Splett, Jolene D

    2014-04-01

    The color error in images taken by digital cameras is evaluated with respect to its sensitivity to the image capture conditions. A parametric study was conducted to investigate the dependence of image color error on camera technology, illumination spectra, and lighting uniformity. The measurement conditions were selected to simulate the variation that might be expected in typical telemedicine situations. Substantial color errors were observed, depending on the measurement conditions. Several image post-processing methods were also investigated for their effectiveness in reducing the color errors. The results of this study quantify the level of color error that may occur in the digital camera image capture process, and provide guidance for improving the color accuracy through appropriate changes in that process and in post-processing.
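Color error of this kind is commonly quantified with the CIE76 color difference ΔE*ab, the Euclidean distance between two colors in CIELAB, where a ΔE* of roughly 2.3 is often cited as a just-noticeable difference (the paper's exact metric is not stated here; the patch values below are invented):

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB
    colors (L*, a*, b*). Larger values mean a more visible mismatch."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# hypothetical example: a measured reference patch vs. the camera's rendering
reference = np.array([52.0, 42.5, 20.0])
captured = np.array([55.0, 40.0, 18.0])
err = delta_e_cie76(reference, captured)   # clearly visible color error
```

Later refinements (CIE94, CIEDE2000) weight the lightness and chroma axes perceptually, but CIE76 conveys the basic measurement.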

  3. Imaging Heat and Mass Transfer Processes Visualization and Analysis

    CERN Document Server

    Panigrahi, Pradipta Kumar

    2013-01-01

    Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three dimensional. The interpretation and analysis of images recorded are discussed in the text.

  4. Schungite raw material quality evaluation using image processing method

    Science.gov (United States)

    Chertov, Aleksandr N.; Gorbunova, Elena V.; Sadovnichii, Roman V.; Rozhkova, Natalia N.

    2017-06-01

    With technologies developing rapidly, the high-carbon schungite rocks of Karelia are a promising mineral raw material for the production of active fillers for composite materials, radio-shielding materials, silicon carbide, stable aqueous dispersions, sorbents, catalysts, carbon nanomaterials, and other products. The intensive evolution of radiometric separation and sorting methods, based on the different physical phenomena occurring when minerals and their constituent chemical elements interact with different types of radiation, opens new enrichment opportunities for schungite materials. This is especially pertinent to the optical method of enrichment, which belongs to the radiometric methods. The present work is devoted to the research and development of principles for the preliminary quality assessment of raw schungite on the basis of image processing, and to the prospects of optical separation for its enrichment. The results of preliminary studies allow us to describe selective criteria for separating this raw material by the optical method, and to propose a method for assessing a quality indicator for schungite raw materials. All conceptual and theoretical fundamentals are corroborated by the results of experimental studies of schungite rock samples of different sizes, with breccia and vein textures, from the Maksovo deposit.

  5. Optical image processing by using a photorefractive spatial soliton waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bao-Lai, E-mail: liangbaolai@gmail.com [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Simmonds, Paul J. [Department of Physics and Micron School of Materials Science & Engineering, Boise State University, Boise, ID 83725 (United States); Wang, Zhao-Qi [Institute of Modern Optics, Nankai University, Tianjin 300071 (China)

    2017-04-04

    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons. - Highlights: • A coherent 4-f system with the spatial soliton waveguide as spatial frequency filter. • Manipulate the spatial frequencies of an input optical image. • Achieve edge-enhancement and direct component enhancement operations of an optical image.

  6. The Hawking evaporation process of rapidly-rotating black holes: an almost continuous cascade of gravitons

    Energy Technology Data Exchange (ETDEWEB)

    Hod, Shahar [The Ruppin Academic Center, Emek Hefer (Israel); The Hadassah Institute, Jerusalem (Israel)

    2015-07-15

    It is shown that rapidly-rotating Kerr black holes are characterized by the dimensionless ratio τ{sub gap}/τ{sub emission} = O(1), where τ{sub gap} is the average time gap between the emissions of successive Hawking quanta and τ{sub emission} is the characteristic timescale required for an individual Hawking quantum to be emitted from the black hole. This relation implies that the Hawking cascade from rapidly-rotating black holes has an almost continuous character. Our results correct some inaccurate claims that recently appeared in the literature regarding the nature of the Hawking black-hole evaporation process. (orig.)

  7. Digital image processing based mass flow rate measurement of gas/solid two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Song Ding; Peng Lihui; Lu Geng; Yang Shiyuan [Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing, 100084 (China); Yan Yong, E-mail: lihuipeng@tsinghua.edu.c [University of Kent, Canterbury, Kent CT2 7NT (United Kingdom)

    2009-02-01

    With the rapid growth of the process industry, pneumatic conveying has become widespread as a tool for the transportation of a wide variety of pulverized and granular materials. In order to improve plant control and operational efficiency, it is essential to know the parameters of the particle flow. This paper presents a digital-imaging-based method capable of measuring multiple flow parameters, including the volumetric concentration, velocity and mass flow rate of particles in gas/solid two-phase flow. The measurement system consists of a solid-state laser for illumination, a low-cost CCD camera for particle image acquisition and a microcomputer with bespoke software for particle image processing. The measurements of particle velocity and volumetric concentration share the same sensing hardware but use different exposure times and different image processing methods. By controlling the exposure time of the camera, a clear image and a motion-blurred image are obtained respectively. The clear image is thresholded with the Otsu method to separate the particles from the dark background, so that the volumetric concentration is determined by calculating the ratio between the particle area and the total area. Particle velocity is derived from the motion-blur length, which is estimated from the motion-blurred images using the travelling-wave-equation method. The mass flow rate of particles is calculated by combining the particle velocity and volumetric concentration. Simulation and experimental results indicate that the proposed method is promising for the measurement of multiple parameters of gas/solid two-phase flow.
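The concentration step described above can be sketched compactly: Otsu's method picks the threshold maximizing between-class variance, and the area ratio of particle pixels gives the concentration. The tiny "image" and the assumed velocity below are synthetic stand-ins, and the final product is only proportional to a true mass flow rate (a calibration factor involving pipe cross-section and particle density is omitted).

```python
# Sketch of the concentration measurement: threshold a grayscale particle
# image with Otsu's method, then take volumetric concentration as the ratio
# of particle pixels to total pixels. Values are synthetic.

def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bright particles (~200) on a dark background (~30):
image = [30, 25, 35, 200, 210, 28, 32, 195, 29, 31, 205, 27]
t = otsu_threshold(image)
concentration = sum(p > t for p in image) / len(image)  # area ratio
velocity_m_s = 12.0                  # assumed, from blur-length analysis
mass_flow_proxy = concentration * velocity_m_s  # ∝ kg/s given geometry
```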

  8. Two dimensional recursive digital filters for near real time image processing

    Science.gov (United States)

    Olson, D.; Sherrod, E.

    1980-01-01

    A program was designed to demonstrate the feasibility of using two-dimensional recursive digital filters for subjective image processing applications that require rapid turnaround. The concept of using a dedicated minicomputer as the processor for this application was demonstrated. The minicomputer used was the HP 1000 series E with an RTE 2 disc operating system and 32K words of memory. A Grinnel 256 x 512 x 8-bit display system was used to display the images. Sample images were provided by NASA Goddard on an 800 BPI, 9-track tape. Four 512 x 512 images representing four spectral regions of the same scene were provided. These images were filtered with enhancement filters developed during this effort.
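The appeal of recursive (IIR) filters in that era was that a short recurrence replaces a large convolution kernel. A minimal 2-D example in that spirit, not the report's actual filters, is a first-order causal smoothing pass along rows and then columns; the coefficient and image values are illustrative.

```python
# Minimal 2-D recursive (IIR) smoothing filter: a first-order causal
# recurrence applied along rows, then along columns. Each output sample
# depends on one previous output, so no large kernel is needed.

def recursive_smooth(image, a=0.5):
    """y[i][j] = (1-a)*x[i][j] + a*y[i][j-1], rows then columns."""
    rows = [list(r) for r in image]
    for r in rows:                        # horizontal pass
        for j in range(1, len(r)):
            r[j] = (1 - a) * r[j] + a * r[j - 1]
    for j in range(len(rows[0])):         # vertical pass
        for i in range(1, len(rows)):
            rows[i][j] = (1 - a) * rows[i][j] + a * rows[i - 1][j]
    return rows

noisy = [[0, 100, 0, 100],
         [100, 0, 100, 0],
         [0, 100, 0, 100],
         [100, 0, 100, 0]]
smooth = recursive_smooth(noisy)
```

Each output value is a convex combination of inputs, so the result stays within the original grey-level range while the pixel-to-pixel variation shrinks.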

  9. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
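Filtered back-projection, the algorithm named above, has two steps: ramp-filter each projection in the frequency domain, then smear the filtered values back across the image grid along their view angles. The toy sketch below (NumPy, nothing to do with the SPRINT code itself) uses the analytic projection of a centered disk, which is identical at every angle.

```python
import numpy as np

# Toy filtered back-projection: ramp-filter each projection in the
# frequency domain, then back-project along each view angle. The sinogram
# is the analytic line-integral of a centered disk (angle-independent).

def ramp_filter(projection):
    n = len(projection)
    freqs = np.fft.fftfreq(n)                       # ramp |f| filter
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def back_project(sinogram, angles, size):
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - centre, ys - centre
    for proj, theta in zip(sinogram, angles):
        s = xs * np.cos(theta) + ys * np.sin(theta)  # detector coordinate
        idx = np.clip(np.round(s + centre).astype(int), 0, size - 1)
        recon += proj[idx]                           # nearest-neighbour smear
    return recon * np.pi / len(angles)

size, radius = 64, 20
s = np.arange(size) - (size - 1) / 2.0
disk_proj = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0, None))  # line integrals
angles = np.linspace(0, np.pi, 90, endpoint=False)
sinogram = [ramp_filter(disk_proj)] * len(angles)
recon = back_project(sinogram, angles, size)
```

The computational cost is dominated by the per-angle back-projection loop, which is embarrassingly parallel across angles (or image rows); that structure is what massively parallel machines like SPRINT exploit to cut minutes down to seconds.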

  10. Detecting jaundice by using digital image processing

    Science.gov (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice presents, babies or adults must undergo clinical exams such as the "serum bilirubin" test, which can be traumatic for patients. Jaundice often presents in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a painless, non-invasive method. By acquiring digital color images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters for characterizing patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
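The paper classifies patients with a support vector machine over RGB and spectral features. As a much-simplified stand-in for that idea, the sketch below reduces each patient to one invented "yellowness" feature (blue-channel deficit of the mean skin RGB) and learns a nearest-centroid threshold; all RGB values are fabricated.

```python
# Simplified stand-in for the paper's SVM classifier: one hand-crafted
# "yellowness" feature plus a nearest-centroid threshold.
# All RGB values are fabricated for illustration.

def yellowness(rgb):
    r, g, b = rgb
    return (r + g) / 2.0 - b   # jaundiced skin: strong red/green, weak blue

def centroid_threshold(samples):
    """samples: list of (rgb, label), label 1 = jaundice, 0 = healthy."""
    pos = [yellowness(x) for x, y in samples if y == 1]
    neg = [yellowness(x) for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

train = [((220, 190, 90), 1), ((230, 200, 100), 1),   # jaundiced palms
         ((210, 170, 150), 0), ((200, 160, 140), 0)]  # healthy palms
t = centroid_threshold(train)

def predict(rgb):
    return 1 if yellowness(rgb) > t else 0
```

An SVM plays the same role as this threshold but finds a maximum-margin boundary over many features at once, which matters when RGB attributes and reflectance spectra are combined.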

  11. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology combining CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid, low-cost CO2 laser processing was used to fabricate a high-aspect-ratio microneedle master mold, in place of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach the PLGA. However, direct CO2 laser ablation of PDMS can produce poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined polymethyl methacrylate (PMMA) ablation with a two-step PDMS casting process to form a PDMS female microneedle mold, eliminating the problems of direct ablation. A self-assembled monolayer of polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step of the PDMS-to-PDMS replication. The PLGA microneedle array was then successfully released by bending the flexible, hydrophobic second-cast PDMS mold. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters; it is linked to the PMMA pattern profile and can be adjusted via the CO2 laser power and scanning speed. The proposed integrated process is maskless, simple and low-cost, with a reusable mold, making it well suited to rapid prototyping.

  12. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  13. Automatic construction of image inspection algorithm by using image processing network programming

    Science.gov (United States)

    Yoshimura, Yuichiro; Aoki, Kimiya

    2017-03-01

    In this paper, we discuss a method for the automatic programming of inspection image processing. In the industrial field, automatic program generators and expert systems are expected to shorten the period required for developing a new appearance inspection system. So-called "image processing expert systems" have been studied for nearly 30 years. We are convinced of the need to adopt a new idea. Recently, a novel type of evolutionary algorithm, called genetic network programming (GNP), has been proposed. In this study, we use GNP as a method to create inspection image processing logic. GNP evolves many directed graph structures and shows an excellent ability to formulate complex problems. We have converted this network program model to Image Processing Network Programming (IPNP). IPNP selects an appropriate image processing command based on characteristics of the input image data and the processing log, and generates visual inspection software with a series of image processing commands. It is verified from experiments that the proposed method is able to create inspection image processing programs. In a basic experiment with 200 test images, the success rate of detecting the target region was 93.5%.
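The core idea, a directed graph whose judgement nodes inspect image statistics and route execution to different processing commands, can be illustrated with a toy network. Everything below (the nodes, thresholds, and commands) is invented for illustration; the authors' IPNP evolves such graphs rather than hand-writing them.

```python
# Toy illustration of the IPNP idea (not the authors' system): a tiny
# directed graph whose judgement node inspects a simple image statistic
# and routes to different processing commands. All nodes, thresholds and
# commands are invented.

def mean_level(img):
    return sum(sum(r) for r in img) / (len(img) * len(img[0]))

def invert_cmd(img):
    return [[255 - p for p in r] for r in img]

def threshold_cmd(img, t=128):
    return [[255 if p > t else 0 for p in r] for r in img]

# Each node: (predicate, command, index of next node or None to stop).
network = [
    (lambda img: mean_level(img) > 128, invert_cmd, 1),  # bright -> invert
    (lambda img: True, threshold_cmd, None),             # always binarize
]

def run(img, node=0):
    while node is not None:
        pred, cmd, nxt = network[node]
        if pred(img):
            img = cmd(img)
        node = nxt
    return img
```

In GNP/IPNP, an evolutionary loop would mutate and recombine the graph (which predicates, which commands, which transitions) and keep the variants whose output best matches ground-truth inspection targets.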

  14. Imaging and Controlling Ultrafast Ionization Processes

    Science.gov (United States)

    Schafer, Kenneth

    2008-05-01

    We describe how the combination of an attosecond pulse train (APT) and a synchronized infrared (IR) laser field can be used to image and control ionization dynamics in atomic systems. In two recent experiments, attosecond pulses were used to create a sequence of electron wave packets (EWPs) near the ionization threshold in helium. In the first experiment^, the EWPs were created just below the ionization threshold, and the ionization probability was found to vary strongly with the IR/APT delay. Calculations that reproduce the experimental results demonstrate that this ionization control results from interference between transiently bound EWPs created by different pulses in the train. In the second experiment^, the APT was tailored to produce a sequence of identical EWPs just above the ionization threshold exactly once per laser cycle, allowing us to study a single ionization event stroboscopically. This technique has enabled us to image the coherent electron scattering that takes place when the IR field is sufficiently strong to reverse the initial direction of the electron motion, causing it to re-scatter from its parent ion. ^P. Johnsson et al., PRL 99, 233001 (2007). ^J. Mauritsson et al., PRL, to appear (2008). In collaboration with A. L'Huillier, J. Mauritsson, P. Johnsson, T. Remetter, E. Mantsen, M. Swoboda, and T. Ruchon.

  15. Novel ultra-rapid freezing particle engineering process for enhancement of dissolution rates of poorly water-soluble drugs.

    Science.gov (United States)

    Overhoff, Kirk A; Engstrom, Josh D; Chen, Bo; Scherzer, Brian D; Milner, Thomas E; Johnston, Keith P; Williams, Robert O

    2007-01-01

    An ultra-rapid freezing (URF) technology has been developed to produce high-surface-area powders composed of solid solutions of an active pharmaceutical ingredient (API) and a polymer stabilizer. A solution of API and polymer excipient(s) is spread on a cold solid surface to form a thin film that freezes in 50 ms to 1 s. This study provides an understanding of how the solvent's physical properties and the thin-film geometry influence the freezing rate and, consequently, the final physico-chemical properties of URF-processed powders. Theoretical calculations of heat transfer rates are shown to be in agreement with infrared images with 10 ms resolution. Danazol (DAN)/polyvinylpyrrolidone (PVP) powders, produced from both acetonitrile (ACN) and tert-butanol (T-BUT) as the solvent, were amorphous with high surface areas (approximately 28-30 m2/g) and enhanced dissolution rates. However, differences in surface morphology were observed and attributed to the cooling rate (film thickness) as predicted by the model. Relative to spray-freezing processes that use liquid nitrogen, URF also offers fast heat transfer rates as a result of the intimate contact between the solution and the cold solid surface, but without the complexity of cryogen evaporation (the Leidenfrost effect). The ability to produce amorphous, high-surface-area powders with submicron primary particles by a simple ultra-rapid freezing process is of practical interest in particle engineering to increase dissolution rates and, ultimately, bioavailability.

  16. Processing, analysis, recognition, and automatic understanding of medical images

    Science.gov (United States)

    Tadeusiewicz, Ryszard; Ogiela, Marek R.

    2004-07-01

    This paper presents some new ideas introducing automatic understanding of the semantic content of medical images. The idea under consideration can be seen as the next step on a path that starts with capturing images in digital form as two-dimensional data structures, proceeds through image processing as a tool for enhancing image visibility and readability, applies image analysis algorithms for extracting selected features of the images (or parts of images, e.g. objects), and ends with algorithms devoted to image classification and recognition. In the paper we try to explain why all the procedures mentioned above cannot give us full satisfaction in many important medical problems, where we need to understand the semantic sense of an image, not merely describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, along with some remarks on the successful application of such ideas for increasing the potential capabilities and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by applying a linguistic description of the picture's merit content. We then try to use new AI methods to undertake the task of automatically understanding image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of an image even if the form of the image is very different from any known pattern.

  17. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  19. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processings have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51, p < 0.0001): image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processings, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the same six pairs of

  20. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the new methods for yarn detection; it enables the digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of the development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image-processing-based method is more objective, faster and more accurate, and is the key development trend in yarn appearance evaluation.
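One way such systems quantify hairiness is to binarize the yarn image, treat the dense central band as the yarn core, and count fibre pixels protruding beyond it. The tiny "image" and the index definition below are illustrative inventions, not a standard from the surveyed literature.

```python
# Toy image-based hairiness measure: binarize a yarn image, take the dense
# central band as the yarn core, and count fibre pixels protruding beyond
# it. The 8x10 "image" and the index definition are illustrative only.

image = [
    "..#.......",
    "..####....",
    "..####..#.",
    "..####....",
    "#.####....",
    "..####....",
    "..####.#..",
    "..####....",
]

core_cols = range(2, 6)  # columns occupied by the yarn body
hair_pixels = sum(
    ch == "#"
    for row in image
    for col, ch in enumerate(row)
    if col not in core_cols
)
hairiness_index = hair_pixels / len(image)  # protruding pixels per row
```

A real system would locate the core automatically (e.g. by column-wise pixel density) and weight hairs by their protrusion length, but the objective, repeatable character of the measurement is already visible in this sketch.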

  1. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A

    1997-01-01

    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  2. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  3. Rapidity regulators in the semi-inclusive deep inelastic scattering and Drell-Yan processes

    Science.gov (United States)

    Fleming, Sean; Labun, Ou Z.

    2017-06-01

    We study the semi-inclusive limit of the deep inelastic scattering and Drell-Yan (DY) processes in soft collinear effective theory. In this regime so-called threshold logarithms must be resummed to render perturbation theory well behaved. Part of this resummation occurs via the Dokshitzer, Gribov, Lipatov, Altarelli, Parisi (DGLAP) equation, which at threshold contains a large logarithm that calls into question the convergence of the anomalous dimension. We demonstrate here that the problematic logarithm is related to rapidity divergences, and by introducing a rapidity regulator can be tamed. We show that resumming the rapidity logarithms allows us to reproduce the standard DGLAP running at threshold as long as a set of potentially large nonperturbative logarithms are absorbed into the definition of the parton distribution function (PDF). These terms could, in turn, explain the steep falloff of the PDF in the end point. We then go on to show that the resummation of rapidity divergences does not change the standard threshold resummation in DY, nor do our results depend on the rapidity regulator we choose to use.

  4. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing.

    Science.gov (United States)

    Zhao, Jing; Kwok, Rosa K W; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2016-01-01

    Reading fluency is a critical skill for improving the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is the much more dominant reading mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively. These two tasks reflect the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading fluency, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These

  5. An image-processing analysis of skin textures.

    Science.gov (United States)

    Sparavigna, A; Marazzato, R

    2010-05-01

    This paper discusses an image-processing method applied to skin texture analysis. Considering that the characterisation of human skin texture is a task only recently approached by image processing, our goal is to lay out the benefits of this technique for the quantitative evaluation of skin features and the localisation of defects. We propose a method based on a statistical approach to image pattern recognition. The results of our statistical calculations on the grey-tone distributions of the images are presented in specific diagrams, the coherence length diagrams. Using the coherence length diagrams, we were able to determine the grain size and anisotropy of skin textures. Maps showing the localisation of defects are also presented. Depending on the chosen statistical parameters of the grey-tone distribution, several procedures for defect detection can be proposed. Here, we compare the local coherence lengths with their average values. More sophisticated procedures, suggested by clinical experience, can be used to improve the image processing.
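One simple reading of a "coherence length" over grey-tone statistics (the paper's exact definition may differ) is the first autocorrelation lag at which correlation decays below 1/e: fine textures decorrelate within a pixel or two, coarse-grained textures over many. The signals below are synthetic one-dimensional stand-ins for grey-level scan lines.

```python
import math

# Illustrative coherence-length estimate: autocorrelate a line of grey
# levels and return the first lag at which the normalized autocorrelation
# falls below 1/e. Fine textures give short lengths, coarse ones longer.

def coherence_length(signal, threshold=1 / math.e):
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]
    var = sum(c * c for c in centred)
    for lag in range(1, n):
        acf = sum(centred[i] * centred[i + lag] for i in range(n - lag)) / var
        if acf < threshold:
            return lag
    return n

fine = [10, 200, 15, 190, 20, 210, 12, 195] * 4       # rapid alternation
coarse = [50] * 8 + [200] * 8 + [50] * 8 + [200] * 8  # broad patches
```

Computing such lengths locally, along different directions, yields exactly the kind of grain-size and anisotropy maps the abstract describes, and defect candidates are where a local length deviates strongly from its average.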

  6. Image and Sensor Data Processing for Target Acquisition and Recognition.

    Science.gov (United States)

    1980-11-01

    …a representative set of training images for which the ground truth is known. For each of the targets in these images, the computer calculates the n parameters… the object, with slippage limited to its width. From the results obtained so far, we have not observed significant slippage… ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT (NORTH ATLANTIC TREATY ORGANISATION). AGARD Conference Proceedings No. 290: IMAGE AND SENSOR DATA PROCESSING FOR TARGET ACQUISITION AND RECOGNITION.

  7. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  8. Fast Transforms in Image Processing: Compression, Restoration, and Resampling

    Directory of Open Access Journals (Sweden)

    Leonid P. Yaroslavsky

    2014-01-01

    Full Text Available Transform image processing methods are methods that work in the domains of image transforms, such as the Discrete Fourier, Discrete Cosine, Wavelet and the like. They have proved to be very efficient in image compression, image restoration, image resampling and geometrical transformations, and can be traced back to the early 1970s. The paper reviews these methods, with emphasis on their comparison and relationships, from the very first steps of transform image compression methods to adaptive and local adaptive filters for image restoration and up to the “compressive sensing” methods that have gained popularity in the last few years. References are made both to the first publications of the corresponding results and to more recent and more easily available ones. The review has a tutorial character and purpose.

  9. Recent applications of Chemical Imaging to pharmaceutical process monitoring and quality control.

    Science.gov (United States)

    Gowen, A A; O'Donnell, C P; Cullen, P J; Bell, S E J

    2008-05-01

    Chemical Imaging (CI) is an emerging platform technology that integrates conventional imaging and spectroscopy to attain both spatial and spectral information from an object. Vibrational spectroscopic methods, such as Near Infrared (NIR) and Raman spectroscopy, combined with imaging are particularly useful for analysis of biological/pharmaceutical forms. The rapid, non-destructive and non-invasive features of CI mark its potential suitability as a process analytical tool for the pharmaceutical industry, for both process monitoring and quality control in the many stages of drug production. This paper provides an overview of CI principles, instrumentation and analysis. Recent applications of Raman and NIR-CI to pharmaceutical quality and process control are presented; challenges facing CI implementation and likely future developments in the technology are also discussed.

  10. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when analysing this type of image, and discusses new methods whose Matlab source code can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. AOTF-based near-infrared imaging spectrometer for rapid identification of camouflaged target

    Science.gov (United States)

    Gao, Zhifan; Zeng, Libo; Wu, Qiongshui

    2014-11-01

    Acousto-optic tunable filter (AOTF) is a novel device for spectrometry. Its electronic tunability gives it a much higher wavelength scan rate than conventional mechanically tuned spectrometers, and its large angular aperture makes the AOTF particularly suitable for imaging applications. In this research, an AOTF-based near-infrared imaging spectrometer was developed. The spectrometer consists of a TeO2 AOTF module, a near-infrared imaging lens assembly, an AOTF controller, an InGaAs array detector, an image acquisition card, and a PC. A precisely designed optical wedge is placed at the emergent surface of the AOTF to counteract the inherent dispersion of TeO2, which would otherwise degrade the spatial resolution. Direct digital synthesizer (DDS) and phase-locked loop (PLL) techniques are combined for radio frequency (RF) signal synthesis: the PLL is driven by the DDS to exploit the merits of both, namely high frequency resolution, high frequency scan rate and strong resistance to spurious signals. All functions relating to wavelength scan, image acquisition, processing, storage and display are controlled by the PC. Calibration results indicate that the spectral range is 898~1670 nm, the spectral resolution is 6.8 nm (@1064 nm), the wavelength separation between frames in the spectral image assembly is 1.0 nm, and the processing time of a single image is less than 1 ms when a TV camera with a 640×512 detector is incorporated. A prototype device was assembled to test the capability of differentiating samples with similar appearances, and satisfactory results were achieved. With this device, chemical compositions and their distribution can be obtained simultaneously. The main advantages of this system are no moving parts, fast wavelength scanning and strong vibration resistance.
The proposed imaging spectrometer has a significant application prospect in the area of identification of

  12. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to produce a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in this specific context. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. Overall, image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  13. A new method of SC image processing for confluence estimation.

    Science.gov (United States)

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a powerful instrument for estimating confluency during culturing for therapeutic processes. Laboratory conditions, such as lighting, the cell container support and the image acquisition equipment, affect image quality and, consequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed enhancement algorithm couples a novel image denoising method based on the BM3D filter with an adaptive thresholding technique that corrects the uneven background. The algorithm provides a faster, easier and more reliable method than manual measurement for assessing the confluency of stem cell cultures. The scheme proves valid for predicting the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method is capable of processing cell images that already contain various defects due to either personnel mishandling or microscope limitations, and therefore provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
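    The background-correction and thresholding stage can be sketched as follows. A box-filter local mean stands in for the background estimate (the paper's BM3D denoising is not reimplemented here), and the foreground fraction of the thresholded mask serves as the confluence estimate; the kernel size and offset are illustrative choices.

```python
import numpy as np

def local_mean(img, k=33):
    """Box-filter local mean computed with an integral image; a simple
    stand-in for the background estimate of an uneven illumination field."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    ii = np.cumsum(np.cumsum(p, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))        # zero row/col for window sums
    h, w = img.shape
    s = ii[k:k+h, k:k+w] - ii[:h, k:k+w] - ii[k:k+h, :w] + ii[:h, :w]
    return s / (k * k)

def confluence(img, k=33, offset=0.05):
    """Subtract the local background, threshold the residual, and report
    the foreground pixel fraction as the confluence estimate."""
    corrected = img.astype(float) - local_mean(img, k)
    mask = corrected > offset
    return mask.mean(), mask
```

With a kernel larger than the typical cell cluster, the subtraction flattens a smoothly varying background so a single global offset separates cells from support.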

  14. VICAR-DIGITAL image processing system

    Science.gov (United States)

    Billingsley, F.; Bressler, S.; Friden, H.; Morecroft, J.; Nathan, R.; Rindfleisch, T.; Selzer, R.

    1969-01-01

    Computer program corrects various photometric, geometric and frequency response distortions in pictures. The program converts pictures to a number of elements, with each element's optical density quantized to a numerical value. The translated picture is recorded on magnetic tape in digital form for subsequent processing and enhancement by computer.
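    The quantization step described above (each picture element's optical density mapped to a numerical value) can be sketched as a linear 8-bit mapping; the actual VICAR photometric calibration is more involved, so this is only an illustration.

```python
import numpy as np

def quantize_density(density, levels=256):
    """Map a continuous optical-density picture to integer element values,
    scaling the observed density range linearly onto [0, levels-1]."""
    d = np.asarray(density, dtype=float)
    lo, hi = d.min(), d.max()
    if hi == lo:                              # flat picture: all zeros
        return np.zeros(d.shape, dtype=np.uint8)
    q = np.round((d - lo) / (hi - lo) * (levels - 1))
    return q.astype(np.uint8)
```

The resulting integer array is what would be written to tape for subsequent digital processing.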

  15. Natural image statistics and visual processing

    NARCIS (Netherlands)

    van der Schaaf, Arjen

    1998-01-01

    The visual system of a human or animal that functions in its natural environment receives huge amounts of visual information. This information is vital for the survival of the organism. In this thesis I follow the hypothesis that evolution has optimised the biological visual system to process the

  16. Three stages of emotional word processing: an ERP study with rapid serial visual presentation.

    Science.gov (United States)

    Zhang, Dandan; He, Weiqi; Wang, Ting; Luo, Wenbo; Zhu, Xiangru; Gu, Ruolei; Li, Hong; Luo, Yue-Jia

    2014-12-01

    Rapid responses to emotional words play a crucial role in social communication. This study employed event-related potentials to examine the time course of neural dynamics involved in emotional word processing. Participants performed a dual-target task in which positive, negative and neutral adjectives were rapidly presented. The early occipital P1 was found to be larger when elicited by negative words, indicating that the first stage of emotional word processing mainly differentiates between non-threatening and potentially threatening information. The N170 and the early posterior negativity were larger for positive and negative words, reflecting the emotional/non-emotional discrimination stage of word processing. The late positive component not only distinguished emotional words from neutral words, but also differentiated between positive and negative words. This represents the third stage of emotional word processing, emotion separation. The present results indicate that, similar to the three-stage model of facial expression processing, the neural processing of emotional words can also be divided into three stages. These findings suggest that the nature of emotion can be analyzed by the brain independent of stimulus type, and that the three-stage scheme may be a common model for emotional information processing in the context of limited attentional resources. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    The pyramid algorithm is potentially a powerful tool for advanced television image processing and for pattern recognition. An attempt is made to design and develop both hardware and software for a system which performs decomposition and reconstruction of digitized images by implementing the Burt pyramid algorithm.
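    The Burt pyramid decomposition and reconstruction mentioned above can be sketched in a few lines. This sketch substitutes a 2x2 block average for Burt's 5-tap Gaussian REDUCE kernel and nearest-neighbour upsampling for EXPAND; the decomposition is still exactly invertible because each Laplacian level stores the residual of its own expand step.

```python
import numpy as np

def reduce2(img):
    """REDUCE: 2x2 block average (stand-in for Burt's 5-tap kernel)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def expand2(img, shape):
    """EXPAND: nearest-neighbour upsampling back to the finer level's shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    up = up[:shape[0], :shape[1]]
    pad_r, pad_c = shape[0] - up.shape[0], shape[1] - up.shape[1]
    return np.pad(up, ((0, pad_r), (0, pad_c)), mode='edge')

def build_pyramid(img, levels):
    """Decompose into band-pass (Laplacian) levels plus a coarse residual."""
    g = img.astype(float)
    lap = []
    for _ in range(levels):
        small = reduce2(g)
        lap.append(g - expand2(small, g.shape))   # band-pass residual
        g = small
    lap.append(g)                                  # coarsest Gaussian level
    return lap

def reconstruct(lap):
    """Invert the decomposition by expanding and adding back each level."""
    g = lap[-1]
    for band in reversed(lap[:-1]):
        g = expand2(g, band.shape) + band
    return g
```

Reconstruction is exact by construction, since the same expand operator is used in both directions.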

  18. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/r...

  19. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded aperture processing was, for the first time, made independent of the point spread function of the imaging diagnostic system, overcoming the technical obstacle in traditional coded-pinhole image processing caused by uncertainty in that point spread function. Building on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In a visible-light experiment, a point source was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, CT image reconstruction produced a fairly good reconstruction result.

  20. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in the early detection of breast cancer are the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. It is often important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. To gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that providing computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, while yielding only marginal improvements in the perceptual and cognitive performance of the expert radiologists.
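    The ROC analysis used to compare the two groups can be sketched via the rank-sum identity: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. This is a generic illustration, not the authors' analysis code.

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity,
    counting ties as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect separation of malignant from benign cases, 0.5 means chance-level performance.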

  1. Information fusion in signal and image processing major probabilistic and non-probabilistic numerical approaches

    CERN Document Server

    Bloch, Isabelle

    2010-01-01

    The area of information fusion has grown considerably during the last few years, leading to a rapid and impressive evolution. In such fast-moving times, it is important to take stock of the changes that have occurred. As such, this book offers an overview of the general principles and specificities of information fusion in signal and image processing, as well as covering the main numerical methods (probabilistic approaches, fuzzy sets and possibility theory and belief functions).

  2. A rapid Look-Locker imaging sequence for quantitative tissue oximetry

    Science.gov (United States)

    Vidya Shankar, Rohini; Kodibagkar, Vikram D.

    2015-03-01

    Tissue oximetry studies using magnetic resonance imaging are increasingly contributing to advances in the imaging and treatment of cancer. The non-invasive measurement of tissue oxygenation (pO2) may facilitate a better understanding of the pathophysiology and prognosis of disease, particularly in the assessment of the extensive hypoxic regions associated with cancerous lesions. The availability of tumor hypoxia maps could help quantify and predict tumor response to intervention and therapy. The PISTOL (Proton Imaging of Siloxanes to map Tissue Oxygenation Levels) oximetry technique maps the T1 of administered hexamethyldisiloxane (HMDSO), a 1H NMR pO2 reporter molecule, in about 3.5 min. Because of the linear relationship between 1/T1 and pO2, this allows static and dynamic changes in tissue pO2 (in response to intervention) to be monitored at various locations. In this work, an HMDSO-selective Look-Locker imaging sequence with EPI readout has been developed to enable faster PISTOL acquisitions. The new sequence incorporates the fast Look-Locker measurement method to enable T1, and hence pO2, mapping of HMDSO in under one minute. To demonstrate the application of this pulse sequence in vivo, 50 μL of neat HMDSO was administered to the thigh muscle of healthy rats (Fischer F344, n=4). Dynamic changes in the mean pO2 of the thigh muscle in response to an oxygen challenge were measured with both PISTOL and the new Look-Locker oximetry sequence and compared. The results demonstrate the efficacy of the new sequence in rapidly mapping pO2 changes, enabling fast quantitative 1H MR oximetry.
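    The conversion from a measured T1 map of HMDSO to a pO2 map rests on the linear relation 1/T1 = A + B * pO2 noted above. A minimal sketch follows; the calibration constants A and B here are placeholders, not values from the paper, and must come from actual HMDSO relaxometry calibration.

```python
import numpy as np

# Hypothetical calibration constants: intercept A in s^-1 and
# slope B in s^-1 per mmHg (illustrative placeholders only).
A, B = 0.28, 1.6e-3

def po2_map(t1_map):
    """Convert a T1 map (seconds) of HMDSO into a pO2 map (mmHg)
    using the linear relaxivity relation 1/T1 = A + B * pO2."""
    r1 = 1.0 / np.asarray(t1_map, dtype=float)
    return (r1 - A) / B
```

Applied voxel-wise to the Look-Locker T1 map, this yields the tissue oxygenation map that is monitored before and after an oxygen challenge.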

  3. Characterization of Periodically Poled Nonlinear Materials Using Digital Image Processing

    National Research Council Canada - National Science Library

    Alverson, James R

    2008-01-01

    .... A new approach based on image processing across an entire z+ or z- surface of a poled crystal allows for better quantification of the underlying domain structure and directly relates to device performance...

  4. Application of digital image processing techniques to astronomical imagery 1977

    Science.gov (United States)

    Lorre, J. J.; Lynn, D. J.

    1978-01-01

    Nine specific techniques of combination of techniques developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  5. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  6. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  7. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  8. Priming from distractors in rapid serial visual presentation is modulated by image properties and attention.

    Science.gov (United States)

    Harris, Irina M; Benito, Claire T; Dux, Paul E

    2010-12-01

    We investigated distractor processing in a dual-target rapid serial visual presentation (RSVP) task containing familiar objects, by measuring repetition priming from a priming distractor (PD) to Target 2 (T2). Priming from a visually identical PD was contrasted with priming from a PD in a different orientation from T2. We also tested the effect of attention on distractor processing, by placing the PD either within or outside the attentional blink (AB). PDs outside the AB induced positive priming when they were in a different orientation to T2 and no priming, or negative priming, when they were perceptually identical to T2. PDs within the AB induced positive priming regardless of orientation. These findings demonstrate (1) that distractors are processed at multiple levels of representation; (2) that the view-specific representations of distractors are actively suppressed during RSVP; and (3) that this suppression fails in the absence of attention.

  9. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes, which are used to observe small objects. Unfortunately, object observation on an ordinary microscope requires precision and visual acuity from the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including an image processing utility that allows users to capture, store and process digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and the image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of those objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with user needs. The digital microscope eliminates the need for direct observation by the human eye, as required with a traditional microscope.
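    Histogram equalization, one of the operations listed above, can be sketched for an 8-bit greyscale image by mapping each grey level through the normalized cumulative histogram:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit greyscale image: build a
    lookup table from the normalized cumulative histogram and apply it."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]      # first occupied grey level
    n = img.size
    lut = np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255)
    return lut.astype(np.uint8)[img]
```

A low-contrast micrograph occupying a narrow band of grey levels is stretched to the full 0-255 range, which is why the operation is a staple of microscope image enhancement.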

  10. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang

    2014-01-01

    Full Text Available This paper aims to provide a method to represent a virtual Arabidopsis plant at each growth stage, including simulating its shape and providing growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background using chromatic coordinates. From the segmentation result, the outer boundary series is obtained with a boundary tracking algorithm. Elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf, using a wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stages of an Arabidopsis plant with less data and provide a visual platform for plant growth research.
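    The contour-coding step can be illustrated with a simplified Fourier-descriptor sketch. Note that this uses the complex-coordinate (x + iy) Fourier variant rather than the Kuhl-Giardina elliptic Fourier formulation the paper likely follows; both share the key property that a few low-order coefficients approximate the outline compactly.

```python
import numpy as np

def fourier_descriptors(contour):
    """Complex Fourier coefficients of a closed contour given as an
    (N, 2) array of (x, y) points; a simplified stand-in for
    elliptic Fourier analysis."""
    z = contour[:, 0] + 1j * contour[:, 1]
    return np.fft.fft(z) / len(z)

def reconstruct_shape(coeffs, harmonics=8):
    """Rebuild the outline keeping only the lowest positive and negative
    frequency coefficients; fewer harmonics give a smoother shape."""
    n = len(coeffs)
    kept = np.zeros(n, dtype=complex)
    kept[:harmonics + 1] = coeffs[:harmonics + 1]
    kept[-harmonics:] = coeffs[-harmonics:]
    z = np.fft.ifft(kept * n)
    return np.column_stack([z.real, z.imag])
```

Storing only the retained coefficients, rather than every boundary point, is what makes the descriptor compact enough to record each growth stage.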

  11. Parallel Computers for Region-Level Image Processing.

    Science.gov (United States)

    1980-11-01

    It is well known that parallel computers can be used very effectively for image processing at the pixel level, by assigning a processor to each pixel...or block of pixels, and passing information as necessary between processors whose blocks are adjacent. This paper discusses the use of parallel ... computers for processing images at the region level, assigning a processor to each region and passing information between processors whose regions are

  12. Digital image processing for the earth resources technology satellite data.

    Science.gov (United States)

    Will, P. M.; Bakis, R.; Wesley, M. A.

    1972-01-01

    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.

  13. The Digital Microscope and Its Image Processing Utility

    OpenAIRE

    Tri Wahyu Supardi; Agus Harjoko; Sri Hartati

    2011-01-01

    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images o...

  14. Techniques and software architectures for medical visualisation and image processing

    OpenAIRE

    Botha, C.P.

    2005-01-01

    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use of visualisation techniques to assist the shoulder replacement process. This motivated the need for a flexible environment within which to test and develop new visualisation and also image processin...

  15. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map generated by a skin-colour Gaussian model guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each image enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured the 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries.
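    The first two steps (colour-space conversion and a Gaussian skin-colour probability map) can be sketched as follows. The BT.601 conversion matrix is standard; the Gaussian mean and covariance here are illustrative placeholders, not the paper's fitted skin model.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion. Chrominance (Cb, Cr) is far
    less sensitive to lighting than RGB, motivating the transform."""
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb.astype(float) @ m.T
    ycc[..., 1:] += 128.0          # offset Cb and Cr to the 0..255 range
    return ycc

def skin_probability(rgb, mean=(103.0, 153.0),
                     cov=((80.0, 0.0), (0.0, 100.0))):
    """Per-pixel Gaussian skin-colour likelihood in the (Cb, Cr) plane.
    The mean and covariance are hypothetical, not fitted values."""
    cbcr = rgb_to_ycbcr(rgb)[..., 1:]
    d = cbcr - np.asarray(mean)
    inv = np.linalg.inv(np.asarray(cov))
    md2 = np.einsum('...i,ij,...j->...', d, inv, d)  # Mahalanobis distance^2
    return np.exp(-0.5 * md2)
```

The resulting probability map would then guide a downstream classifier, as in the paper's SVM-based segmentation step.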
With this, clinicians will be able to more effectively monitor the healing process of pressure injuries

  16. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods and, as a result of our analysis, try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm, and the modified image was compared with the original in various aspects. The time needed for the calculations and the quantification performance on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which interpolation method is best for resizing whole slide images so that they can be further processed using quantification methods. We conclude that the interpolation method has to be selected depending on the task involving the whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
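    The survey's evaluation protocol (rescale an image down and back up with the same algorithm, then compare against the original) can be sketched with the simplest of the nine methods, nearest-neighbour interpolation, and PSNR as one possible comparison metric:

```python
import numpy as np

def resize_nearest(img, shape):
    """Nearest-neighbour resampling: pick the source pixel whose index
    maps closest (by truncation) to each target position."""
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[rows][:, cols]

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between an original image and its
    down-then-up rescaled version; higher means more faithful."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Repeating the same round trip with bilinear, bicubic or spline kernels and comparing PSNR (plus timing and downstream quantification accuracy) reproduces the shape of the survey's comparison.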

  17. Rapidly destructive arthrosis of the shoulder joints: radiographic, magnetic resonance imaging, and histopathologic findings.

    Science.gov (United States)

    Kekatpure, Aashay L; Sun, Ji-Ho; Sim, Gyeong-Bo; Chun, Jae-Myeung; Jeon, In-Ho

    2015-06-01

    Rapidly destructive arthrosis of the humeral head is a rare condition with an elusive pathophysiologic mechanism. In this study, radiographic and histopathologic findings were analyzed to determine the clinical characteristics of this rare condition. We retrospectively analyzed 189 patients who underwent total shoulder arthroplasty from January 2001 to August 2012. Among them, 9 patients showed a particular pattern of rapid collapse of the humeral head on plain radiography and magnetic resonance imaging (MRI) within 12 months from symptom onset. Patients with trauma, rheumatoid arthritis, steroid intake, neurologic osteoarthropathy, osteonecrosis, renal osteoarthropathy, or gout were excluded. All patients were women, with a mean age of 72.0 years (range, 63-85 years). The right side was involved in 7 cases and the left in 2 cases. The mean duration of humeral head collapse was 5.6 months (range, 2-11 months) from the onset of shoulder pain. Plain radiographs of all patients showed a unique pattern of humeral head flattening, which appeared like a clean surgical cut with bone debris around the humeral head. MRI findings revealed significant joint effusion and bone marrow edema in the humeral head, without involvement of the glenoid. Pathologic findings showed both fragmentation and regeneration of bone matrix, representing fracture healing. The important features of rapidly destructive shoulder arthrosis are unique flattened humeral head collapse with MRI showing massive joint effusion and bone marrow edema in the remnant humeral head. This condition should be considered in the differential diagnosis of elderly women with insidious shoulder pain. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  18. Modification of microstructure and micromagnetic properties in Gd-Fe thin films by rapid thermal processing

    OpenAIRE

    Talapatra, A.; Chelvane, J. Arout; Satpati, B.; Kumar, S.; Mohanty, J.

    2017-01-01

    The impact of rapid thermal processing (RTP) on the microstructure and magnetic properties of Gd-Fe thin films has been investigated with special emphasis on the magnetic microstructure. A 100 nm thick amorphous Gd-Fe film shows elongated stripe domains with a characteristic feature size of 122 nm, which signifies the development of perpendicular magnetic anisotropy (PMA) in this system. RTP at 550 °C for different time intervals, viz. 5, 10, 15 and 20 minutes, induces the crystallization of Fe over the amorph...

  19. Wind Tunnel Model Design and Test Using Rapid Prototype Materials and Processes

    Science.gov (United States)

    2001-07-23

    UNCLASSIFIED. WIND TUNNEL MODEL DESIGN AND TEST USING RAPID PROTOTYPE MATERIALS AND PROCESSES. Richard R. Heisler and Clifford L. Ratliff, The Johns Hopkins... Only interleaved fragments of this record survive extraction: "...deflection, and attach directly to the strongback with screws"; "...tolerance deviations when the material was grown"; "A schematic diagram of the RPM..."; "...constructed around the clay to contain the pouring of silicon resin"; reference: R. R. Heisler, "Final Test Report for the Wind Tunnel Test of the JHU/APL WTM-01 at..."

  20. Validation of Contamination Control in Rapid Transfer Port Chambers for Pharmaceutical Manufacturing Processes

    OpenAIRE

    Shih-Cheng Hu; Angus Shiue; Han-Yang Liu; Rong-Ben Chiu

    2016-01-01

    There is worldwide concern with regard to the adverse effects of drug usage. However, contaminants can gain entry into a drug manufacturing process stream from several sources such as personnel, poor facility design, incoming ventilation air, machinery and other equipment for production, etc. In this validation study, we aimed to determine the impact and evaluate the contamination control in the preparation areas of the rapid transfer port (RTP) chamber during the pharmaceutical manufacturing...

  1. Age, dyslexia subtype and comorbidity modulate rapid auditory processing in developmental dyslexia

    OpenAIRE

    Maria Luisa eLorusso; Chiara eCantiani; Massimo eMolteni

    2014-01-01

    The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype and comorbidity on the discrimination and reproduction of non-verbal tone sequences. Participants were 46 childre...

  2. Development of Rapid Temporal Processing and Its Impact on Literacy Skills in Primary School Children

    Science.gov (United States)

    Steinbrink, Claudia; Zimmer, Karin; Lachmann, Thomas; Dirichs, Martin; Kammer, Thomas

    2014-01-01

    In a longitudinal study, auditory and visual temporal order thresholds (TOTs) were investigated in primary school children (N = 236; mean age at first data point = 6;7) at the beginning of Grade 1 and the end of Grade 2 to test whether rapid temporal processing abilities predict reading and spelling at the end of Grades 1 and 2. Auditory and…

  3. Rapid imaging, detection and quantification of Giardia lamblia cysts using mobile-phone based fluorescent microscopy and machine learning.

    Science.gov (United States)

    Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan

    2015-03-07

    Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water-related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques, including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field-of-view: ~0.8 cm²) is captured and wirelessly transmitted via the mobile phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture
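A heavily simplified version of the counting step might look like the following. The actual platform uses a machine learning algorithm trained on statistical features of the cysts; this sketch substitutes plain thresholding plus connected-component labelling, and the frame, threshold and minimum-area values are all hypothetical.

```python
import numpy as np
from scipy import ndimage

def count_cysts(fluorescence, threshold, min_area=4):
    """Count bright fluorescent spots: threshold the image, label connected
    components, and discard regions smaller than min_area pixels."""
    mask = fluorescence > threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))

# Synthetic frame with two bright blobs and one single-pixel speck
img = np.zeros((32, 32))
img[5:9, 5:9] = 1.0        # 16-pixel blob
img[20:25, 18:23] = 1.0    # 25-pixel blob
img[30, 30] = 1.0          # speck, rejected by the area filter
print(count_cysts(img, threshold=0.5))  # → 2
```

A trained classifier improves on this by rejecting fluorescent debris whose size and shape statistics differ from those of real cysts.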

  4. Study of gray image pseudo-color processing algorithms

    Science.gov (United States)

    Hu, Jinlong; Peng, Xianrong; Xu, Zhiyong

    Gray images contain abundant information, but if the differences between adjacent pixels' intensities are small, the required information cannot be extracted by humans, since humans are more sensitive to color images than to gray images. If gray images are transformed into pseudo-color images, the details of the images become more explicit, and the target is recognized more easily. There are two classes of methods (in the frequency domain and in the spatial domain) to realize pseudo-color enhancement of gray images. The first is mainly filtering in the frequency domain; the second comprises the equal-density pseudo-color coding methods, which mainly include density segmentation coding, function transformation and complementary pseudo-color coding. Moreover, there are many other methods to realize pseudo-color enhancement, such as a pixel's self-transformation based on the RGB tri-primaries, pseudo-color coding of a phase-modulated image based on the RGB color model, pseudo-color coding of high gray-resolution images, and so on. However, the above methods are each tailored to a particular situation, and the transformations are based on the RGB color space. In order to improve the visual effect, the method based on the RGB color space and pixels' self-transformation is improved in this paper, using the HSI color space instead. Compared with other methods, gray images in ordinary formats can be processed, and many gray images can be transformed into 24-bit pseudo-color images. The experiment shows that the processed image has abundant levels, consistent with human perception.
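Density segmentation coding, the simplest of the equal-density methods mentioned above, can be illustrated in a few lines; the four-colour palette and band count here are arbitrary choices, not the paper's.

```python
import numpy as np

# Hypothetical 4-band palette (R, G, B) for density segmentation coding
PALETTE = np.array([
    [0,   0,   255],   # darkest band  -> blue
    [0,   255, 0  ],   # -> green
    [255, 255, 0  ],   # -> yellow
    [255, 0,   0  ],   # brightest band -> red
], dtype=np.uint8)

def density_slice(gray, n_bands=4):
    """Density segmentation coding: split the 0-255 gray range into equal
    bands and map each band to a distinct palette colour."""
    band = np.clip(gray.astype(int) * n_bands // 256, 0, n_bands - 1)
    return PALETTE[band]

gray = np.arange(256, dtype=np.uint8).reshape(16, 16)  # a gray ramp
color = density_slice(gray)                             # 24-bit pseudo-color output
```

The HSI-based self-transformation the paper proposes instead maps intensity continuously onto hue, avoiding the hard band boundaries that density slicing produces.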

  5. Rapid and noncontact photoacoustic tomography imaging system using an interferometer with high-speed phase modulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jun [School of Physics and Telecom Engineering, South China Normal University, Guangzhou 510006 (China); Tang, Zhilie; Wu, Yongbo [School of Physics and Telecom Engineering, South China Normal University, Guangzhou 510006 (China); GuangDong Province Key Laboratory of Quantum Engineering and Quantum Materials, South China Normal University, IMOT, Guangzhou 510006 (China); Wang, Yi [School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004 (China)

    2015-04-15

    We designed, fabricated, and tested a rapid and noncontact photoacoustic tomography (PAT) imaging system using a low-coherence interferometer with high-speed phase modulation technique. Such a rapid and noncontact probing system can greatly decrease the time of imaging. The proposed PAT imaging system is experimentally verified by capturing images of a simulated tissue sample and the blood vessels within the ear flap of a mouse (pinna) in vivo. The axial and lateral resolutions of the system are evaluated at 45 and ∼15 μm, respectively. The imaging depth of the system is 1 mm in a special phantom. Our results show that the proposed system opens a promising way to realize noncontact, real-time PAT.

  6. ON GEOMETRIC PROCESSING OF MULTI-TEMPORAL IMAGE DATA COLLECTED BY LIGHT UAV SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. Rosnell

    2012-09-01

    Data collection under highly variable weather and illumination conditions around the year will be necessary in many applications of UAV imaging systems. This is a new feature in rigorous photogrammetric and remote sensing processing. We studied the performance of two georeferencing and point cloud generation approaches using image data sets collected in four seasons (winter, spring, summer and autumn) and under different imaging conditions (sunny, cloudy, different solar elevations). We used light quadrocopter UAVs equipped with consumer cameras. In general, matching of image blocks collected with high overlaps provided high-quality point clouds. All of the aforementioned factors influenced the point cloud quality. In wintertime, point cloud generation failed on uniform snow surfaces in many situations, and during the leaf-off season point cloud generation was not successful over deciduous trees. The images collected under cloudy conditions provided better point clouds than the images collected in sunny weather in shadowed regions and of tree surfaces. On homogeneous surfaces (e.g. asphalt) the images collected under sunny conditions outperformed cloudy data. The tested factors did not influence the general block adjustment results. The radiometric sensor performance (especially signal-to-noise ratio) is a critical factor in all-weather data collection and point cloud generation; at the moment, high-quality, lightweight imaging sensors are still largely missing; sensitivity to wind is another potential limitation. There lies great potential in low-flying, low-cost UAVs, especially in applications requiring rapid aerial imaging for frequent monitoring.

  7. Dual-scanning optical coherence elastography for rapid imaging of two tissue volumes (Conference Presentation)

    Science.gov (United States)

    Fang, Qi; Frewer, Luke; Wijesinghe, Philip; Hamzah, Juliana; Ganss, Ruth; Allen, Wes M.; Sampson, David D.; Curatolo, Andrea; Kennedy, Brendan F.

    2017-02-01

    In many applications of optical coherence elastography (OCE), it is necessary to rapidly acquire images in vivo, or within intraoperative timeframes, over fields-of-view far greater than can be achieved in one OCT image acquisition. For example, tumour margin assessment in breast cancer requires acquisition over linear dimensions of 4-5 centimetres in under 20 minutes. However, the majority of existing techniques are not compatible with these requirements, which may present a hurdle to the effective translation of OCE. To increase throughput, we have designed and developed an OCE system that simultaneously captures two 3D elastograms from opposite sides of a sample. The optical system comprises two interferometers: a common-path interferometer on one side of the sample and a dual-arm interferometer on the other side. This optical system is combined with scanning mechanisms and compression loading techniques to realize dual-scanning OCE. The optical signals scattered from two volumes are simultaneously detected on a single spectrometer by depth-encoding the interference signal from each interferometer. To demonstrate dual-scanning OCE, we performed measurements on tissue-mimicking phantoms containing rigid inclusions and freshly isolated samples of murine hepatocellular carcinoma, highlighting the use of this technique to visualise 3D tumour stiffness. These findings indicate that our technique holds promise for in vivo and intraoperative applications.

  8. Rapid in situ biosynthesis of gold nanoparticles in living platelets for multimodal biomedical imaging.

    Science.gov (United States)

    Jin, Juan; Liu, Taotao; Li, Mingxi; Yuan, Chuxiao; Liu, Yang; Tang, Jian; Feng, Zhenqiang; Zhou, Yue; Yang, Fang; Gu, Ning

    2018-01-10

    Inspired by nature, biomimetic nanomaterial design strategies have attracted great interest because bioinspired nanoplatforms may enhance the functionality of current nanoparticles. In particular, cell membrane-derived nanoparticles can more effectively navigate and interact with the complex biological microenvironment. In this study, we have explored a novel strategy to rapidly biosynthesize gold nanoparticles (GNPs) in situ in living platelets with the help of ultrasound energy. First, under ultrasound exposure, the permeation of biocompatible chloroauric acid salts (HAuCl₄) into the platelet cytoplasm is enhanced. Then, with the assistance of reducing agents (NaBH₄ and sodium citrate) and platelet enzymes, GNPs were rapidly synthesized in situ within the platelets. The biosynthesized GNPs had a size of about 5 nm and were uniformly distributed in the cytoplasm. Atomic absorption spectrometry (AAS) showed the synthesized amount of Au is (12.7 ± 2.4) × 10⁻³ pg per platelet. The GNPs in platelets produce a Raman enhancement effect and can further be probed by both dark-field microscopy (DFM)-based imaging and computed tomography (CT) imaging. Moreover, the platelets were not activated and retained their aggregation bioactivity during intra-platelet GNP synthesis. Therefore, such biomimetic GNP-platelets, which combine in situ synthesized GNPs with inherent platelet bioactivity, may find theranostic applications exploiting the unique properties of GNPs. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Nosocomial rapidly growing mycobacterial infections following laparoscopic surgery: CT imaging findings

    Energy Technology Data Exchange (ETDEWEB)

    Volpato, Richard [Cassiano Antonio de Moraes University Hospital, Department of Diagnostic Radiology, Vitoria, ES (Brazil); Campi de Castro, Claudio [University of Sao Paulo Medical School, Department of Radiology, Cerqueira Cesar, Sao Paulo (Brazil); Hadad, David Jamil [Cassiano Antonio de Moraes University Hospital, Nucleo de Doencas Infecciosas, Department of Internal Medicine, Vitoria, ES (Brazil); Silva Souza Ribeiro, Flavya da [Laboratorio de Patologia PAT, Department of Diagnostic Radiology, Unit 1473, Vitoria, ES (Brazil); Filho, Ezequiel Leal [UNIMED Diagnostico, Department of Diagnostic Radiology, Unit 1473, Vitoria, ES (Brazil); Marcal, Leonardo P. [The University of Texas M D Anderson Cancer Center, Department of Diagnostic Radiology, Unit 1473, Houston, TX (United States)

    2015-09-15

    To identify the distribution and frequency of computed tomography (CT) findings in patients with nosocomial rapidly growing mycobacterial (RGM) infection after laparoscopic surgery. A descriptive retrospective study in patients with RGM infection after laparoscopic surgery who underwent CT imaging prior to initiation of therapy. The images were analyzed by two radiologists in consensus, who evaluated the skin/subcutaneous tissues, the abdominal wall, and intraperitoneal region separately. The patterns of involvement were tabulated as: densification, collections, nodules (≥1.0 cm), small nodules (<1.0 cm), pseudocavitated nodules, and small pseudocavitated nodules. Twenty-six patients met the established criteria. The subcutaneous findings were: densification (88.5 %), small nodules (61.5 %), small pseudocavitated nodules (23.1 %), nodules (38.5 %), pseudocavitated nodules (15.4 %), and collections (26.9 %). The findings in the abdominal wall were: densification (61.5 %), pseudocavitated nodules (3.8 %), and collections (15.4 %). The intraperitoneal findings were: densification (46.1 %), small nodules (42.3 %), nodules (15.4 %), and collections (11.5 %). Subcutaneous CT findings in descending order of frequency were: densification, small nodules, nodules, small pseudocavitated nodules, pseudocavitated nodules, and collections. The musculo-fascial plane CT findings were: densification, collections, and pseudocavitated nodules. The intraperitoneal CT findings were: densification, small nodules, nodules, and collections. (orig.)

  10. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
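Steps 2-4 of the pipeline (forming a parameter matrix over superpixels, correlating the parameters, and clustering the correlation structure into "signatures") can be sketched as below. The synthetic data, the number of clusters and the use of average-linkage hierarchical clustering are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# Hypothetical data matrix: rows = superpixels, columns = image parameters.
# Parameters 0-2 track one latent signal, 3-4 another, and 5 is pure noise.
n = 200
s1, s2 = rng.normal(size=n), rng.normal(size=n)
D = np.column_stack([
    s1, s1 + 0.1 * rng.normal(size=n), s1 + 0.1 * rng.normal(size=n),
    s2, s2 + 0.1 * rng.normal(size=n),
    rng.normal(size=n),
])

# Correlation matrix of the parameters, clustered to reveal "signatures"
C = np.corrcoef(D, rowvar=False)
dist = 1.0 - np.abs(C)            # distance = 1 - |correlation|
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method='average')
labels = fcluster(Z, t=3, criterion='maxclust')   # one label per parameter
```

Superpixels would then be assigned to "habitats" according to which signature dominates their parameter values.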

  11. Digital image processing of bone - Problems and potentials

    Science.gov (United States)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  12. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    Science.gov (United States)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
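The layering metaphor described above (independent intensity scaling and colorization of each data set, then combination into a single colour image) might be sketched as follows. The asinh stretch, the colour assignments and the screen blend are common choices in astronomical image making, but they are assumptions here rather than a prescription from the paper.

```python
import numpy as np

def stretch(data):
    """Asinh intensity stretch scaled to [0, 1], a common choice for
    high-dynamic-range astronomical data."""
    s = np.arcsinh(data - data.min())
    return s / s.max()

def colorize(layer, rgb):
    """Tint a scaled monochrome layer with an RGB colour in [0, 1]."""
    return layer[..., None] * np.asarray(rgb, dtype=float)

# Hypothetical example: three narrowband exposures combined with the
# screen blend mode, so bright regions from every layer remain visible.
rng = np.random.default_rng(0)
layers = [rng.random((8, 8)) * 100 for _ in range(3)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

result = np.ones((8, 8, 3))
for data, rgb in zip(layers, colors):
    result *= 1.0 - colorize(stretch(data), rgb)   # screen blending
result = 1.0 - result
```

With eight data sets, as mentioned in the abstract, the loop simply gains more (layer, colour) pairs; the per-layer stretch and colour choices are where the scientific contrasts are designed.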

  13. Image processing for improved eye-tracking accuracy

    Science.gov (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
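One reason off-line analysis can beat real-time hardware resolution is that it can afford subpixel estimates, such as an intensity-weighted centroid of the pupil. The following sketch, with an entirely synthetic frame and a hypothetical threshold, illustrates the idea using the basic toolbox of thresholding and averaging mentioned above.

```python
import numpy as np

def pupil_center(frame, threshold):
    """Estimate the pupil center as the intensity-weighted centroid of
    pixels darker than a threshold (the pupil is the darkest region)."""
    mask = frame < threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    weights = threshold - frame[mask]        # darker pixels weigh more
    cy = np.average(ys, weights=weights)
    cx = np.average(xs, weights=weights)
    return cy, cx

# Synthetic frame: bright background with a dark disc centered at (12, 20)
yy, xx = np.mgrid[0:32, 0:48]
frame = np.where((yy - 12) ** 2 + (xx - 20) ** 2 < 25, 10.0, 200.0)
cy, cx = pupil_center(frame, threshold=100.0)
```

Because the centroid averages over many pupil pixels, its precision is a fraction of a pixel, which is the kind of order-of-magnitude gain over real-time hardware the abstract describes.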

  14. Anomalous diffusion process applied to magnetic resonance image enhancement

    Science.gov (United States)

    Senra Filho, A. C. da S.; Garrido Salmon, C. E.; Murta Junior, L. O.

    2015-03-01

    Diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations on MRI, i.e. in low-SNR images with approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches to qualitative and quantitative MRI enhancement.

  15. Anomalous diffusion process applied to magnetic resonance image enhancement.

    Science.gov (United States)

    Senra Filho, A C da S; Salmon, C E Garrido; Murta Junior, L O

    2015-03-21

    Diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations on MRI, i.e. in low-SNR images with approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches to qualitative and quantitative MRI enhancement.
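A plausible discrete form of the anisotropic anomalous diffusion (AAD) filter replaces the exponential conductance of the classical Perona-Malik scheme with a Tsallis q-exponential, which recovers the classical filter as q approaches 1. The exact scheme, parameter values and stopping rule used by the authors may differ from this sketch.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential, which reduces to exp(x) as q -> 1."""
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def anomalous_anisotropic_diffusion(img, q=1.4, kappa=30.0, lam=0.2, steps=10):
    """Perona-Malik-style iteration with a q-exponential conductance
    (one plausible discretization of the AAD filter)."""
    u = img.astype(float).copy()
    for _ in range(steps):
        # Finite differences toward the four neighbours (periodic edges)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += lam * sum(q_exp(-(d / kappa) ** 2, q) * d
                       for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(0)
noisy = np.zeros((32, 32))
noisy[:, 16:] = 100.0                       # a step edge
noisy += rng.normal(0, 5.0, noisy.shape)    # Gaussian noise
smooth = anomalous_anisotropic_diffusion(noisy)
```

Small gradients (noise) see a conductance near 1 and are smoothed, while the large gradient across the step edge suppresses the conductance and preserves the edge, which is the edge-preservation behaviour the abstract reports.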

  16. Multi-parton interactions and rapidity gap survival probability in jet-gap-jet processes

    Science.gov (United States)

    Babiarz, Izabela; Staszewski, Rafał; Szczurek, Antoni

    2017-08-01

    We discuss an application of a dynamical multi-parton interaction (MPI) model, tuned to measurements of the underlying-event topology, to the description of the destruction of rapidity gaps in jet-gap-jet processes at the LHC. We concentrate on the dynamical origin of the mechanism that destroys the rapidity gap. The cross section for jet-gap-jet production is calculated within the LL BFKL approximation. We discuss the topology of final states with and without the MPI effects, as well as examples of selected kinematical situations (fixed jet rapidities and transverse momenta) and distributions averaged over the dynamics of the jet-gap-jet scattering. The colour-singlet ladder exchange amplitude for the partonic subprocess is implemented in the PYTHIA 8 generator, which is then used for hadronisation and for the simulation of the MPI effects. Several differential distributions are shown and discussed. We present the ratio of cross sections calculated with and without MPI effects as a function of the rapidity gap between the jets.

  17. Using mind mapping techniques for rapid qualitative data analysis in public participation processes.

    Science.gov (United States)

    Burgess-Allen, Jilla; Owen-Smith, Vicci

    2010-12-01

    In a health service environment where timescales for patient participation in service design are short and resources scarce, a balance needs to be achieved between research rigour and the timeliness and utility of the findings of patient participation processes. To develop a pragmatic mind mapping approach to managing the qualitative data from patient participation processes. While this article draws on experience of using mind maps in a variety of participation processes, a single example is used to illustrate the approach. In this example mind maps were created during the course of patient participation focus groups. Two group discussions were also transcribed verbatim to allow comparison of the rapid mind mapping approach with traditional thematic analysis of qualitative data. The illustrative example formed part of a local alcohol service review which included consultation with local alcohol service users, their families and staff groups. The mind mapping approach provided a pleasing graphical format for representing the key themes raised during the focus groups. It helped stimulate and galvanize discussion and keep it on track, enhanced transparency and group ownership of the data analysis process, allowed a rapid dynamic between data collection and feedback, and was considerably faster than traditional methods for the analysis of focus groups, while resulting in similar broad themes. This study suggests that the use of a mind mapping approach to managing qualitative data can provide a pragmatic resolution of the tension between limited resources and quality in patient participation processes. © 2010 The Authors. Health Expectations © 2010 Blackwell Publishing Ltd.

  18. Empirical and mathematical model of rapid expansion of supercritical solution (RESS) process of acetaminophen

    Science.gov (United States)

    Kien, Le Anh

    2017-09-01

    Rapid Expansion of Supercritical Solutions (RESS) is a solvent-free technology to produce small solid particles with a very narrow size distribution. The RESS process is simple and easy to control in comparison with other methods based on supercritical techniques. In this study, the engineering of nano (or submicron) acetaminophen particles using rapid expansion of supercritical CO2 solution (RESS) was investigated. An empirical model with response surface methodology was used to evaluate the effects of processing parameters, i.e. extraction temperature T (313-333 K), extraction pressure P (90-150 bar) and pre-expansion temperature Texp (353-373 K), on the size of precipitated acetaminophen particles. The results show that the smallest particle size, i.e. 52.08 nm, can be achieved at 90 bar, 313 K and 353 K (P, T, Texp, respectively). To better understand and develop a mechanistic predictive tool for the RESS process, a one-dimensional steady flow model was used in this work to describe the subsonic expansion process inside the capillary nozzle and the supersonic expansion process outside the expansion nozzle. It was shown that particle characteristics are governed by operating parameters such as pre-expansion temperature, pre-expansion pressure, and expansion temperature. These parameters affect particle size with the same trends as those found from the experimental data and the empirical model.

  19. [Rapid total body fat measurement by magnetic resonance imaging: quantification and topography].

    Science.gov (United States)

    Vogt, F M; Ruehm, S; Hunold, P; de Greiff, A; Nuefer, M; Barkhausen, J; Ladd, S C

    2007-05-01

    To evaluate a rapid and comprehensive MR protocol based on a T1-weighted sequence in conjunction with a rolling table platform for the quantification of total body fat. 11 healthy volunteers and 50 patients were included in the study. MR data were acquired on a 1.5-T system (Siemens Magnetom Sonata). An axial T1-weighted FLASH 2D sequence (TR 101, TE 4.7, FA 70, FOV 50 cm, 205 x 256 matrix, slice thickness: 10 mm, 10 mm interslice gap) was used for data acquisition. Patients were placed in a supine position on a rolling table platform capable of acquiring multiple consecutive data sets by pulling the patient through the isocenter of the magnet. Data sets extending from the upper to lower extremities were collected. The images were analyzed with respect to the amount of intraabdominal, subcutaneous and total abdominal fat by semi-automated image segmentation software that employs a contour-following algorithm. The MR images could be evaluated for all volunteers and patients. Excellent correlation was found between whole-body MRI results in volunteers and DEXA (r² = 0.95) and bioimpedance (r² = 0.89) measurements, while the correlation coefficient was 0.66 between MRI and BMI, indicating only moderate reliability of the BMI method. Variations in patients with respect to the amount of total, subcutaneous, and intraabdominal adipose tissue were not related to standard anthropometric measurements and metabolic lipid profiles (r² = 0.001 to 0.48). The results showed that there was a significant variation in intraabdominal adipose tissue which could not be predicted from the total body fat (r² = 0.14) or subcutaneous adipose tissue (r² = 0.04). Although no significant differences in BMI could be found between females and males (p = 0.26), females showed significantly higher total and subcutaneous abdominal adipose tissue (p < 0.05). This MR protocol can be used for the rapid and non-invasive quantification of body fat. The missing

  20. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.
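
    A homotopic thinning filter of the kind benchmarked here can be illustrated, independently of ITK, with the classical Zhang-Suen thinning algorithm. The following is a minimal pure-Python sketch for 2-D binary images, not the paper's ITK implementation:

```python
def thin(img):
    """Zhang-Suen thinning of a binary image (list of 0/1 rows).
    The one-pixel border is assumed to be background."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if not img[y][x]:
                        continue
                    # 8-neighbours clockwise from north: P2..P9
                    p = [img[y-1][x], img[y-1][x+1], img[y][x+1],
                         img[y+1][x+1], img[y+1][x], img[y+1][x-1],
                         img[y][x-1], img[y-1][x-1]]
                    b = sum(p)                                 # neighbour count
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1  # 0->1 transitions
                            for i in range(8))
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and ok:
                        to_clear.append((y, x))
            for y, x in to_clear:  # delete simultaneously per sub-iteration
                img[y][x] = 0
                changed = True
    return img
```

Applied to a thick bar, the loop peels boundary pixels until only a one-pixel-wide curve remains.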

  1. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    Science.gov (United States)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and post-processed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r = 0.89) and raw images. Model agreement with radiologist-assigned density categories was also high for processed (κ = 0.79) and raw images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
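
    The two agreement statistics reported above, Pearson's r for the continuous estimates and Cohen's κ for the categorical ones, can be computed in a few lines. A minimal pure-Python sketch (any statistics package provides equivalents):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance
    return (po - pe) / (1 - pe)
```

Perfect linear association gives r = 1, and two identical label sequences give κ = 1.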

  2. Effects of processing conditions on mammographic image quality.

    Science.gov (United States)

    Braeuning, M P; Cooper, H W; O'Brien, S; Burns, C B; Washburn, D B; Schell, M J; Pisano, E D

    1999-08-01

    Any given mammographic film will exhibit changes in sensitometric response and image resolution as processing variables are altered. Developer type, immersion time, and temperature have been shown to affect the contrast of the mammographic image and thus lesion visibility. The authors evaluated the effect of altering processing variables, including film type, developer type, and immersion time, on the visibility of masses, fibrils, and specks in a standard mammographic phantom. Images of a phantom obtained with two screen types (Kodak Min-R and Fuji) and five film types (Kodak Min-R M, Min-R E, and Min-R H; Fuji UM-MA HC; and DuPont Microvision-C) were processed with five different developer chemicals (Autex SE, DuPont HSD, Kodak RP, Picker 3-7-90, and White Mountain) at four different immersion times (24, 30, 36, and 46 seconds). Processor chemical activity was monitored with sensitometric strips, and developer temperatures were continuously measured. The film images were reviewed by two board-certified radiologists and two physicists with expertise in mammography quality control and were scored based on the visibility of calcifications, masses, and fibrils. Although the differences in the absolute scores were not large, the Kodak Min-R M and Fuji films exhibited the highest scores, and images developed in White Mountain and Autex chemicals exhibited the highest scores. For any film, several processing chemicals may be used to produce images of similar quality. Extended processing may no longer be necessary.

  3. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S

    2013-01-01

    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc. from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as Radon transformation, image reconstruction, image rendering, image enhancement and restoration, and more. This book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners who are doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. The book: acts as a “starter kit” for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make content accessible, particularly with techniques such as Radon transformation and image rendering; and includes discussion of the basic principles behind the various medical imaging tec...

  4. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
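
    The pipeline's first step, separating nuclei from background with Otsu's threshold, can be sketched as follows. This is the simplified global form; the paper applies a local variant:

```python
def otsu_threshold(pixels):
    """Otsu's method for 8-bit intensities: pick the threshold that
    maximises the between-class variance of background vs. foreground."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                 # background pixel count up to t
        if w0 == 0:
            continue
        w1 = total - w0               # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                # background mean
        m1 = (sum_all - sum0) / w1    # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image the returned threshold cleanly separates the two intensity populations.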

  5. Digital image processing for photo-reconnaissance applications

    Science.gov (United States)

    Billingsley, F. C.

    1972-01-01

    Digital image-processing techniques developed for processing pictures from NASA space vehicles are analyzed in terms of enhancement, quantitative restoration, and information extraction. Digital filtering, and the action of a high frequency filter in the real and Fourier domain are discussed along with color and brightness.

  6. Image processing system performance prediction and product quality evaluation

    Science.gov (United States)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  7. Choosing optimal rapid manufacturing process for thin-walled products using expert algorithm

    Directory of Open Access Journals (Sweden)

    Filip Gorski

    2010-10-01

    Full Text Available Choosing the right Rapid Prototyping technology is not easy, especially for companies inexperienced with that group of manufacturing techniques. This paper summarizes research focused on creating an algorithm for an expert system that helps to choose the optimal process and determine its parameters for the rapid manufacturing of thin-walled products. The research was based upon trial manufacturing of different thin-walled items using various RP technologies. Products were categorized, and each category was defined by a set of requirements. Based on the research outcome, a main algorithm was created. The next step was developing detailed algorithms for optimizing particular methods. Implementation of these algorithms brings substantial benefits for recipients, including cost reduction, shorter supply times and improvements in information flow.

  8. Rapid Determination of Optimal Conditions in a Continuous Flow Reactor Using Process Analytical Technology

    Directory of Open Access Journals (Sweden)

    Michael F. Roberto

    2013-12-01

    Full Text Available Continuous flow reactors (CFRs) are an emerging technology that offers several advantages over traditional batch synthesis methods, including more efficient mixing schemes, rapid heat transfer, and increased user safety. Of particular interest to the specialty chemical and pharmaceutical manufacturing industries is the significantly improved reliability and product reproducibility over time. CFR reproducibility can be attributed to the reactors achieving and maintaining a steady state once all physical and chemical conditions have stabilized. This work describes the implementation of a smart CFR with univariate physical and multivariate chemical monitoring that allows for rapid determination of steady state, requiring less than one minute. Additionally, the use of process analytical technology further enabled a significant reduction in the time and cost associated with offline validation methods. The technology implemented for this study is chemistry and hardware agnostic, making this approach a viable means of optimizing the conditions of any CFR.
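
    One simple univariate criterion for declaring steady state is a rolling standard deviation of a monitored signal falling below a tolerance. A hypothetical sketch of this idea (the smart CFR described above uses richer multivariate chemical monitoring):

```python
def steady_state_index(signal, window=10, tol=0.01):
    """Return the first sample index at which the rolling standard deviation
    over the last `window` samples drops below `tol`, or None if never."""
    for i in range(window, len(signal) + 1):
        w = signal[i - window:i]
        m = sum(w) / window
        sd = (sum((x - m) ** 2 for x in w) / window) ** 0.5
        if sd < tol:
            return i - 1  # steady state declared at the window's last sample
    return None
```

For a signal that ramps and then flattens, steady state is declared once the window lies entirely in the flat region; a persistently oscillating signal never qualifies.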

  9. Digital image processing using parallel computing based on CUDA technology

    Science.gov (United States)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.

    2017-01-01

    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography projections. It compares performance with and without the GPU, as well as with different distributions of the workload between the CPU and GPU.

  10. Computer image processing - The Viking experience. [digital enhancement techniques

    Science.gov (United States)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
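
    A linear contrast stretch of the kind used for subjective enhancement maps the occupied intensity range of an image onto the full display range. A minimal sketch:

```python
def stretch(pixels, lo_out=0, hi_out=255):
    """Linear contrast stretch: map [min, max] of the input intensities
    onto [lo_out, hi_out]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [lo_out] * len(pixels)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (p - lo) * scale) for p in pixels]
```

A low-contrast image occupying only part of the intensity range ends up spanning the full 0-255 display range.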

  11. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  12. Rapid Automated Dissolution and Analysis Techniques for Radionuclides in Recycle Process Streams

    Energy Technology Data Exchange (ETDEWEB)

    Sudowe, Ralf [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program and Health Physics Dept.; Roman, Audrey [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program; Dailey, Ashlee [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program; Go, Elaine [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program

    2013-07-18

    The analysis of process samples for radionuclide content is an important part of current procedures for material balance and accountancy in the different process streams of a recycling plant. The destructive sample analysis techniques currently available necessitate a significant amount of time. It is therefore desirable to develop new sample analysis procedures that allow for a quick turnaround time and increased sample throughput with a minimum of deviation between samples. In particular, new capabilities for rapid sample dissolution and radiochemical separation are required. Most of the radioanalytical techniques currently employed for sample analysis are based on manual laboratory procedures. Such procedures are time- and labor-intensive, and not well suited for situations in which a rapid sample analysis is required and/or large numbers of samples need to be analyzed. To address this issue we are currently investigating radiochemical separation methods based on extraction chromatography that have been specifically optimized for the analysis of process stream samples. The influence of potential interferences present in the process samples as well as mass loading, flow rate and resin performance is being studied. In addition, the potential to automate these procedures utilizing a robotic platform is evaluated. Initial studies have been carried out using the commercially available DGA resin. This resin shows an affinity for Am, Pu, U, and Th and is also exhibiting signs of a possible synergistic effect in the presence of iron.

  13. End mill tools integration in CNC machining for rapid manufacturing processes: simulation studies

    Directory of Open Access Journals (Sweden)

    Muhammed Nafis Osman Zahid

    2015-01-01

    Full Text Available Computer numerical controlled (CNC) machining has been recognized as a manufacturing process that is capable of producing metal parts with high precision and reliable quality, whereas many additive manufacturing methods are less capable in these respects. The introduction of a new layer-removal methodology that utilizes an indexing device to clamp the workpiece can be used to extend CNC applications into the realm of rapid manufacturing (CNC-RM) processes. This study aims to improve the implementation of CNC machining for RM by formulating a distinct approach to integrating end mill tools during finishing processes. A main objective is to enhance process efficiency by minimizing the staircasing effect of layer removal so as to improve the quality of machined parts. In order to achieve this, different types of end mill tools are introduced to cater for specific part surfaces during finishing operations. Virtual machining simulations are executed to verify the method and its implications. The findings indicate the advantages of the approach in terms of cutting time and excess volume left on the parts. It is shown that using different tools for finishing operations will improve the capabilities of CNC machining for rapid manufacturing applications.

  14. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of image processing performance measures by presenting the use of such a measure as the actual value in a feedback structure. The idea is that a control loop built in this way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented using the example of optical character recognition in an industrial application. Metrics for quantification of performance at different image processing levels are discussed. The issues that those metrics should address from both image processing and control points of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.
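
    The feedback idea can be illustrated with a toy control loop that drives a processing parameter toward a performance set point; the parameter, measure, and gain below are invented for illustration and are not the authors':

```python
def tune_parameter(measure, setpoint, p0=128.0, gain=40.0, iters=50):
    """Integral-style control loop: the image-processing performance measure
    is the feedback variable; the loop nudges the parameter until the
    measured value reaches the set point."""
    p = p0
    for _ in range(iters):
        err = setpoint - measure(p)          # control error
        if abs(err) < 1e-4:
            break
        p = min(255.0, max(0.0, p + gain * err))
    return p

# toy "performance measure": monotone in the parameter (e.g. a threshold)
tuned = tune_parameter(lambda t: t / 255.0, 0.5)
```

With a monotone measure the loop converges to the parameter value whose measured performance equals the set point.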

  15. Design criteria for a multiple input land use system. [digital image processing techniques

    Science.gov (United States)

    Billingsley, F. C.; Bryant, N. A.

    1975-01-01

    A design is presented that proposes the use of digital image processing techniques to interface existing geocoded data sets and information management systems with thematic maps and remote sensed imagery. The basic premise is that geocoded data sets can be referenced to a raster scan that is equivalent to a grid cell data set, and that images taken of thematic maps or from remote sensing platforms can be converted to a raster scan. A major advantage of the raster format is that x, y coordinates are implicitly recognized by their position in the scan, and z values can be treated as Boolean layers in a three-dimensional data space. Such a system permits the rapid incorporation of data sets, rapid comparison of data sets, and adaptation to variable scales by resampling the raster scans.
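
    The raster premise, that x, y coordinates are implicit in array position and z values act as Boolean layers, can be shown in a few lines; the layer names are invented for illustration:

```python
# two co-registered raster layers as 0/1 grids (x, y implicit in position)
landuse_urban = [[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]]
flood_zone    = [[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 1]]

# cell-wise Boolean AND across layers: urban cells inside the flood zone
at_risk = [[a & b for a, b in zip(ra, rb)]
           for ra, rb in zip(landuse_urban, flood_zone)]
```

Because both grids share the same scan geometry, comparing data sets is a per-cell operation with no coordinate lookup.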

  16. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    Science.gov (United States)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. 
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
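
    For a one-tissue configuration, the two-compartment model described above reduces to a convolution of the arterial input function with an exponential kernel, C_t(t) = K1 ∫ Ca(s) e^{-k2(t-s)} ds. A numerical sketch using rectangle-rule quadrature (standard tracer-kinetics notation; not the dissertation's estimation code):

```python
from math import exp

def tissue_curve(ca, k1, k2, dt):
    """One-tissue compartment model: discretised convolution of the arterial
    input samples `ca` (uniform grid, spacing dt) with K1 * exp(-k2 * t)."""
    out = []
    for i in range(len(ca)):
        acc = sum(ca[j] * exp(-k2 * (i - j) * dt) for j in range(i + 1))
        out.append(k1 * acc * dt)
    return out
```

With k2 = 0 the tracer only accumulates (the curve ramps linearly for a constant input); with k2 > 0 a bolus input washes out exponentially.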

  17. Enhancement of structure images of interstellar diamond microcrystals by image processing

    Science.gov (United States)

    O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann

    1988-01-01

    Image-processed high-resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low-pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed due to particle-particle collisions behind supernova shock waves. The TEM images of the diamonds presented support the high-pressure model.

  18. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    Science.gov (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
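
    The geometric step, converting crack-pixel runs to millimetres using the working distance measured by the ultrasonic sensor, can be sketched under a pinhole-camera assumption; all parameter values below are hypothetical:

```python
def mm_per_pixel(distance_mm, focal_length_mm, pixel_pitch_mm):
    """Pinhole-camera ground resolution: object length imaged onto one
    pixel at the given working distance."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

def crack_width_mm(binary_row, scale_mm):
    """Crack width along one image row of a binarized image: longest run
    of crack pixels times the per-pixel ground resolution."""
    best = run = 0
    for v in binary_row:
        run = run + 1 if v else 0
        best = max(best, run)
    return best * scale_mm
```

At a 1 m working distance with a 20 mm lens and 5 µm pixels, one pixel covers 0.25 mm of the surface, so a three-pixel crack run corresponds to a 0.75 mm width.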

  19. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    Directory of Open Access Journals (Sweden)

    Hyunjun Kim

    2017-09-01

    Full Text Available Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.

  20. Image processing techniques in 3-D foot shape measurement system

    Science.gov (United States)

    Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-10-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, achieving 3-D foot-shape measurements without blind areas and automatic extraction of foot parameters. The paper focuses on the system structure and principle and on image processing techniques. The key image processing techniques for the 3-D foot shape measurement system include laser stripe extraction, transformation of laser stripe coordinates from the CCD camera image coordinate system to the laser-plane coordinate system, assembly of the laser stripes from the eight CCD cameras, and elimination of image noise and disturbance. 3-D foot shape measurement makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization and the establishment of a foot database for consumers.
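
    Laser stripe extraction, the first technique listed, amounts to locating the stripe's intensity peak in each image column; a minimal per-column argmax sketch (real systems add sub-pixel refinement):

```python
def stripe_rows(img):
    """For each column of a grey image (list of rows), return the row index
    of maximum intensity, i.e. the laser stripe's position in that column."""
    h, w = len(img), len(img[0])
    return [max(range(h), key=lambda y: img[y][x]) for x in range(w)]
```

The resulting per-column row indices form the stripe profile that is then mapped into the laser-plane coordinate system via the camera calibration.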

  1. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal or electron microscope). The use of digital images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  2. Modular Scanning Confocal Microscope with Digital Image Processing.

    Science.gov (United States)

    Ye, Xianjun; McCluskey, Matthew D

    2016-01-01

    In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved by using a 20 × / NA = 0.4 microscope objective at 532 nm laser wavelength.
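
    The virtual pinhole is straightforward to emulate in software: locate the spot centroid from first-order image moments, then sum only pixels within a chosen radius of it. A toy sketch, not the authors' code:

```python
def centroid(img):
    """Spot centroid from first-order image moments of a grey image."""
    m00 = sum(sum(row) for row in img)
    m10 = sum(x * v for row in img for x, v in enumerate(row))
    m01 = sum(y * v for y, row in enumerate(img) for v in row)
    return m10 / m00, m01 / m00

def virtual_pinhole(img, cx, cy, radius):
    """Software pinhole: integrate only pixels within `radius` of (cx, cy),
    rejecting out-of-focus and stray light elsewhere on the detector."""
    return sum(v for y, row in enumerate(img) for x, v in enumerate(row)
               if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2)
```

Shrinking the radius tightens the observation volume, exactly as closing a physical pinhole would.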

  3. Increased insight in microbial processes in rapid sandfilters in drinking water treatment (DW BIOFILTERS)

    DEFF Research Database (Denmark)

    Albrechtsen, Hans-Jørgen; Gülay, Arda; Lee, Carson

    2012-01-01

    The aim of this research project is to improve our knowledge of biological rapid sand filters, which are present in thousands of groundwater-based waterworks. This includes molecular investigations of the microorganisms responsible for the individual processes (e.g. nitrification), and detailed monitoring and experiments in the filters and laboratory to provide insight into the process mechanisms, kinetics and effects of environmental factors. Management of the filters (e.g. backwashing, flow rate, carrier type) will be investigated at pilot and full scale, supported by mathematical models ... investigated by deep sequencing. This will also contribute to a verification of whether the selected qPCR probes include all important groups. Filters from three water works have been sampled and are currently being processed to investigate depth profiles and horizontal variation in filters. Assays

  4. Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data

    Science.gov (United States)

    Rector, T. A.; Levay, Z.; Frattare, L.; English, J.; Pu'uohau-Pummill, K.

    2004-05-01

    The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.

  5. Establishing an international reference image database for research and development in medical image processing

    NARCIS (Netherlands)

    Horsch, A.D.; Prinz, M.; Schneider, S.; Sipilä, O; Spinnler, K.; Vallée, J-P; Verdonck-de Leeuw, I; Vogl, R.; Wittenberg, T.; Zahlmann, G.

    2004-01-01

    INTRODUCTION: The lack of comparability of evaluation results is one of the major obstacles of research and development in Medical Image Processing (MIP). The main reason for that is the usage of different image datasets with different quality, size and Gold standard. OBJECTIVES: Therefore, one of

  6. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2012-04-01

    Full Text Available The paper provides an overview of some possible uses of the software described in Part I. It contains real examples of image quality improvement, distortion simulation, objective and subjective quality assessment, and other kinds of image processing that can be carried out with the individual applications.

  7. Image processing tool for automatic feature recognition and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
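The edge-detection-then-Hough-transform sequence that the record describes can be sketched in minimal, pure-NumPy form. A real system would use an optimized library; the function names and thresholds below are illustrative assumptions:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from Sobel gradient magnitude (naive loops for clarity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

def hough_lines(edges, n_theta=180):
    """Vote edge pixels into (rho, theta) bins; the peak is the dominant line."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).round().astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Synthetic image containing one vertical line at x = 20.
img = np.zeros((50, 50))
img[:, 20] = 1.0
acc, thetas, diag = hough_lines(sobel_edges(img))
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(rho_idx - diag, theta_idx)
```

The accumulator peak recovers the line parameters: for a vertical line the winning theta bin is 0, and rho lands on one of the two edge columns flanking the line.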

  8. Assessment of banana fruit maturity by image processing technique

    OpenAIRE

    Surya Prabha, D.; J. Satheesh Kumar

    2013-01-01

    Maturity stage of fresh banana fruit is an important factor that affects the fruit quality during ripening and marketability after ripening. The ability to identify the maturity of fresh banana fruit is a great support for farmers in optimizing the harvesting phase, which helps to avoid harvesting either under-matured or over-matured bananas. This study attempted to use an image processing technique to detect the maturity stage of fresh banana fruit by the color and size values of their images precisely...
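A color-based maturity check of the kind described can be sketched as a simple pixel-count rule. The RGB thresholds and stage cut-offs below are illustrative assumptions, not values from the cited study:

```python
import numpy as np

def maturity_stage(rgb):
    """Classify a banana image by the fraction of green vs. yellow pixels.

    Thresholds and stage boundaries are hypothetical, for illustration only.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    yellow = (r > 150) & (g > 150) & (b < 100)
    green = (g > 120) & (r < 120) & (b < 120)
    fruit = yellow | green                 # pixels attributed to the fruit
    if fruit.sum() == 0:
        return "unknown"
    green_frac = green.sum() / fruit.sum()
    if green_frac > 0.6:
        return "under-mature"
    if green_frac > 0.2:
        return "mid-ripe"
    return "ripe"

# Synthetic 10x10 'image': mostly yellow body with a small green patch.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[...] = (200, 200, 40)        # yellow body
img[:2, :2] = (60, 160, 60)      # green patch (4 pixels)
print(maturity_stage(img))
```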

  9. Detection of pitting corrosion in steel using image processing

    OpenAIRE

    Ghosh, Bidisha; Pakrashi, Vikram; Schoefs, Franck

    2010-01-01

    This paper presents an image processing based detection method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected in this paper using the proposed methodology. It is observed that the prop...
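As a rough illustration of the HDR imaging step, the sketch below merges two differently exposed frames into a radiance map with a hat-shaped weight that trusts mid-range pixels and discounts clipped ones. It is a generic HDR composition under assumed exposure times, not the authors' pipeline:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge exposures into one radiance map (minimal sketch).

    Each frame is divided by its exposure time and averaged with a hat
    weight on [0, 255] that favors well-exposed pixels.
    """
    num = np.zeros(np.asarray(exposures[0]).shape, float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        img = np.asarray(img, float)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # hat weight, peak at 127.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)

# Two synthetic exposures of a uniform scene with radiance 100 units.
short = np.full((4, 4), 100 * 0.5)   # t = 0.5 s
long_ = np.full((4, 4), 100 * 2.0)   # t = 2.0 s
radiance = merge_hdr([short, long_], [0.5, 2.0])
print(radiance[0, 0])
```

Because both frames are consistent estimates of the same radiance, the weighted merge recovers the true value; in a corrosion survey the extended dynamic range keeps detail in both specular highlights and dark pits.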

  10. Method development for verification of the complete ancient statues by image processing

    Directory of Open Access Journals (Sweden)

    Natthariya Laopracha

    2015-06-01

    Full Text Available Ancient statues are cultural heritage that should be preserved and maintained. Nevertheless, such invaluable statues may be targeted by vandalism or burglary. In order to guard these statues by using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts using digital image processing. This paper proposes an effective feature extraction method for detecting images of damaged statues or statues with missing parts, based on the Histogram of Oriented Gradients (HOG) technique, a popular method for object detection. Unlike the original HOG technique, the proposed method improves the area scanning strategy so that it effectively extracts the important features of statues. Results obtained from the proposed method were compared with those of the HOG method. The tested image dataset was composed of 500 images of intact statues and 500 images of statues with missing parts. The experimental results show that the proposed method yields 99.88% accuracy, while the original HOG method gives an accuracy of only 84.86%.
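The HOG descriptor that this record builds on can be sketched in simplified form: per-cell histograms of gradient orientation weighted by gradient magnitude. Block normalization and the paper's modified scanning strategy are omitted for brevity; cell size and bin count below are the common defaults, assumed here:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell orientation histograms weighted by
    gradient magnitude (no block normalization, for brevity)."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A 16x16 image with a vertical edge: gradients point horizontally,
# so the histogram energy should land in the bin around 0 degrees.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
f = hog_features(img)
print(f.shape, f.argmax() % 9)
```

A missing statue part changes the local edge structure, which shifts these per-cell histograms and lets a classifier separate intact from damaged images.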

  11. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    Full Text Available In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellite constellation deployed in polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA in defining the system image products and prototyping the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing. The Level-0 and Level-1A are system products which correspond to raw compressed and uncompressed data respectively (limited to internal calibration purposes). The Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands) and an enhanced physical geometric model appended to the product but not applied. The Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  12. A Quality Sorting of Fruit Using a New Automatic Image Processing Method

    Science.gov (United States)

    Amenomori, Michihiro; Yokomizu, Nobuyuki

    This paper presents an innovative approach for quality sorting of objects such as apples in an agricultural factory, using an image processing algorithm. The objectives of our approach are, first, to sort the objects by their colors precisely and, second, to detect any irregularity of the colors surrounding the apples efficiently. An experiment has been conducted and the results have been obtained and compared with those of a human sorting process and of color sensor sorting devices. The results demonstrate that our approach is capable of sorting the objects rapidly, and the classification valid rate was 100 %.

  13. Recent advances in rapid and non-destructive assessment of meat quality using hyperspectral imaging

    Science.gov (United States)

    Tao, Feifei; Ngadi, Michael

    2016-05-01

    Meat is an important food item in the human diet. Its production and consumption have greatly increased in recent decades with the development of economies and the improvement of people's living standards. However, most traditional methods for evaluating meat quality are time-consuming, laborious, inconsistent and destructive to samples, which makes them unsuitable for a fast-paced production and processing environment. The development of innovative, non-destructive optical sensing techniques that facilitate simple, fast, and accurate quality evaluation is attracting increasing attention in the food industry. Hyperspectral imaging is one of the most promising of these techniques, as it integrates the merits of both imaging and spectroscopic techniques. This paper provides a comprehensive review of recent advances in the evaluation of important meat quality attributes, including color, marbling, tenderness, pH and water holding capacity, as well as chemical composition attributes such as moisture, protein and fat content, in pork, beef and lamb. In addition, future potential applications and trends of hyperspectral imaging are also discussed.

  14. Rapid image recognition of body parts scanned in computed tomography datasets.

    Science.gov (United States)

    Dicken, Volker; Lindow, B; Bornemann, L; Drexl, J; Nikoubashman, A; Peitgen, H-O

    2010-09-01

    Automatic CT dataset classification is important to efficiently create reliable database annotations, especially when large collections of scans must be analyzed. An automated segmentation and labeling algorithm was developed based on a fast patient segmentation and the extraction of statistical density class features from the CT data. The method also delivers classifications of image noise level and patient size. The approach is based on image information only and uses an approximate patient contour detection and statistical features of the density distribution. These are obtained from a slice-wise analysis of the areas filled by various materials related to certain density classes and the spatial spread of each class. The resulting families of curves are subsequently classified using rules derived from knowledge about features of the human anatomy. The method was successfully applied to more than 5,000 CT datasets. Evaluation was performed via expert visual inspection of screenshots showing classification results and detected characteristic positions along the main body axis. Accuracy per body region was very satisfactory in the trunk (lung/liver >99.5% detection rate, presence of abdomen >97%, pelvis >95.8%); improvements are required for zoomed scans. The method performed very reliably. A test on 1,860 CT datasets collected from an oncological trial showed that the method is feasible, efficient, and promising as an automated tool for image post-processing.
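The slice-wise density-class analysis described here can be sketched as follows: bin each slice's voxels into Hounsfield-unit classes and track the per-slice class fractions along the body axis. The class boundaries and the lung rule below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

# Hypothetical density classes in Hounsfield units (illustrative only).
CLASSES = {"air": (-1024, -500), "soft": (-100, 200), "bone": (300, 3000)}

def slice_profiles(volume):
    """Per-slice fraction of voxels falling in each density class.

    The resulting curves along the body axis are what a rule-based
    classifier would inspect to label body regions.
    """
    n = volume.shape[0]
    profiles = {}
    for name, (lo, hi) in CLASSES.items():
        mask = (volume >= lo) & (volume < hi)
        profiles[name] = mask.reshape(n, -1).mean(axis=1)
    return profiles

# Toy volume: slices 0-4 are "lung-like" (air pocket inside soft tissue),
# slices 5-9 are "abdomen-like" (soft tissue only).
vol = np.full((10, 32, 32), 40.0)          # soft tissue everywhere
vol[:5, 8:24, 8:24] = -800.0               # air-filled interior
prof = slice_profiles(vol)
lung_slices = np.nonzero(prof["air"] > 0.2)[0]   # simple anatomy rule
print(lung_slices)
```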

  15. Rapid tooling for functional prototyping of metal mold processes: Literature review on cast tooling

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, M.D. [Sandia National Labs., Albuquerque, NM (United States); Hochanadel, P.W. [Colorado School of Mines, Golden, CO (United States). Dept. of Metallurgical and Materials Engineering

    1995-11-01

    This report is a literature review on cast tooling with the general focus on AISI H13 tool steel. The review includes processing of both wrought and cast H13 steel along with the accompanying microstructures. Also included is the incorporation of new rapid prototyping technologies, such as Stereolithography and Selective Laser Sintering, into the investment casting of tool steel. The limiting property of using wrought or cast tool steel for die casting is heat checking. Heat checking is addressed in terms of testing procedures, theories regarding the mechanism, and microstructural aspects related to the cracking.

  16. Lessons from the masters current concepts in astronomical image processing

    CERN Document Server

    2013-01-01

    There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same and their contributions both technically and artistically are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today.   The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...

  17. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.
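Features of the kind used in such analyses, e.g. stain area, centroid, and an ellipse-like aspect ratio, can be sketched from image moments of a binary stain mask. This is a generic illustration of moment-based shape features, not the paper's feature set:

```python
import numpy as np

def stain_features(mask):
    """Shape features of a single binary stain: area, centroid, and an
    aspect ratio from the eigenvalues of the second-order central moments."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    cy, cx = ys.mean(), xs.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Eigenvalues of the covariance matrix give major/minor axis spread.
    common = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + common
    lam2 = (mu20 + mu02) / 2 - common
    return {"area": area, "centroid": (cy, cx),
            "aspect": np.sqrt(lam1 / lam2)}

# An elongated synthetic stain: 4 pixels tall, 12 pixels wide.
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 4:16] = True
f = stain_features(mask)
print(f["area"], f["aspect"])
```

In impact spatter analysis, the stain's elongation and orientation relate to the impact angle, so moment-derived features like these give the kind of measurable, objective criteria the record calls for.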

  18. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program

    Science.gov (United States)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  19. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix

    2014-11-19

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  20. Triple Bioluminescence Imaging for In Vivo Monitoring of Cellular Processes

    Directory of Open Access Journals (Sweden)

    Casey A Maguire

    2013-01-01

    Full Text Available Bioluminescence imaging (BLI) has been shown to be crucial for monitoring in vivo biological processes. So far, only dual bioluminescence imaging using firefly (Fluc) and Renilla or Gaussia (Gluc) luciferase has been achieved, due to the lack of other efficiently expressed luciferases using different substrates. Here, we characterized a codon-optimized luciferase from Vargula hilgendorfii (Vluc) as a reporter for mammalian gene expression. We showed that Vluc can be multiplexed with Gluc and Fluc for sequential imaging of three distinct cellular phenomena in the same biological system using vargulin, coelenterazine, and D-luciferin substrates, respectively. We applied this triple imaging system to monitor the effect of soluble tumor necrosis factor-related apoptosis-inducing ligand (sTRAIL) delivered using an adeno-associated viral vector (AAV) on brain tumors in mice. Vluc imaging showed efficient sTRAIL gene delivery to the brain, while Fluc imaging revealed a robust antiglioma therapy. Further, nuclear factor-κB (NF-κB) activation in response to sTRAIL binding to glioma cell death receptors was monitored by Gluc imaging. This work is the first demonstration of trimodal in vivo bioluminescence imaging and will have broad applicability in many different fields, including immunology, oncology, virology, and neuroscience.