WorldWideScience

Sample records for astronomical image processing

  1. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
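The core CCD reduction steps named in this contents list (bias subtraction, dark subtraction, flat fielding) can be sketched in a few lines of numpy. This is a generic illustration, not code from the article; the function name, the exposure-time scaling of the dark, and the median normalization of the flat are common conventions assumed here:

```python
import numpy as np

# Minimal sketch of the basic CCD reduction chain (bias, dark, flat).
# Real pipelines add overscan handling, cosmic-ray clipping, and sky
# subtraction; all names and conventions here are illustrative.
def calibrate_frame(raw, bias, dark, flat, exptime, dark_exptime):
    """Return a calibrated science frame from raw CCD data."""
    debiased = raw - bias                          # remove electronic offset
    dark_scaled = dark * (exptime / dark_exptime)  # scale dark current to exposure
    dark_sub = debiased - dark_scaled
    flat_norm = flat / np.median(flat)             # normalize flat to unit median
    return dark_sub / flat_norm                    # correct pixel-to-pixel response
```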

  2. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards, simultaneously requiring fast computation, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; based on a supermicro architecture tailored specifically for astronomy, it provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed so that other astronomers with similar needs may benefit from the present experience. 9 references

  3. Lessons from the masters current concepts in astronomical image processing

    CERN Document Server

    2013-01-01

    There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same, and their contributions, both technical and artistic, are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today. The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...

  4. Efficient morphological tools for astronomical image processing

    NARCIS (Netherlands)

    Moschini, Ugo

    2016-01-01

    Nowadays, many applications rely on a huge quantity of images at high resolution and with high quantity of information per pixel, due either to the technological improvements of the instruments or to the type of measurement observed. This thesis is focused on exploring and developing tools and new

  5. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest common denominator image file format, FITS.
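The operations this abstract describes (pixel-wise image combination, scaling by a constant, and statistics within a user-drawn box) map directly onto numpy array arithmetic. The snippet below is an illustrative numpy sketch, not SIP's actual Java code or API:

```python
import numpy as np

# Illustrative versions of the SIP-style operations named in the abstract.
# The arrays and the box coordinates are arbitrary example data.
a = np.arange(16.0).reshape(4, 4)
b = np.full((4, 4), 2.0)

diff = a - b        # subtract one image from another (e.g. a comparison frame)
scaled = a * 1.5    # multiply an image by a constant
ratio = a / b       # divide two images to form a ratio image

# statistics for pixels inside a box (rows 1..2, cols 1..2 inclusive)
box = a[1:3, 1:3]
stats = {"mean": box.mean(), "std": box.std(),
         "min": box.min(), "max": box.max()}
```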

  6. Recent developments at JPL in the application of digital image processing techniques to astronomical images

    Science.gov (United States)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.

    1976-01-01

    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.

  7. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L.

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelet transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  8. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    Science.gov (United States)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and processing of meteorological satellite (Meteosat) images for an astronomical observatory. This software will help produce atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for Eumetsat data that aims to be completely open source and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL together with the general-purpose, high-level programming language Python and the Qt framework.

  9. Observatory Sponsoring Astronomical Image Contest

    Science.gov (United States)

    2005-05-01

    and to provide a showcase for a broad range of astronomical research and celestial objects," Adams added. In addition, NRAO is developing enhanced data visualization techniques and data-processing recipes to assist radio astronomers in making quality images and in combining radio data with data collected at other wavelengths, such as visible-light or infrared, to make composite images. "We encourage all our telescope users to take advantage of these techniques to showcase their research," said Juan Uson, a member of the NRAO scientific staff and the observatory's EPO scientist. "All these efforts should demonstrate the vital and exciting roles that radio telescopes, radio observers, and the NRAO play in modern astronomy," Lo said. "While we want to encourage images that capture the imagination, we also want to emphasize that extra effort invested in enhanced imagery also will certainly pay off scientifically, by revealing subtleties and details that may have great significance for our understanding of astronomical objects," he added. Details of the NRAO Image Contest, which will become an annual event, are on the observatory's Web site. The observatory will announce winners on October 15. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

  10. Preparing Colorful Astronomical Images II

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  11. Coronagraph for astronomical imaging and spectrophotometry

    Science.gov (United States)

    Vilas, Faith; Smith, Bradford A.

    1987-01-01

    A coronagraph designed to minimize scattered light in astronomical observations caused by the structure of the primary mirror, secondary mirror, and secondary support structure of a Cassegrainian telescope is described. Direct (1:1) and reducing (2.7:1) imaging of astronomical fields are possible. High-quality images are produced. The coronagraph can be used with either a two-dimensional charge-coupled device or photographic film camera. The addition of transmission dispersing optics converts the coronagraph into a low-resolution spectrograph. The instrument is modular and portable for transport to different observatories.

  12. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    This paper deals with the compression of image data in astronomy applications. Astronomical images have characteristic properties: high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images; their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) was chosen as the source of the test signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
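The first irrelevancy reduction method the abstract names, the Karhunen-Loeve transform with uniform quantization in the spectral domain, can be sketched with a numpy eigendecomposition. This is a generic textbook illustration under assumed choices (block size, quantization step, function name), not the paper's coder:

```python
import numpy as np

# Sketch of KLT-based lossy coding: image blocks are decorrelated in the
# eigenbasis of their covariance matrix, then uniformly quantized in the
# spectral domain. Block size and step are illustrative assumptions.
def klt_quantize(image, block=4, step=8.0):
    h, w = image.shape
    # gather non-overlapping blocks as row vectors
    vecs = np.array([image[i:i+block, j:j+block].ravel()
                     for i in range(0, h, block)
                     for j in range(0, w, block)])
    mean = vecs.mean(axis=0)
    cov = np.cov((vecs - mean).T)
    _, basis = np.linalg.eigh(cov)        # orthonormal KLT basis
    coeffs = (vecs - mean) @ basis        # transform to spectral domain
    q = np.round(coeffs / step)           # uniform quantization
    recon = (q * step) @ basis.T + mean   # dequantize and invert
    return q, recon.reshape(-1, block, block)
```

With a very small step the reconstruction is essentially exact; increasing the step trades fidelity for fewer distinct coefficient values to entropy-code.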

  13. Preparing Colorful Astronomical Images and Illustrations

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.

  14. An infrared upconverter for astronomical imaging

    Science.gov (United States)

    Boyd, R. W.; Townes, C. H.

    1977-01-01

    An imaging upconverter has been constructed which is suitable for studying the thermal 10-micron radiation from astronomical sources. The infrared radiation is converted to visible radiation by mixing in a 1-cm-long proustite crystal. The phase-matched 2-kayser bandpass is tunable from 9 to 11 microns. The conversion efficiency is 2 × 10⁻⁷, and the 40-arcsecond field of view on the sky contains several hundred picture elements, at approximately diffraction-limited resolution in a large telescope. The instrument has been used in studies of the Sun, Moon, Mercury, and VY Canis Majoris.

  15. Astronomers Discover Six-Image Gravitational Lens

    Science.gov (United States)

    2001-08-01

    An international team of astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) radio telescope and NASA's Hubble Space Telescope (HST) to discover the first gravitational lens in which the single image of a very distant galaxy has been split into six different images. The unique configuration is produced by the gravitational effect of three galaxies along the line of sight between the more-distant galaxy and Earth. Optical and Radio Images of Gravitational Lens "This is the first gravitational lens with more than four images of the background object that is produced by a small group of galaxies rather than a large cluster of galaxies," said David Rusin, who just received his Ph.D. from the University of Pennsylvania. "Such systems are expected to be extremely rare, so this discovery is an important stepping stone. Because this is an intermediate case between gravitational lenses produced by single galaxies and lenses produced by large clusters of galaxies, it will give us insights we can't get from other types of lenses," Rusin added. The gravitational lens, called CLASS B1359+154, consists of a galaxy more than 11 billion light-years away in the constellation Bootes, with a trio of galaxies more than 7 billion light-years away along the same line of sight. The more-distant galaxy shows signs that it contains a massive black hole at its core and also has regions in which new stars are forming. The gravitational effect of the intervening galaxies has caused the light and radio waves from the single, more-distant galaxy to be "bent" to form six images as seen from Earth. Four of these images appear outside the triangle formed by the three intermediate galaxies and two appear inside that triangle. "This lens system is a very interesting case to study because it is more complicated than lenses produced by single galaxies, and yet simpler than lenses produced by clusters of numerous galaxies," said Chris Kochanek of the Harvard

  16. Longwave Imaging for Astronomical Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact portable longwave camera for astronomical applications. In Phase 1, we successfully developed the eye of the camera, i.e. the focal...

  17. Astronomers Make First Images With Space Radio Telescope

    Science.gov (United States)

    1997-07-01

    part of the VLBA instrument, was modified over the past four years to allow it to incorporate data from the satellite. Correlation of the observational data was completed successfully on June 12, after the exact timing of the satellite recording was established. Further computer processing produced an image of PKS 1519-273 -- the first image ever produced using a radio telescope in space. For Jim Ulvestad, the NRAO astronomer who made the first image, the success ended a long quest for this new capability. Ulvestad was involved in an experiment more than a decade ago in which a NASA communications satellite, TDRSS, was used to test the idea of doing radio astronomical imaging by combining data from space and ground radio telescopes. That experiment showed that an orbiting antenna could, in fact, work in conjunction with ground-based radio observatories, and paved the way for HALCA and a planned Russian radio astronomy satellite called RadioAstron. "This first image is an important technical milestone, and demonstrates the feasibility of a much more advanced mission, ARISE, currently being considered by NASA," Ulvestad said. The first image showed no structure in the object, even at the extremely fine level of detail achievable with HALCA; it is what astronomers call a "point source." This object also appears as a point source in all-ground-based observations. In addition, the 1986 TDRSS experiment observed the object, and, while this experiment did not produce an image, it indicated that PKS 1519-273 should be a point source. "This simple point image may not appear very impressive, but its beauty to us is that it shows our entire, complex system is functioning correctly. 
The system includes not only the orbiting and ground-based antennas, but also the orbit determination, tracking stations, the correlator, and the image-processing software," said Jonathan Romney, the NRAO astronomer who led the development of the VLBA correlator, and its enhancement to process data

  18. Unveiling galaxies the role of images in astronomical discovery

    CERN Document Server

    Roy, Jean-René

    2017-01-01

    Galaxies are known as the building blocks of the universe, but arriving at this understanding has been a thousand-year odyssey. This journey is told through the lens of the evolving use of images as investigative tools. Initial chapters explore how early insights developed in line with new methods of scientific imaging, particularly photography. The volume then explores the impact of optical, radio and x-ray imaging techniques. The final part of the story discusses the importance of atlases of galaxies; how astronomers organised images in ways that educated, promoted ideas and pushed for new knowledge. Images that created confusion as well as advanced knowledge are included to demonstrate the challenges faced by astronomers and the long road to understanding galaxies. By examining developments in imaging, this text places the study of galaxies in its broader historical context, contributing to both astronomy and the history of science.

  19. Spectroscopy for amateur astronomers recording, processing, analysis and interpretation

    CERN Document Server

    Trypsteen, Marc F. M.

    2017-01-01

    This accessible guide presents the astrophysical concepts behind astronomical spectroscopy, covering both the theory and the practical elements of recording, processing, analysing and interpreting your spectra. It covers astronomical objects, such as stars, planets, nebulae, novae, supernovae, and events such as eclipses and comet passages. Suitable for anyone with only a little background knowledge and access to amateur-level equipment, the guide's many illustrations, sketches and figures will help you understand and practise this scientifically important and growing field of amateur astronomy, up to the level of Pro-Am collaborations. Accessible to non-academics, it benefits many groups from novices and learners in astronomy clubs, to advanced students and teachers of astrophysics. This volume is the perfect companion to the Spectral Atlas for Amateur Astronomers, which provides detailed commented spectral profiles of more than 100 astronomical objects.

  20. Breakthrough! 100 astronomical images that changed the world

    CERN Document Server

    Gendler, Robert

    2015-01-01

    This unique volume by two renowned astrophotographers unveils the science and history behind 100 of the most significant astronomical images of all time. The authors have carefully selected their list of images from across time and technology to bring the reader the most relevant photographic images spanning all eras of modern astronomical history. Based on scientific evidence, we today have a basic notion of how Earth and the universe came to be. The road to this knowledge was paved with 175 years of astronomical images acquired by the coupling of two revolutionary technologies, the camera and the telescope. With ingenuity and determination, humankind quickly embraced these technologies to tell the story of the cosmos and unravel its mysteries. This book presents, in pictures and words, a photographic chronology of our aspiration to understand the universe. From the first fledgling attempts to photograph the Moon, planets, and stars to the marvels of orbiting observatories that record the cosmos a...

  1. Astronomical Polarimetry with the RIT Polarization Imaging Camera

    Science.gov (United States)

    Vorobiev, Dmitry V.; Ninkov, Zoran; Brock, Neal

    2018-06-01

    In the last decade, imaging polarimeters based on micropolarizer arrays have been developed for use in terrestrial remote sensing and metrology applications. Micropolarizer-based sensors are dramatically smaller and more mechanically robust than other polarimeters with similar spectral response and snapshot capability. To determine the suitability of these new polarimeters for astronomical applications, we developed the RIT Polarization Imaging Camera to investigate the performance of these devices, with special attention to the low signal-to-noise regime. We characterized the device performance in the lab by determining the relative throughput, efficiency, and orientation of every pixel, as a function of wavelength. Using the resulting pixel response model, we developed demodulation procedures for aperture photometry and imaging polarimetry observing modes. We found that, using the current calibration, RITPIC is capable of detecting polarization signals as small as ∼0.3%. The relative ease of data collection, calibration, and analysis provided by these sensors suggests that they may become an important tool for a number of astronomical targets.

  2. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
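The abstract's starting point, Gauss-Seidel supplemented with a relaxation parameter, is the textbook SOR iteration for a linear system A x = b (in deconvolution, A would be the blurring operator). The sketch below uses a fixed relaxation parameter omega as a stand-in for the adaptive update strategy the paper actually proposes:

```python
import numpy as np

# Textbook SOR: Gauss-Seidel plus relaxation. A fixed omega is used here;
# the paper's contribution is an adaptive rule for updating omega, which
# this generic sketch does not implement.
def sor(A, b, omega=1.5, iters=100):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - sigma) / A[i, i]           # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
    return x
```

For symmetric positive-definite A, SOR converges for any 0 < omega < 2; the choice of omega controls the convergence rate, which is exactly the degree of freedom the paper exploits.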

  3. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Erwin, Peter [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München (Germany)]

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.

  4. Creating and enhancing digital astro images a guide for practical astronomers

    CERN Document Server

    Privett, Grant

    2007-01-01

    This book clearly examines how to create the best astronomical images possible with a digital camera. It reveals the astonishing images that can be obtained with simple equipment, the right software, and knowledge of how to use it.

  5. High Energy Astronomical Data Processing and Analysis via the Internet

    Science.gov (United States)

    Valencic, Lynne A.; Snowden, S.; Pence, W.

    2012-01-01

    The HEASARC at NASA Goddard Space Flight Center and the US XMM-Newton GOF have developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the disk space and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. Further, the XMM-GOF has developed scripts to streamline XMM data reduction. These are available through Hera, and can also be downloaded to a user's local machine. These are free services provided to students, educators, and researchers for educational and research purposes.

  6. Block iterative restoration of astronomical images with the massively parallel processor

    International Nuclear Information System (INIS)

    Heap, S.R.; Lindler, D.J.

    1987-01-01

    A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 linear systems of size 121 x 121. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images
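The block iterative idea in this record, replacing one huge linear solve with repeated exact solves of small block systems, can be sketched as a block Gauss-Seidel iteration. This is a generic illustration on a tiny system, not the Massively Parallel Processor implementation:

```python
import numpy as np

# Block Gauss-Seidel for A x = b: unknowns are partitioned into small
# blocks and each dense block system is solved exactly per sweep, the same
# principle that reduces a 250,000-unknown restoration to many 121 x 121
# solves. Block size and sweep count are illustrative.
def block_gauss_seidel(A, b, block=2, sweeps=50):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(sweeps):
        for s in range(0, n, block):
            idx = slice(s, min(s + block, n))
            # right-hand side with all other blocks held fixed
            rhs = b[idx] - A[idx, :] @ x + A[idx, idx] @ x[idx]
            x[idx] = np.linalg.solve(A[idx, idx], rhs)
    return x
```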

  7. UKRVO Astronomical WEB Services

    Directory of Open Access Journals (Sweden)

    Mazhaev, O.E.

    2017-01-01

    Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases help with data mining and offer users easy access to observation metadata, images within the celestial sphere, and the results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for data selection from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.

  8. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  9. PySE: Python Source Extractor for radio astronomical images

    Science.gov (United States)

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), the grid size, and the forced clean-beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE produces a list of found sources; additional information, such as the calculated background image and the source list in different formats (e.g. text, or region files importable in DS9), may also be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).
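
    PySE's real detection stage estimates a spatially varying background and noise grid; the sketch below only illustrates the underlying idea of thresholding at a multiple of the noise and grouping connected pixels into sources. The function name and parameters are illustrative, not PySE's API.

```python
import numpy as np
from collections import deque

def detect_sources(image, threshold_sigma=5.0):
    """Toy source finder: threshold at a multiple of the background noise,
    then label connected above-threshold pixels (4-connectivity)."""
    bg = np.median(image)
    noise = np.std(image)  # a real extractor estimates *local* noise on a grid
    mask = image > bg + threshold_sigma * noise
    labels = np.zeros(image.shape, dtype=int)
    n_src = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n_src += 1
        labels[seed] = n_src
        queue = deque([seed])
        while queue:
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = n_src
                    queue.append((ni, nj))
    return labels, n_src

# synthetic image: Gaussian background plus two bright "stars"
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (64, 64))
img[10:13, 10:13] += 50.0
img[40:43, 50:53] += 50.0
labels, n_src = detect_sources(img)
```
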

  10. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and the personal computer, followed by a classification of personal computer image processing systems. It then covers digital image processing, the development of personal computers and image processing, image processing systems, and basic methods of image processing such as color image processing and video processing, along with software and interfaces, and computer graphics. It closes with application cases of image processing, such as satellite image processing, high-speed color transformation, and a portrait work system.

  11. Near-infrared spectral imaging Michelson interferometer for astronomical applications

    Science.gov (United States)

    Wells, C. W.; Potter, A. E.; Morgan, T. H.

    1980-01-01

    The design and operation of an imaging Michelson interferometer-spectrometer used for near-infrared (0.8 micron to 2.5 microns) spectral imaging are reported. The system employs a rapid scan interferometer modified for stable low resolution (250/cm) performance and a 42 element PbS linear detector array. A microcomputer system is described which provides data acquisition, coadding, and Fourier transformation for near real-time presentation of the spectra of all 42 scene elements. The electronic and mechanical designs are discussed, and telescope performance data are presented.
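
    The Fourier-transform step that turns each scene element's interferogram into a spectrum can be illustrated with a toy monochromatic source; the OPD step and line position below are illustrative numbers, not the instrument's parameters.

```python
import numpy as np

# A Michelson interferogram is (ideally) the cosine transform of the source
# spectrum, so an FFT over optical path difference (OPD) recovers the spectrum.
n = 1024
dx = 1.0e-4                       # OPD step in cm (illustrative)
x = np.arange(n) * dx
sigma0 = 2500.0                   # line position in wavenumbers (1/cm), ~4 microns
interferogram = np.cos(2 * np.pi * sigma0 * x)   # monochromatic source

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=dx)            # spectral axis in 1/cm
peak = wavenumber[np.argmax(spectrum)]
```

    In the instrument described above, this transform is applied to all 42 detector channels to build a spectral image in near real time.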

  12. Hexabundles: imaging fibre arrays for low-light astronomical applications

    DEFF Research Database (Denmark)

    Bland-Hawthorn, Joss; Bryant, Julie; Robertson, Gordon

    2010-01-01

    We demonstrate for the first time an imaging fibre bundle (“hexabundle”) that is suitable for low-light applications in astronomy. The most successful survey instruments at optical-infrared wavelengths today have obtained data on up to a million celestial sources using hundreds of multimode fibre...

  13. Imaging the Southern Sky An Amateur Astronomer's Guide

    CERN Document Server

    Chadwick, Stephen

    2012-01-01

    "If you're looking for a handy reference guide to help you image and explore the many splendors of the southern sky, Imaging the Southern Sky is the book for you. The work features not only stunning color images, all taken by Stephen Chadwick, of the best galaxies, nebulae, and clusters available to astrophotographers, but also lesser-known objects, some of which have gone largely unexplored! Beginners and experienced observers alike should appreciate the book's remarkable imagery and simple text, which provides concise and accurate information on each object and its epoch 2000.0 position, and also expert testimony on its visual nature. Each object essay also includes a section on technical information that should help astrophotographers in their planning, including telescope aperture, focal length and ratio, camera used, exposure times, and field size. As a charming bonus, the authors have taken the liberty to name many of the lesser-known objects to reflect their New Zealand heritage. Constellation by con...

  14. Advanced methods and algorithm for high precision astronomical imaging

    International Nuclear Information System (INIS)

    Ngole-Mboula, Fred-Maurice

    2016-01-01

    One of the biggest challenges of modern cosmology is to gain a more precise knowledge of the natures of dark energy and dark matter. Fortunately, dark matter can be traced directly through its gravitational effect on galaxy shapes. The European Space Agency's Euclid mission will provide data precisely for such a purpose. A critical step in analyzing these data will be to accurately model the instrument's Point Spread Function (PSF), which is the focus of this thesis. We developed non-parametric methods to reliably estimate the PSFs across an instrument's field of view, based on unresolved star images and accounting for noise, undersampling, and the PSFs' spatial variability. At the core of these contributions are modern mathematical tools and concepts such as sparsity. An important extension of this work will be to account for the PSFs' wavelength dependency. (author) [fr

  15. Viewing and imaging the solar system a guide for amateur astronomers

    CERN Document Server

    Clark, Jane

    2015-01-01

    Viewing and Imaging the Solar System: A Guide for Amateur Astronomers is for those who want to develop their ability to observe and image Solar System objects, including the planets and moons, the Sun, and comets and asteroids. They might be beginners, or they may have already owned and used an astronomical telescope for a year or more. Newcomers are almost always wowed by sights such as the rings of Saturn and the moons of Jupiter, but have little idea how to find these objects for themselves (with the obvious exceptions of the Sun and Moon). They also need guidance about what equipment to use, besides a telescope. This book is written by an expert on the Solar System, who has had a lot of experience with outreach programs, which teach others how to make the most of relatively simple and low-cost equipment. That does not mean that this book is not for serious amateurs. On the contrary, it is designed to show amateur astronomers, in a relatively light-hearted—and math-free way—how to become serious.

  16. ImageX: new and improved image explorer for astronomical images and beyond

    Science.gov (United States)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. 
We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan

  17. Novel optical designs for consumer astronomical telescopes and their application to professional imaging

    Science.gov (United States)

    Wise, Peter; Hodgson, Alan

    2006-06-01

    Since the launch of the Hubble Space Telescope there has been widespread popular interest in astronomy. A further series of events, most notably the recent Deep Impact mission and the Mars oppositions, has served to fuel further interest. As a result, more and more amateurs are coming into astronomy as a practical hobby. At the same time, more sophisticated optical equipment is becoming available as the price-to-performance ratio becomes more favourable; larger and better optical telescopes are consequently now in use by amateurs. We have also seen explosive growth in digital imaging technologies. In addition to displacing photographic film as the preferred image-capture modality, digital imaging has made the capture of high-quality astronomical imagery accessible to a wider segment of the astronomy community. This customer requirement has also had an impact on telescope design: there is now a greater imperative for wide, flat image fields in these telescopes to take advantage of the ongoing advances in CCD imaging technology. As a result of these market drivers, designers of consumer astronomical telescopes are now producing state-of-the-art designs that yield wide, flat fields with well-corrected spatial and chromatic aberrations. Whilst some of these designs are not scalable to the larger apertures required for professional ground and airborne telescope use, there are some that are eminently suited to make this transition.

  18. Realization of High Dynamic Range Imaging in the GLORIA Network and Its Effect on Astronomical Measurement

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2016-01-01

    Citizen science project GLORIA (GLObal Robotic-telescopes Intelligent Array) is the first free- and open-access network of robotic telescopes in the world. It provides a web-based environment where users can do research in astronomy by observing with robotic telescopes and/or by analyzing data that other users have acquired with GLORIA or from other free-access databases. The network of 17 telescopes allows users to control selected telescopes in real time or to schedule more demanding observations. This paper deals with the new opportunities that the GLORIA project provides to teachers and students at various levels of education. At the moment, educational materials have been prepared for events such as solar eclipses (measuring local atmospheric changes), the Aurora Borealis (calculating the height of the Northern Lights), and the transit of Venus (measuring the Earth-Sun distance). Students should be able to learn the principles of CCD imaging, spectral analysis, basic calibration such as dark-frame subtraction, and advanced methods of noise suppression. Every user of the network can design his or her own experiment. We propose an advanced experiment aimed at obtaining astronomical image data with high dynamic range. We also introduce methods of objective image-quality evaluation in order to discover how HDR methods affect astronomical measurements.
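
    A common way to realize a high-dynamic-range composite from bracketed exposures is to scale each frame by its inverse exposure time and average, excluding saturated pixels. The sketch below shows that general strategy on synthetic data; it is not GLORIA's pipeline, and the weighting scheme and numbers are illustrative.

```python
import numpy as np

def combine_hdr(frames, exposure_times, saturation=0.95):
    """Merge differently exposed frames into one radiance map: scale each
    frame by 1/exposure and average, down-weighting clipped pixels."""
    acc = np.zeros_like(frames[0], dtype=float)
    wsum = np.zeros_like(acc)
    for frame, t_exp in zip(frames, exposure_times):
        w = np.where(frame < saturation, 1.0, 1e-6)  # ignore saturated pixels
        acc += w * frame / t_exp
        wsum += w
    return acc / wsum

# toy scene whose true radiance spans four orders of magnitude
radiance = np.array([[1e-3, 1e-2], [1e-1, 1.0]])
times = [1.0, 0.1, 0.01]
frames = [np.clip(radiance * t_exp, 0.0, 1.0) for t_exp in times]  # sensor clips at 1.0
hdr = combine_hdr(frames, times)
```
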

  19. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.

    AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017.

    The AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. [Collins et al. 2017]

    Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.

    Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.

    Some features of AstroImageJ (as reported by Astrobites):
    - Image calibration: generate master flat, dark, and bias frames
    - Image arithmetic: combine images via subtraction, addition, division, multiplication, etc.
    - Stack editing: easily perform operations on a series of images
    - Image stabilization and image alignment features
    - Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates
    - WCS coordinates: determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net
    - Macro and plugin support: write your own macros
    - Multi-aperture photometry
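
    The calibration features listed above come down to standard CCD reduction arithmetic: subtract the bias and dark, then divide by the normalized flat. The sketch below shows that arithmetic in generic form; it is not AstroImageJ code, and the frames are synthetic.

```python
import numpy as np

def calibrate(raw, master_bias, master_dark, master_flat):
    """Standard CCD calibration: subtract bias and dark frames,
    then divide by the flat field normalized to unit mean."""
    flat = master_flat / np.mean(master_flat)
    return (raw - master_bias - master_dark) / flat

# toy frames: a uniform 100-count sky seen through a non-uniform pixel response
sky = 100.0
bias = np.full((4, 4), 300.0)
dark = np.full((4, 4), 10.0)
response = np.linspace(0.8, 1.2, 16).reshape(4, 4)   # pixel sensitivity map
raw = bias + dark + sky * response
cal = calibrate(raw, bias, dark, response)
```
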

  20. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first-class citizens. SciQL lowers the barrier to adopting a relational DBMS (RDBMS) in scientific domains, because it includes functionality often found only in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope, in particular how the LOFAR light-curve database of all detected sources can be constructed by correlating sources across the spatial, frequency, time and polarisation domains.

  1. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    Science.gov (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
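
    For context, the quantity being accelerated is the standard GP log-likelihood, shown below evaluated the naive O(n^3) way with a stochastically driven damped harmonic oscillator covariance; the paper's method evaluates the same expression in O(n). Parameter names follow common convention and are assumptions, not the paper's code.

```python
import numpy as np

def sho_kernel(tau, S0, w0, Q):
    """Covariance of a stochastically driven damped harmonic oscillator
    (underdamped case, Q > 1/2), a member of the exponential-mixture family."""
    eta = np.sqrt(1.0 - 1.0 / (4.0 * Q**2))
    c = w0 / (2.0 * Q)
    return (S0 * w0 * Q * np.exp(-c * np.abs(tau))
            * (np.cos(eta * w0 * tau)
               + np.sin(eta * w0 * np.abs(tau)) / (2.0 * eta * Q)))

def gp_log_likelihood(t, y, yerr, S0, w0, Q):
    """Naive O(n^3) GP log-likelihood via a Cholesky factorization."""
    K = sho_kernel(t[:, None] - t[None, :], S0, w0, Q) + np.diag(yerr**2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * (y @ alpha) - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2.0 * np.pi))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
y = np.sin(2.0 * np.pi * t / 3.0) + 0.1 * rng.normal(size=t.size)
ll = gp_log_likelihood(t, y, 0.1 * np.ones(t.size), S0=1.0, w0=2.0, Q=3.0)
```
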

  2. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

    Digital images are used as an information carrier in different sciences and technologies, and there is a persistent drive to increase the number of bits per image pixel in order to obtain more information. In the paper, some methods of compression and contour detection on the basis of a two-dimensional Markov chain are offered. Increasing the number of bits per image pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues, and surpass them in processing speed. An image is separated into binary images that are processed in parallel, so there is no loss of speed as the number of bits per pixel increases. One more advantage of the methods is their low consumption of energy resources: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.
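
    The separation into binary images that these methods rely on is ordinary bit-plane slicing, sketched below (illustrative code, not the authors' implementation):

```python
import numpy as np

def to_bit_planes(image, n_bits=8):
    """Split an integer image into binary bit-plane images, MSB first.
    Each plane can then be processed independently with logical operations."""
    return [(image >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def from_bit_planes(planes):
    """Recombine binary planes (MSB first) into the integer image."""
    image = np.zeros_like(planes[0])
    for plane in planes:
        image = (image << 1) | plane
    return image

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (32, 32), dtype=np.uint16)
planes = to_bit_planes(img)
restored = from_bit_planes(planes)
```
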

  3. Image perception and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be capable, in an automated system, of 'reading' X-ray pictures.

  4. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be capable, in an automated system, of 'reading' X-ray pictures. (orig.) [de

  5. Astronomical and Meteoritic Evidence for the Nature of Interstellar Dust and Its Processing in Protoplanetary Disks

    Science.gov (United States)

    Alexander, C. M. O'd.; Boss, A. P.; Keller, L. P.; Nuth, J. A.; Weinberger, A.

    Here we compare the astronomical and meteoritic evidence for the nature and origin of interstellar dust, and how it is processed in protoplanetary disks. The relative abundances of circumstellar grains in meteorites and interplanetary dust particles (IDPs) are broadly consistent with most astronomical estimates of galactic dust production, although graphite/amorphous C is highly underabundant. The major carbonaceous component in meteorites and IDPs is an insoluble organic material (IOM) that probably formed in the interstellar medium, but a solar origin cannot be ruled out. GEMS (glass with embedded metal and sulfide) that are isotopically solar within error are the best candidates for interstellar silicates, but it is also possible that they are solar system condensates. No dust from young stellar objects has been identified in IDPs, but it is difficult to differentiate them from solar system material or indeed some circumstellar condensates. The crystalline silicates in IDPs are mostly solar condensates, with lesser amounts of annealed GEMS. The IOM abundances in IDPs are roughly consistent with the degree of processing indicated by their crystallinity if the processed material was ISM dust. The IOM contents of meteorites are much lower, suggesting that there was a gradient in dust processing in the solar system. The microstructure of much of the pyroxene in IDPs suggests that it formed at temperatures >1258 K and cooled relatively rapidly (~1000 K/h). This cooling rate favors shock heating rather than radial transport of material annealed in the hot inner disk as the mechanism for producing crystalline dust in comets and IDPs. Shock heating is also a likely mechanism for producing chondrules in meteorites, but the dust was probably heated at a different time and/or location to chondrules.

  6. Super resolution for astronomical observations

    Science.gov (United States)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented; the developed process incorporates flat-field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show more detail; these gains derive from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most space-based or ground-based telescopes.
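
    The registration-and-co-addition core of such a pipeline can be sketched with FFT cross-correlation and shift-and-add. The paper's method goes much further (sub-pixel registration, star centering, pixel-reliability weighting), so the code below, with its integer shifts and synthetic frames, is only a simplified illustration.

```python
import numpy as np

def register_shift(ref, frame):
    """Integer-pixel shift that best aligns `frame` to `ref`, found via FFT
    cross-correlation (real SR methods refine this to sub-pixel accuracy)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    return [s - n if s > n // 2 else s for s, n in zip(idx, corr.shape)]

def shift_and_add(frames):
    """Align every frame to the first and average them."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for frame in frames:
        shift = register_shift(ref, frame)
        acc += np.roll(frame, shift, axis=(0, 1))
    return acc / len(frames)

# synthetic sequence: one bright star, shifted between frames, plus noise
rng = np.random.default_rng(4)
scene = np.zeros((32, 32))
scene[16, 16] = 100.0
shifts = [(0, 0), (2, -1), (-3, 4)]
frames = [np.roll(scene, s, axis=(0, 1)) + 0.01 * rng.normal(size=scene.shape)
          for s in shifts]
stacked = shift_and_add(frames)
```
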

  7. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  8. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  9. Perspectives of intellectual processing of large volumes of astronomical data using neural networks

    Science.gov (United States)

    Gorbunov, A. A.; Isaev, E. A.; Samodurov, V. A.

    2018-01-01

    In the process of astronomical observations, vast amounts of data are collected. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). These data have important implications for both short- and long-term monitoring of various classes of radio sources (including radio transients of different natures), monitoring of the Earth's ionosphere, the interplanetary and interstellar plasma, and the search for and monitoring of different classes of radio sources. In the framework of the studies, 83,096 individual pulse events were discovered (in the study interval July 2012 - October 2013), which may correspond to pulsars, scintillating sources, and fast radio transients. The detected impulse events are intended to be used to filter subsequent observations. The study suggests an approach based on a multilayered artificial neural network that processes the raw input data; after processing by the hidden layer, the output layer produces a class for each impulsive phenomenon.
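
    The classifier outlined in the abstract is a conventional feed-forward network: raw samples in, a hidden layer, and a class at the output. Below is a minimal one-hidden-layer network trained on synthetic pulse/noise windows; the window length, layer sizes, and training details are assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic training windows of raw samples: half contain a crude pulse,
# half are pure noise (stand-ins for real single-pulse candidates)
def make_window(has_pulse):
    w = rng.normal(0.0, 1.0, 32)
    if has_pulse:
        w[12:20] += 5.0
    return w

X = np.array([make_window(i % 2 == 0) for i in range(200)])
t = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 8 tanh units, logistic output, full-batch gradient
# descent on the cross-entropy loss
W1 = 0.1 * rng.normal(size=(32, 8))
b1 = np.zeros(8)
W2 = 0.1 * rng.normal(size=8)
b2 = 0.0
lr = 0.5
for _ in range(1500):
    h = np.tanh(X @ W1 + b1)              # hidden layer activations
    p = sigmoid(h @ W2 + b2)              # output class probabilities
    g = (p - t) / len(t)                  # gradient of loss w.r.t. output logits
    gh = np.outer(g, W2) * (1.0 - h**2)   # backpropagate through tanh
    W2 -= lr * (h.T @ g)
    b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh)
    b1 -= lr * gh.sum(axis=0)

accuracy = np.mean((p > 0.5) == (t == 1.0))
```
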

  10. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in the technology for processing medical images, to the development of better and cheaper computers and, additionally, to advances in systems for the communication of medical images, the acquisition, storage and handling of digital images has acquired great importance in all branches of medicine. This article seeks to introduce some fundamental ideas of digital image processing, covering such aspects as image representation, storage, enhancement, visualization and understanding.

  11. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
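
    The RL update that SGP replaces in the inner iterations is the classical Richardson-Lucy multiplicative step, x <- x * H^T(y / (H x)). Below is a minimal non-blind sketch with a known PSF and FFT-based circular convolution; the paper's blind method alternates analogous updates for the object and the PSF under the constraints described above.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=200):
    """Classical Richardson-Lucy deconvolution with a *known* PSF."""
    psf = psf / psf.sum()
    otf = np.fft.rfft2(np.fft.ifftshift(psf))
    conv = lambda x, h: np.fft.irfft2(np.fft.rfft2(x) * h, s=x.shape)
    est = np.full_like(image, image.mean())      # flat, positive start
    for _ in range(n_iter):
        ratio = image / np.maximum(conv(est, otf), 1e-12)
        est *= conv(ratio, np.conj(otf))         # multiply by H^T(ratio)
    return est

# toy scene: two point sources blurred by a Gaussian PSF
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / (2.0 * 2.0**2))
scene = np.zeros((n, n))
scene[20, 20] = 1.0
scene[40, 45] = 2.0
otf0 = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()))
blurred = np.fft.irfft2(np.fft.rfft2(scene) * otf0, s=scene.shape)
restored = richardson_lucy(blurred, psf)
```

    The multiplicative form preserves non-negativity and total flux, which is why RL-type iterations suit Poisson-noise data.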

  12. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained-and heavily illustrated-this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  13. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    ...-year graduate students in almost any technical discipline. The leading textbook in its field for more than twenty years, it continues its cutting-edge focus on contemporary developments in all mainstream areas of image processing-e.g...

  14. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  15. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to directly digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances made in academia. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  16. Astronomical imaging with a low temperature InSb charge injection device (CID)

    International Nuclear Information System (INIS)

    Rouan, D.; Lacombe, F.; Tiphene, D.; Stefanovitch, D.; Phan van, D.

    1986-01-01

    InSb charge injection device (CID) technology focal plane arrays employ two coupled MIS capacitors which collect and store photon-generated charge carriers. Attention is presently given to two-dimensional arrays for 77 K and 4 K operating temperatures in astronomical applications; two such prototypes for ground observations have been developed for use with a 2-m telescope. A CID InSb array is noted to be a useful candidate for the proposed IR Space Observatory's focal plane camera. 7 references

  17. Astronomical Cybersketching

    CERN Document Server

    Grego, Peter

    2009-01-01

    Outlines the techniques involved in making observational sketches and more detailed 'scientific' drawings of a wide variety of astronomical subjects using modern digital equipment, primarily PDAs and tablet PCs. This book also discusses choosing hardware and software.

  18. Astro-imaging projects for amateur astronomers a maker’s guide

    CERN Document Server

    Chung, Jim

    2015-01-01

    This is the must-have guide for all amateur astronomers who double as makers, doers, tinkerers, problem-solvers, and inventors. In a world where an amateur astronomy habit can easily run into the many thousands of dollars, it is still possible for practitioners to get high-quality results and equipment on a budget by utilizing DIY techniques. Surprisingly, it's not that hard to modify existing equipment to get new and improved usability from older or outdated technology, creating an end result that can outshine the pricey higher-end tools. All it takes is some elbow grease, a creative and open mind and the help of Chung's hard-won knowledge on building and modifying telescopes and cameras. With this book, it is possible for readers to improve their craft, making their equipment more user friendly. The tools are at hand, and the advice on how to do it is here. Readers will discover a comprehensive presentation of astronomical projects that any amateur on any budget can replicate – projects that utilize lead...

  19. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

Medical image processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques including volume rendering and virtual endoscopy are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state-of-the-art of medical imaging processing methods, practical implications for the radiologist's daily work and future aspects. (orig.) [de

  20. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    Science.gov (United States)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
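The baseline decorrelation that motivates SP-SAP can be sketched with the standard fringe-washing approximation: for a flat channel response, the correlated amplitude of an interferer falls off roughly as |sinc(τ·Δν)|, where τ is the residual delay along the baseline and Δν the channel bandwidth. A minimal sketch (the function name and the example delays are illustrative assumptions, not values from the paper):

```python
import math

def rfi_correlation_loss(delay_s, channel_bw_hz):
    """Approximate fringe-washing (decorrelation) factor of an RFI
    source across one baseline, |sinc(tau * dnu)| for a flat channel.
    Values near 1 mean the interferer stays correlated; values near 0
    mean it decorrelates and subspace estimation degrades."""
    x = delay_s * channel_bw_hz
    if x == 0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

# A short baseline (small residual delay) versus a long baseline, both
# with the 15 kHz channel bandwidth quoted in the abstract:
short = rfi_correlation_loss(1e-6, 15e3)   # ~1 us residual delay
long_ = rfi_correlation_loss(60e-6, 15e3)  # ~60 us residual delay
```

With these illustrative numbers the short baseline retains essentially full correlation while the long one loses most of it, which is why grouping highly mutually correlated elements into subarrays improves the subspace estimate.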

  1. Processing Of Binary Images

    Science.gov (United States)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
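The adaptive thresholding step mentioned above can be illustrated in a few lines: each pixel is compared against the mean of its local neighborhood, which handles the uneven illumination that defeats a single global threshold in document scans. This is a toy sketch in plain Python (the image, function name, and parameters are hypothetical, not from the paper):

```python
def adaptive_threshold(image, block=3, offset=0):
    """Binarize a grayscale image (list of lists, 0-255) by comparing
    each pixel to the mean of its (2*block+1)^2 neighborhood, a common
    adaptive-thresholding scheme for unevenly lit documents."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - block), min(h, y + block + 1))
                    for xx in range(max(0, x - block), min(w, x + block + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if image[y][x] > mean + offset else 0
    return out

# A gradient background with a darker "ink" stroke across one row; a
# single global threshold would miss part of the stroke, but the local
# mean separates it everywhere along the gradient.
img = [[10 * x for x in range(8)] for _ in range(8)]
img[4] = [v - 30 if v >= 30 else 0 for v in img[4]]  # hypothetical stroke row
binary = adaptive_threshold(img, block=1)
```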

  2. The Groningen image processing system

    International Nuclear Information System (INIS)

    Allen, R.J.; Ekers, R.D.; Terlouw, J.P.

    1985-01-01

This paper describes an interactive, integrated software and hardware computer system for the reduction and analysis of astronomical images. A short historical introduction is presented before some examples of the astronomical data currently handled by the system are shown. A description is given of the present hardware and software structure. The system is illustrated by describing its appearance to the user, to the applications programmer, and to the system manager. Some quantitative information on the size and cost of the system is given, and its good and bad features are discussed

  3. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  4. One-Shot Color Astronomical Imaging In Less Time, For Less Money!

    CERN Document Server

    Kennedy, L A

    2012-01-01

    Anyone who has seen recent pictures of the many wondrous objects in space has surely been amazed by the stunning color images. Trying to capture images like these through your own telescope has always seemed too time-consuming, expensive, and complicated. However, with improvements in affordable, easy-to-use CCD imaging technology, you can now capture amazing images yourself. With today's improved "one-shot" color imagers, high-quality images can be taken in a fraction of the time and at a fraction of the cost, right from your own backyard. This book will show you how to harness the power of today's computerized telescopes and entry-level imagers to capture spectacular images that you can share with family and friends. It covers such topics as - evaluating your existing equipment, choosing the right imager, finding targets to image, telescope alignment, focusing and framing the image, exposure times, aligning and stacking multiple frames, image calibration, and enhancement techniques! - how to expand the numb...

  5. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  6. Astronomical Ecosystems

    Science.gov (United States)

    Neuenschwander, D. E.; Finkenbinder, L. R.

    2004-05-01

Just as quetzals and jaguars require specific ecological habitats to survive, so too must planets occupy a tightly constrained astronomical habitat to support life as we know it. With this theme in mind we relate the transferable features of our elementary astronomy course, "The Astronomical Basis of Life on Earth." Over the last five years, in a team-taught course that features a spring break field trip to Costa Rica, we have introduced astronomy through "astronomical ecosystems," emphasizing astronomical constraints on the prospects for life on Earth. Life requires energy, chemical elements, and long timescales, and we emphasize how cosmological, astrophysical, and geological realities, through stabilities and catastrophes, create and eliminate niches for biological life. The linkage between astronomy and biology gets immediate and personal: for example, studies in solar energy production are followed by hikes in the forest to examine the light-gathering strategies of photosynthetic organisms; a lesson on tides is conducted while standing up to our necks in one on a Pacific beach. Further linkages between astronomy and the human timescale concerns of biological diversity, cultural diversity, and environmental sustainability are natural and direct. Our experience of teaching "astronomy as habitat" strongly influences our "Astronomy 101" course in Oklahoma as well. This "inverted astrobiology" seems to transform our students' outlook, from the universe being something "out there" into something "we're in!" We thank the SNU Science Alumni support group "The Catalysts," and the SNU Quetzal Education and Research Center, San Gerardo de Dota, Costa Rica, for their support.

  7. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization; Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization. Psychophysical Vision Properties: Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model. Photometry and Colorimetry: Photometry; Color Matching; Colorimetry Concepts; Color Spaces. DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction; Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems. Image Quantization: Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization. DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization; Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  8. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  9. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
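A minimal example of the library's style, binarizing one of the bundled test images with Otsu's method (this sketch assumes scikit-image is installed; the specific image and filter are just one of many the library provides):

```python
from skimage import data, filters

# Load a bundled grayscale test image and binarize it with Otsu's
# method; scikit-image functions operate on plain NumPy arrays, so the
# result composes directly with the rest of the scientific Python stack.
image = data.camera()
threshold = filters.threshold_otsu(image)
binary = image > threshold
```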

  10. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray light curve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
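The "embarrassingly parallel" pattern the authors describe can be sketched with the Python standard library alone: any per-image routine with no shared state can be mapped across a worker pool unchanged. The measurement function and the toy images below are hypothetical stand-ins, not the authors' photometry code:

```python
from concurrent.futures import ThreadPoolExecutor

def measure(image):
    """Stand-in for a per-image analysis routine (e.g. photometry):
    here it just returns the mean pixel value of one image."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

# Each "image" is independent, so the work splits trivially across
# workers; for CPU-bound routines, ProcessPoolExecutor gives true
# multi-core speedup, while a thread pool is shown here for portability.
images = [[[i * j for j in range(16)] for _ in range(16)] for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    means = list(pool.map(measure, images))
```

Because the results come back in input order, downstream analysis code does not need to change when the pool size (or the number of lab machines) does.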

  11. Astronomical optics

    CERN Document Server

    Schroeder, Daniel J

    1988-01-01

    Written by a recognized expert in the field, this clearly presented, well-illustrated book provides both advanced level students and professionals with an authoritative, thorough presentation of the characteristics, including advantages and limitations, of telescopes and spectrographic instruments used by astronomers of today.Key Features* Written by a recognized expert in the field* Provides both advanced level students and professionals with an authoritative, thorough presentation of the characteristics, including advantages and limitations, of telescopes and spectrographic i

  12. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  13. Distributed Read-out Imaging Device array for astronomical observations in UV/VIS

    NARCIS (Netherlands)

    Hijmering, R.A.

    2009-01-01

STJs (Superconducting Tunneling Junctions) are being developed as spectro-photometers at wavelengths ranging from the NIR to X-rays. 10x12 arrays of STJs have already been successfully used as optical imaging spectrometers with S-Cam 3, on the William Herschel Telescope on La Palma and on the

  14. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach.This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  15. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  16. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  17. Laser Guidestar Satellite for Ground-based Adaptive Optics Imaging of Geosynchronous Satellites and Astronomical Targets

    Science.gov (United States)

    Marlow, W. A.; Cahoy, K.; Males, J.; Carlton, A.; Yoon, H.

    2015-12-01

Real-time observation and monitoring of geostationary (GEO) satellites with ground-based imaging systems would be an attractive alternative to fielding high cost, long lead, space-based imagers, but ground-based observations are inherently limited by atmospheric turbulence. Adaptive optics (AO) systems are used to help ground telescopes achieve diffraction-limited seeing. AO systems have historically relied on the use of bright natural guide stars or laser guide stars projected on a layer of the upper atmosphere by ground laser systems. There are several challenges with this approach, such as the sidereal motion of GEO objects relative to natural guide stars and the limitations of ground-based laser guide stars: they cannot be used to correct tip-tilt, they are not point sources, and they have finite angular sizes when detected at the receiver. There is a difference between the wavefront error measured using the guide star and that of the target due to the cone effect, which also makes it difficult to use a distributed aperture system with a larger baseline to improve resolution. Inspired by concepts previously proposed by A.H. Greenaway, we propose using a space-based laser guide star projected from a satellite orbiting the Earth. We show that a nanosatellite-based guide star system meets the needs for imaging GEO objects using a low power laser even from 36,000 km altitude. Satellite guide star (SGS) systems would be well above atmospheric turbulence and could provide a small angular size reference source. CubeSats offer inexpensive, frequent access to space at a fraction of the cost of traditional systems, and are now being deployed to geostationary orbits and on interplanetary trajectories. The fundamental CubeSat bus unit of 10 cm cubed can be combined in multiple units and offers a common form factor allowing for easy integration as secondary payloads on traditional launches and rapid testing of new technologies on-orbit.
We describe a 6U CubeSat SGS measuring 10 cm x 20 cm x

  18. Advanced Photon Counting Imaging Detectors with 100ps Timing for Astronomical and Space Sensing Applications

    Science.gov (United States)

    Siegmund, O.; Vallerga, J.; Welsh, B.; Rabin, M.; Bloch, J.

In recent years EAG has implemented a variety of high-resolution, large format, photon-counting MCP detectors in space instrumentation for the FUSE, GALEX, IMAGE, SOHO, and HST-COS satellite missions, as well as rocket and shuttle payloads. Our scheme of choice has been delay line readouts, which encode photon event position centroids by determining the difference in arrival time of the event charge at the two ends of a distributed resistive-capacitive (RC) delay line. Our most commonly used delay line configuration is the cross delay line (XDL). In its simplest form the delay-line encoding electronics consist of a fast amplifier for each end of the delay line, followed by time-to-digital converters (TDCs). We have achieved resolutions of Pulsar with a telescope as small as 1 m. Although microchannel plate delay line detectors meet many of the imaging and timing demands of various applications, they have limitations. The relatively high gain (10^7) reduces lifetime and local counting rate, and the fixed delay (tens of ns) makes multiple simultaneous event recording problematic. To overcome these limitations we have begun development of cross strip readout anodes for microchannel plate detectors. The cross strip (XS) anode is a coarse (~0.5 mm) multi-layer metal and ceramic pattern of crossed fingers on an alumina substrate. The charge cloud is matched to the anode period so that it is collected on several neighboring fingers, ensuring an accurate event charge centroid can be determined. Each finger of the anode is connected to a low noise charge sensitive amplifier, followed by A/D conversion of the individual strip charge values; hardware centroid determinations of better than 1/100 of a strip are possible. Recently we have commissioned a full 32 x 32 mm XS open face laboratory detector and demonstrated excellent resolution. With support from Los Alamos National Laboratory, NASA and NSF we are developing high rate (>10^7 Hz) XS encoding electronics that will encode temporally simultaneous

  19. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

The function of medical image processing is analysed, mentioning the developments, the physical agents, and the main categories, such as correction of distortion in image formation, increased detectability, parameter quantification, etc. (C.G.C.) [pt

  20. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

Increasing use of computerized methods for diagnostic imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage, documentation, and communication as the main points of interest for the application of digital image processing. As to the purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  1. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    Science.gov (United States)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  2. Astronomical chemistry.

    Science.gov (United States)

    Klemperer, William

    2011-01-01

    The discovery of polar polyatomic molecules in higher-density regions of the interstellar medium by means of their rotational emission detected by radioastronomy has changed our conception of the universe from essentially atomic to highly molecular. We discuss models for molecule formation, emphasizing the general lack of thermodynamic equilibrium. Detailed chemical kinetics is needed to understand molecule formation as well as destruction. Ion molecule reactions appear to be an important class for the generally low temperatures of the interstellar medium. The need for the intrinsically high-quality factor of rotational transitions to definitively pin down molecular emitters has been well established by radioastronomy. The observation of abundant molecular ions both positive and, as recently observed, negative provides benchmarks for chemical kinetic schemes. Of considerable importance in guiding our understanding of astronomical chemistry is the fact that the larger molecules (with more than five atoms) are all organic.

  3. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF fitting photometry are commonly used. Iterative methods are important namely in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
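The core idea of such a simulator, placing sources with known ground-truth fluxes under a chosen PSF and noise model so that photometry or deconvolution results can be scored against them, can be sketched in a few lines. This toy version (function name and parameters are illustrative, not GlencoeSim's) uses a Gaussian PSF and Gaussian sky noise:

```python
import math
import random

def render_field(size, stars, sigma=1.5, sky=10.0, noise=2.0, seed=42):
    """Render a toy star field: 'stars' is a list of (x, y, flux) with
    known ground-truth fluxes, each rendered as a normalized Gaussian
    PSF on a constant sky background with additive Gaussian noise."""
    rng = random.Random(seed)
    img = [[sky + rng.gauss(0.0, noise) for _ in range(size)]
           for _ in range(size)]
    for (sx, sy, flux) in stars:
        for y in range(size):
            for x in range(size):
                r2 = (x - sx) ** 2 + (y - sy) ** 2
                img[y][x] += (flux * math.exp(-r2 / (2 * sigma ** 2))
                              / (2 * math.pi * sigma ** 2))
    return img

# Two overlapping stars make a minimal "crowded field": because their
# true fluxes are known, any deconvolution or photometry result can be
# compared statistically against the input list.
field = render_field(32, [(12.0, 16.0, 500.0), (15.0, 16.0, 300.0)])
```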

  4. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
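The AR-to-MA transformation the abstract mentions is easy to see for AR(1): the recursion x_t = a·x_{t-1} + e_t unrolls into the moving average x_t = Σ_k a^k·e_{t-k}, so a truncated MA reproduces the AR process up to the truncation error a^order. A small sketch (illustrative, not the paper's FORTRAN algorithm):

```python
import random

def ar1(a, innovations):
    """Run the AR(1) recursion x_t = a*x_{t-1} + e_t with x_{-1} = 0."""
    x, out = 0.0, []
    for e in innovations:
        x = a * x + e
        out.append(x)
    return out

def ar1_as_ma(a, innovations, order=30):
    """Equivalent truncated MA form: x_t = sum_k a^k * e_{t-k}."""
    out = []
    for t in range(len(innovations)):
        out.append(sum((a ** k) * innovations[t - k]
                       for k in range(min(order, t + 1))))
    return out

# The same innovation sequence driven through both representations
# yields (numerically) the same series, which is what licenses fitting
# an AR model and reading off its MA pulse-shape interpretation.
rng = random.Random(0)
e = [rng.gauss(0, 1) for _ in range(200)]
x_ar = ar1(0.7, e)
x_ma = ar1_as_ma(0.7, e)
```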

  5. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

The recent advances in sensor quality and processing power provide excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then a survey of some image processing techniques, feature extraction, object recognition, and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented; these include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  6. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Building country image process

    Directory of Open Access Journals (Sweden)

    Zubović Jovan

    2005-01-01

The same branding principles are used for countries as for products; only the methods differ. Countries compete among themselves in tourism, foreign investment, and exports. A country's turnover is at the level of its reputation. Countries that begin as unknown or with a bad image will face limits in their operations or be marginalized, leaving them at the bottom of the international influence scale. On the other hand, countries with a good image, like Germany (despite two world wars), will have their products covered with a special "aura".

  8. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU was developed for real-time tracking in particle physics experiments, but offers flexible features with potential application in a wide range of fields. It has been proposed for accelerated pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), in real-time detection of space debris trails in astronomical images (space applications), and in brain emulation for image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.
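
    The pattern-matching idea behind the associative memory can be sketched in software: a bank of precomputed coarse hit patterns is compared against the hits of an event, and every stored pattern fully contained in the event is reported (the AM chip performs all these comparisons in parallel in hardware). The following toy Python sketch is illustrative only; the pattern bank and layer layout are invented for the example.

    ```python
    # Toy sketch of associative-memory pattern matching (hypothetical example,
    # not the actual AM chip logic): a bank of precomputed hit patterns is
    # matched against the observed detector hits.

    PATTERN_BANK = {
        (1, 3, 5, 7),   # each tuple: one coarse hit position per detector layer
        (2, 4, 6, 8),
        (1, 4, 6, 7),
    }

    def match_patterns(hits_per_layer, bank=PATTERN_BANK):
        """Return the stored patterns fully contained in the observed hits.

        hits_per_layer: list of sets, one set of coarse hit positions per layer.
        """
        matches = []
        for pattern in bank:
            if all(p in layer for p, layer in zip(pattern, hits_per_layer)):
                matches.append(pattern)
        return matches

    # Example: a noisy event containing the hits of one stored pattern.
    event = [{1, 2}, {3, 9}, {5}, {7, 8}]
    print(match_patterns(event))  # [(1, 3, 5, 7)]
    ```

    In hardware all bank comparisons happen simultaneously as hits stream in, which is what makes the approach real-time; the Python loop only conveys the matching rule.
    
    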

  9. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  10. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
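
    The two model families discussed in this record can be illustrated in a few lines: a moving average (MA) process is a finite weighted sum of past noise samples, while an autoregressive (AR) process feeds its own past values back in. The sketch below simulates both from the same Gaussian noise; the coefficients are arbitrary illustrative choices, not values from the paper.

    ```python
    import random

    random.seed(42)

    def moving_average(noise, b):
        """MA(q) process: x[t] = sum_k b[k] * n[t-k]."""
        q = len(b)
        return [sum(b[k] * noise[t - k] for k in range(q) if t - k >= 0)
                for t in range(len(noise))]

    def autoregressive(noise, a):
        """AR(p) process: x[t] = sum_k a[k] * x[t-1-k] + n[t]."""
        x = []
        for t, n in enumerate(noise):
            x.append(sum(a[k] * x[t - 1 - k]
                         for k in range(len(a)) if t - 1 - k >= 0) + n)
        return x

    noise = [random.gauss(0, 1) for _ in range(500)]
    ma = moving_average(noise, [1.0, 0.5])   # short, pulse-like memory
    ar = autoregressive(noise, [0.8])        # long, exponentially decaying memory
    ```

    The AR(1) process with coefficient 0.8 has variance 1/(1 - 0.8^2) ≈ 2.8 times the input noise variance, which is one simple way the two model families leave different signatures in a light curve.
    
    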

  11. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users, investing much less money, can make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics and, consequently, 'image processing' has emerged as a separate, independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudocolor images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudocolors in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays in search of imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)

  12. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data, using Agisoft PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or, increasingly often, on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (capturing large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  13. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

    Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionality required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer consisting largely of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.

  14. TECHNOLOGIES OF BRAIN IMAGES PROCESSING

    Directory of Open Access Journals (Sweden)

    O.M. Klyuchko

    2017-12-01

    Full Text Available The purpose of the present research was to analyze modern methods of processing biological images that are applied before storage in databases for biotechnological purposes; the databases are then incorporated into web-based digital systems. Examples of such information systems are described for two levels of biological material organization: databases for storing data from histological analysis and from the whole brain. Methods of neuroimage processing for an electronic brain atlas are considered. It is shown that certain pathological features can be revealed by histological image processing; several medical diagnostic techniques (for certain brain pathologies, etc.) as well as a few biotechnological methods are based on such effects. Algorithms for image processing are suggested. The electronic brain atlas, convenient for professionals in different fields, is described in detail, including approaches to its elaboration, a "composite" scheme for large deformations, and several methods of mathematical image processing.

  15. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  16. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING. Signals and Biomedical Signal Processing: Introduction and Overview; What Is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  17. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  18. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: First to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D project co-sponsored by BK Medical ApS., with the commercial goal to improve the image quality of BK Medicals scanners. Currently BK Medical employ a simple conventional delay-and-sum beamformer to generate...

  19. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay explains the present state of CT image processing technology (its recognition, acquisition and visualization roles in computer-assisted diagnosis (CAD) and surgery (CAS)) and offers a view of the future. Medical image processing has a history that runs from the discovery of X-rays to their application in diagnostic radiography, combination with the computer for CT and multi-detector row CT, and on to 3D/4D images for CAD and CAS. CAD is based on recognition of the normal anatomical structure of the human body, detection of possibly abnormal lesions, and visualization of the resulting measurements as images. Actual instances of CAD images are presented here for the chest (lung cancer), the abdomen (colorectal cancer) and a future body atlas (models of organs and diseases for imaging), a recent national project: computer anatomy. CAS involves surgical planning technology based on 3D images, navigation of the actual procedure, and navigation of endoscopy. As guidance for those beginning in image processing technology, the national and international communities are described: related academic societies, regularly held congresses, textbooks and workshops, and topics in the field such as the computed anatomy of an individual patient for CAD and CAS, data security, and standardization. In the authors' view, future preventive medicine will be based on imaging technology: ultimately, e.g., daily-life CAD for individuals, as exemplified by today's body thermometer and home sphygmomanometer, to monitor one's routine physical condition. (T.T.)

  20. Astronomical Spectroscopy for Amateurs

    CERN Document Server

    Harrison, Ken M

    2011-01-01

    Astronomical Spectroscopy for Amateurs is a complete guide for amateur astronomers who are looking for a new challenge beyond astrophotography. The book provides a brief overview of the history and development of the spectroscope, then a short introduction to the theory of stellar spectra, including details on the necessary reference spectra required for instrument testing and spectral comparison. The various types of spectroscopes available to the amateur are then described. Later sections cover all aspects of setting up and using various types of commercially available and home-built spectroscopes, starting with basic transmission gratings and going through more complex models, all the way to the sophisticated Littrow design. The final part of the text is about practical spectroscope design and construction. This book uniquely brings together a collection of observing, analyzing, and processing hints and tips that will allow the amateur to build skills in preparing scientifically acceptable spectra data. It...

  1. Choosing and using astronomical filters

    CERN Document Server

    Griffiths, Martin

    2014-01-01

    As a casual read through any of the major amateur astronomical magazines will demonstrate, there are filters available for all aspects of optical astronomy. This book provides a ready resource on the use of the following filters, among others, for observational astronomy or for imaging: Light pollution filters Planetary filters Solar filters Neutral density filters for Moon observation Deep-sky filters, for such objects as galaxies, nebulae and more Deep-sky objects can be imaged in much greater detail than was possible many years ago. Amateur astronomers can take

  2. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    Science.gov (United States)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
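
    The detection step that astronomical source finders perform, and that this pipeline reuses for animals in thermal footage, reduces to thresholding followed by connected-component segmentation: every group of adjacent above-threshold pixels becomes one candidate source. A minimal pure-Python sketch with an invented toy frame; real pipelines use sigma-clipped thresholds and dedicated tools such as SExtractor.

    ```python
    # Minimal sketch of threshold-and-segment source detection
    # (illustrative only, not the pipeline described in the paper).

    def detect_sources(image, threshold):
        """Return one set of (row, col) pixels per connected above-threshold blob."""
        rows, cols = len(image), len(image[0])
        seen, sources = set(), []
        for r in range(rows):
            for c in range(cols):
                if image[r][c] > threshold and (r, c) not in seen:
                    # flood fill with 4-connectivity
                    stack, blob = [(r, c)], set()
                    while stack:
                        y, x = stack.pop()
                        if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                            continue
                        if image[y][x] <= threshold:
                            continue
                        seen.add((y, x))
                        blob.add((y, x))
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                    sources.append(blob)
        return sources

    # Toy thermal frame: one warm 3-pixel blob and one 2-pixel blob.
    frame = [
        [0, 0, 0, 0, 0],
        [0, 9, 8, 0, 0],
        [0, 7, 0, 0, 6],
        [0, 0, 0, 0, 5],
    ]
    print(len(detect_sources(frame, 3)))  # 2
    ```

    Counting how many injected synthetic sources such a detector recovers at each brightness is exactly the "completeness" estimation the paper describes.
    
    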

  3. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms.
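
    Single-view reconstruction of a spherically symmetric distribution can be illustrated with a simple "onion peeling" inversion: the object is modeled as concentric shells, and shells are recovered from the outside in, subtracting each known shell's contribution to the projected radial profile. This is a generic sketch of that class of algorithm under stated assumptions, not necessarily the historical globular-cluster method the paper adapts.

    ```python
    import math

    def chord(i, j, dr=1.0):
        """Path length of spherical shell j (radii j*dr to (j+1)*dr) along the
        line of sight at impact parameter b = (i + 0.5) * dr."""
        b = (i + 0.5) * dr
        half = lambda R: math.sqrt(R * R - b * b) if R > b else 0.0
        return 2.0 * (half((j + 1) * dr) - half(j * dr))

    def onion_peel(profile, dr=1.0):
        """Recover shell densities from a projected radial profile, outside in."""
        n = len(profile)
        rho = [0.0] * n
        for i in reversed(range(n)):
            outer = sum(chord(i, j, dr) * rho[j] for j in range(i + 1, n))
            rho[i] = (profile[i] - outer) / chord(i, i, dr)
        return rho

    # Round trip: project a known density, then recover it from the single "view".
    true_rho = [2.0, 1.0, 0.5]
    profile = [sum(chord(i, j) * true_rho[j] for j in range(3)) for i in range(3)]
    print([round(r, 6) for r in onion_peel(profile)])  # [2.0, 1.0, 0.5]
    ```

    The round trip is exact here because the discretization used to project is the same one used to invert; with real, noisy camera data the outermost shells would be recovered first and noise would propagate inward.
    
    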

  4. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
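
    The min-sum difference equations that compute distance transforms can be demonstrated with the classic two-pass, chamfer-style recursion: each pixel takes the minimum of its already-updated neighbours' distances plus a unit step cost, once scanning forward and once backward. A minimal sketch for the city-block metric (our illustration of the technique, not code from the paper):

    ```python
    # Two-pass min-sum recursion computing the city-block (L1) distance
    # from every pixel to the nearest foreground pixel of a binary mask.

    INF = 10**9

    def distance_transform(mask):
        rows, cols = len(mask), len(mask[0])
        d = [[0 if mask[r][c] else INF for c in range(cols)] for r in range(rows)]
        # forward pass: propagate distances from the top-left
        for r in range(rows):
            for c in range(cols):
                if r > 0:
                    d[r][c] = min(d[r][c], d[r - 1][c] + 1)
                if c > 0:
                    d[r][c] = min(d[r][c], d[r][c - 1] + 1)
        # backward pass: propagate distances from the bottom-right
        for r in reversed(range(rows)):
            for c in reversed(range(cols)):
                if r < rows - 1:
                    d[r][c] = min(d[r][c], d[r + 1][c] + 1)
                if c < cols - 1:
                    d[r][c] = min(d[r][c], d[r][c + 1] + 1)
        return d

    mask = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(distance_transform(mask))
    # [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
    ```

    This is the sense in which distance transforms act as slope filters: the recursion is a min-sum analogue of a linear recursive filter, with minimum replacing summation.
    
    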

  5. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  6. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques are presented, together with the software documentation, for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both frequency domain and spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
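
    The speed advantage of recursive filtering comes from its constant per-sample cost: a first-order recursive (IIR) smoother needs one multiply-add per pixel regardless of the effective kernel width, whereas FFT-based convolution costs on the order of log N operations per sample. A minimal sketch of such a recursive smoother applied to one scanline; the coefficient is an illustrative choice, not a value from the paper.

    ```python
    # First-order recursive (IIR) smoothing of one image scanline:
    #   y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
    # One multiply-add per pixel, independent of the effective kernel width.

    def recursive_smooth(row, alpha=0.25):
        y, prev = [], row[0]
        for x in row:
            prev = alpha * x + (1 - alpha) * prev
            y.append(prev)
        return y

    scanline = [0.0, 0.0, 10.0, 0.0, 0.0]  # isolated bright pixel (noise spike)
    print([round(v, 3) for v in recursive_smooth(scanline)])
    # [0.0, 0.0, 2.5, 1.875, 1.406]
    ```

    Running the same filter right-to-left and averaging the two passes removes the one-sided lag, a standard trick when recursive filters are used in place of symmetric convolution kernels.
    
    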

  7. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, to detect and fill cracks, with applications in areas such as nanotechnology and biotechnology. In the third subject, we attempt to construct 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++. The fourth subject covers the use of image warping methods for finding the depth of 2-D images using affine transformation, bilinear transformation, projective mapping, mosaic warping and similarity transformation; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail: methods for creating Bezier curves and surfaces with unknown distribution, using only control points. At the end of the discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid; this depends on finding the Bezier curve and creating a family of curves (a surface), then filling in between to obtain the solid form. Another part of this subject concerns building 3-D geometric models from physical objects using image-based techniques. The advantage of image techniques is that they require no expensive equipment; we use NURBS, subdivision surfaces and meshes for finding the depth of any image with one still view or a 2-D image. The quality of filtering depends on the way the data is incorporated into the model.
The data should be treated with

  8. The New Amateur Astronomer

    Science.gov (United States)

    Mobberley, Martin

    Amateur astronomy has changed beyond recognition in less than two decades. The reason is, of course, technology. Affordable high-quality telescopes, computer-controlled 'go to' mountings, autoguiders, CCD cameras, video, and (as always) computers and the Internet are just a few of the advances that have revolutionized astronomy for the twenty-first century. Martin Mobberley first looks at the basics before going into an in-depth study of what's available commercially. He then moves on to the revolutionary possibilities that are open to amateurs, from imaging, through spectroscopy and photometry, to patrolling for near-earth objects - the search for comets and asteroids that may come close to, or even hit, the earth. The New Amateur Astronomer is a road map of the new astronomy, equally suitable for newcomers who want an introduction, or old hands who need to keep abreast of innovations. From the reviews: "This is one of several dozen books in Patrick Moore's "Practical Astronomy" series. Amid this large family, Mobberley finds his niche: the beginning high-tech amateur. The book's first half discusses equipment: computer-driven telescopes, CCD cameras, image processing software, etc. This market is changing every bit as rapidly as the computer world, so these details will be current for only a year or two. The rest of the book offers an overview of scientific projects that serious amateurs are carrying out these days. Throughout, basic formulas and technical terms are provided as needed, without formal derivations. An appendix with useful references and Web sites is also included. Readers will need more than this book if they are considering a plunge into high-tech amateur astronomy, but it certainly will whet their appetites. Mobberley's most valuable advice will save the book's owner many times its cover price: buy a quality telescope from a reputable dealer and install it in a simple shelter so it can be used with as little set-up time as possible. A poor

  9. Astronomy Legacy Project - Pisgah Astronomical Research Institute

    Science.gov (United States)

    Barker, Thurburn; Castelaz, Michael W.; Rottler, Lee; Cline, J. Donald

    2016-01-01

    Pisgah Astronomical Research Institute (PARI) is a not-for-profit public foundation in North Carolina dedicated to providing hands-on educational and research opportunities for a broad cross-section of users in science, technology, engineering and math (STEM) disciplines. In November 2007 a Workshop on a National Plan for Preserving Astronomical Photographic Data (2009ASPC,410,33O, Osborn, W. & Robbins, L) was held at PARI. The result was the establishment of the Astronomical Photographic Data Archive (APDA) at PARI. In late 2013 PARI began ALP (Astronomy Legacy Project). ALP's purpose is to digitize an extensive set of twentieth-century photographic astronomical data housed in APDA. Because of the wide range of plate types, plate dimensions and emulsions found among the 40+ collections, plate digitization will require a versatile set of scanners and digitizing instruments. Internet crowdfunding was used to assist in the purchase of additional digitization equipment, described at the AstroPlate2014 Plate Preservation Workshop (www.astroplate.cz) held in Prague, CZ, in March 2014. Equipment purchased included an Epson Expression 11000XL scanner and two Nikon D800E cameras. These digital instruments will complement a STScI GAMMA scanner now located in APDA. GAMMA will be adapted to use an electroluminescence light source and a digital camera with a telecentric lens to achieve high-speed, high-resolution scanning. The 1 μm precision XY stage of GAMMA will allow very precise positioning of the plate stage. Multiple overlapping CCD images of small sections of each plate ("tiles") will be combined using a photo-mosaic process similar to one used in Harvard's DASCH project. Implementation of a software pipeline for the creation of a SQL database containing plate images and metadata will be based upon APPLAUSE, as described by Tuvikene at AstroPlate2014 (www.astroplate.cz/programs/).

  10. Biographical encyclopedia of astronomers

    CERN Document Server

    Trimble, Virginia; Williams, Thomas; Bracher, Katherine; Jarrell, Richard; Marché, Jordan; Palmeri, JoAnn; Green, Daniel

    2014-01-01

    The Biographical Encyclopedia of Astronomers is a unique and valuable resource for historians and astronomers alike. It includes approximately 1850 biographical sketches of astronomers from antiquity to modern times. It is the collective work of 430 authors edited by an editorial board of 8 historians and astronomers. This reference provides biographical information on astronomers and cosmologists by utilizing contemporary historical scholarship. The fully corrected and updated second edition adds approximately 300 biographical sketches. Based on ongoing research and feedback from the community, the new entries fill gaps and provide expansions. In addition, greater emphasis is given to Russophone astronomers and radio astronomers. Individual entries vary from 100 to 1500 words, covering superluminaries such as Newton and Einstein as well as lesser-known astronomers like Galileo's acolyte, Mario Guiducci.

  11. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  12. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  13. Mathematical problems in image processing

    International Nuclear Information System (INIS)

    Chidume, C.E.

    2000-01-01

    This is the second volume of a new series of lecture notes of the Abdus Salam International Centre for Theoretical Physics. This volume contains the lecture notes given by A. Chambolle during the School on Mathematical Problems in Image Processing. The school consisted of two weeks of lecture courses and one week of conference

  14. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (scl) to obtain down-scaled images, the down-scaled images are subjected to motion- compensated processing (ME UPC) to obtain motion-compensated images, the motion- compensated images are up-scaled (sc2) to obtain up-scaled

  15. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted-neutron dynamic images using neutron radiography, a real-time system called the Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of a highly sensitive, high-resolution TV camera driven by a custom-made scanner; a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object; a real-time corrector that performs real-time correction of dark current, shading distortion and field-intensity fluctuation; a real-time filter for increasing the image signal-to-noise ratio; and a video recording unit and a pseudocolor monitor, which enable recording on commercially available media and monitoring on CRTs with standard TV scanning, respectively. The TV camera and the TV camera deflection controller used for producing still images can also be applied in this case. The block diagram of the real-time corrector is shown and its performance is explained. Linear filters and rank-order filters were developed. (K.I.)
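
    The dark-current/shading correction chain described above follows standard radiographic calibration arithmetic. A rough sketch is given below; the variable names and the beam-intensity normalization step are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def correct_frame(raw, dark, flat, gain_ref=None):
    """Real-time correction sketch: subtract dark current, divide by the
    shading (flat) field, and optionally normalize out frame-to-frame
    beam-intensity fluctuation against a reference level (gain_ref)."""
    signal = raw.astype(float) - dark
    shading = np.clip(flat.astype(float) - dark, 1e-6, None)  # avoid /0
    out = signal / shading
    if gain_ref is not None:                # beam-monitor normalization
        out *= gain_ref / max(out.mean(), 1e-6)
    return out
```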

  16. FITSManager: Management of Personal Astronomical Data

    Science.gov (United States)

    Cui, Chenzhou; Fan, Dongwei; Zhao, Yongheng; Kembhavi, Ajit; He, Boliang; Cao, Zihuang; Li, Jian; Nandrekar, Deoyani

    2011-07-01

    With the increase of personal storage capacity, it is easy to find hundreds to thousands of FITS files on the personal computer of an astrophysicist. Because the Flexible Image Transport System (FITS) is a professional data format initiated by astronomers and used mainly within that small community, few data-management toolkits exist for FITS files. Astronomers need a powerful tool to help them manage their local astronomical data. Although the Virtual Observatory (VO) is a network-oriented astronomical research environment, its applications and related technologies provide useful solutions to enhance the management and utilization of astronomical data hosted on an astronomer's personal computer. FITSManager is such a tool: it provides astronomers with efficient management and utilization of their local data, bringing the VO to astronomers in a seamless and transparent way. FITSManager offers a rich set of functions for FITS file management, including thumbnails, previews, type-dependent icons, header keyword indexing and search, and collaboration with other tools and online services. The development of FITSManager is an effort to fill the gap between the management and analysis of astronomical data.
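
    A FITS header is a sequence of 80-character ASCII "cards", which is what makes header keyword indexing, as in FITSManager, straightforward. The sketch below is a deliberately minimal stdlib parser for illustration only; real code should use a maintained reader such as astropy.io.fits, which handles quoting, CONTINUE cards and many corner cases this sketch ignores:

```python
def parse_fits_header(raw: bytes) -> dict:
    """Parse an ASCII FITS header (80-character cards) into a dict.

    Minimal sketch: handles 'KEYWORD = value' cards and stops at END;
    comment text after '/' is discarded."""
    cards = {}
    text = raw.decode("ascii")
    for i in range(0, len(text), 80):
        card = text[i:i + 80]
        key = card[:8].strip()
        if key == "END":
            break
        if card[8:10] != "= ":          # COMMENT/HISTORY/blank cards
            continue
        value = card[10:].split("/", 1)[0].strip()
        if value.startswith("'"):       # FITS string value
            value = value.strip("'").rstrip()
        elif value in ("T", "F"):       # FITS logical
            value = (value == "T")
        else:
            try:
                value = float(value) if "." in value or "E" in value else int(value)
            except ValueError:
                pass
        cards[key] = value
    return cards
```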

  17. Using fuzzy logic in image processing

    International Nuclear Information System (INIS)

    Ashabrawy, M.A.F.

    2002-01-01

    Due to the unavoidable merger of computing and mathematics, signal processing in general, and image processing in particular, have greatly improved and advanced. Signal processing deals with the processing of any signal data for use by a computer, while image processing deals specifically with images. Image processing involves the manipulation of image data for better appearance and viewing; consequently, it is a rapidly growing and exciting field to be involved in today. This work takes an applications-oriented approach to image processing. The applications are the maps and documents of the first Egyptian research reactor (ETRR-1), X-ray medical images, and fingerprint images. Since filters generally work on continuous ranges rather than discrete values, fuzzy logic techniques are more convenient: these techniques are powerful in image processing and can deal with one-dimensional (1-D) as well as two-dimensional (2-D) images
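
    As one concrete illustration of fuzzy techniques operating on continuous gray-level ranges, the classic Pal-King intensification (INT) operator maps gray levels to a fuzzy membership in [0, 1], sharpens the membership function, and maps back. This is a generic textbook example, not the thesis's actual method, and the parameters are assumptions:

```python
import numpy as np

def fuzzy_enhance(img: np.ndarray, passes: int = 1) -> np.ndarray:
    """Fuzzy contrast enhancement (Pal-King style sketch): fuzzify gray
    levels to [0, 1], apply the INT intensification operator, defuzzify."""
    g = img.astype(float)
    lo, hi = g.min(), g.max()
    mu = (g - lo) / (hi - lo + 1e-12)            # fuzzification
    for _ in range(passes):                      # INT operator
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return (mu * (hi - lo) + lo).round().astype(img.dtype)  # defuzzify
```

    Mid-dark pixels are pushed darker and mid-bright pixels brighter, increasing contrast while leaving the extremes fixed.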

  18. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open source program written in Java that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS).

  19. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Image quality dependence on image processing software in computed radiography. ... Agfa CR readers use MUSICA software, and an upgrade with significantly different image ...

  20. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Yutao Wu; Daoliang Li; Zhenbo Li; Wenzhu Yang

    2014-01-01

    In the textile industry, it is always the case that cotton products are constitutive of many types of foreign fibers which affect the overall quality of cotton products. As the foundation of the foreign fiber automated inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps, image block, image pre-decision, image background extra...

  1. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, cotton products often contain many types of foreign fibers, which affect their overall quality. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach comprises five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and their gray scale is inverted; the whole image is then divided into several blocks. Next, image pre-decision judges which blocks may contain a target foreign fiber. Blocks that may contain targets are segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented blocks are connected to obtain an intact and clear image of the foreign fiber target. Experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it reconnects target images fractured by blocking, yielding an intact and clear foreign fiber target image.
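
    The block/pre-decision/segmentation part of the pipeline can be sketched as below: each block is first screened with a cheap uniformity test, and only candidate blocks are thresholded with Otsu's method (which maximizes the between-class variance of the gray-level histogram). The block size and the pre-decision statistic are placeholder choices, not the paper's tuned values:

```python
import numpy as np

def otsu_threshold(block: np.ndarray) -> int:
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def segment_by_blocks(gray: np.ndarray, size: int = 32, min_std: float = 5.0) -> np.ndarray:
    """Pre-decide which blocks may contain a fiber (std test), then Otsu each."""
    out = np.zeros_like(gray, dtype=bool)
    for r in range(0, gray.shape[0], size):
        for c in range(0, gray.shape[1], size):
            blk = gray[r:r + size, c:c + size]
            if blk.std() < min_std:         # pre-decision: uniform background
                continue
            out[r:r + size, c:c + size] = blk > otsu_threshold(blk)
    return out
```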

  2. Biomedical signal and image processing.

    Science.gov (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are dealt with more completely in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first more oriented to physiological issues and how to model them, and the second more dedicated to the development of processing tools and algorithms for extracting useful information from clinical data. A practical consequence was that those who built models did not do signal processing, and vice versa. In recent years, however, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly [1], [2]. This is true not only for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital-sign monitoring all require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  3. Review of Biomedical Image Processing

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2011-11-01

    Full Text Available Abstract This article is a review of the book: 'Biomedical Image Processing', by Thomas M. Deserno, which is published by Springer-Verlag. Salient information that will be useful to decide whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, are presented in the review. This includes information about the book details, a summary, the suitability of the text in course and research work, the framework of the book, its specific content, and conclusions.

  4. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    A method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercial image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were developed on a trial basis, and their accuracy and problems were tested using various photographs for which judgements had already been made. Depending on the state of the photographed objects and the inspection conditions, judgement accuracies from 100% to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs produced by the Japan Cast Steel Association. In nondestructive inspection by radiography, the number and size of defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, image processing technology on personal computers has advanced; by utilizing this technology, the automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency and realize labor saving. (K.I.)

  5. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of the temperature distribution of an object's surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body's surface can contribute to the diagnosis and follow-up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and of temperature-pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI). A number of indices have been developed by researchers to that end. In this study, a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower-back area of scoliotic patients. The indices are based on first-order statistical parameters describing the temperature distribution. Analysis and comparison of these indices result in evaluating the temperature-distribution pattern of the back trunk expected in subjects who are healthy with respect to spinal problems.
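
    First-order statistical indices for symmetric ROIs can be computed directly from the temperature values; the sketch below uses a plain absolute mean-temperature difference as the asymmetry index. The index definitions here are illustrative assumptions, since the abstract does not give the exact formulas:

```python
import numpy as np

def roi_indices(temps: np.ndarray) -> dict:
    """First-order statistics of a temperature ROI (values e.g. in deg C)."""
    t = temps.ravel().astype(float)
    return {"mean": t.mean(), "std": t.std(),
            "min": t.min(), "max": t.max(),
            "range": t.max() - t.min()}

def asymmetry(left: np.ndarray, right: np.ndarray) -> float:
    """Hypothetical symmetry index: absolute mean-temperature difference
    between two mirror-image ROIs (one possible choice among many)."""
    return abs(roi_indices(left)["mean"] - roi_indices(right)["mean"])
```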

  6. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  7. Radiology image orientation processing for workstation display

    Science.gov (United States)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in Computed Radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of the quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.

  8. Digital Data Processing of Images

    African Journals Online (AJOL)

    be concerned with the image enhancement of scintigrams. Two applications of image ... obtained from scintigraphic equipment, image enhancement by computer was ... used as an example. ... Using video-tape display, areas of interest are ...

  9. The Practical Astronomer

    Science.gov (United States)

    Koester, Jack

    "The Practical Astronomer" by Thomas Dick, LLD, E.C. & J. Biddle, Philadelphia, 1849, is reviewed. Information on telescope makers and astronomers can be found. Mentioned are: Fraunhofer; John Herschel; Lawson; Dollond; Tulley; W. & S. Jones; and S.W. Burnham.

  10. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  11. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  12. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    Science.gov (United States)

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  13. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
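
    The core of PCA-based PSF subtraction, common to the family of algorithms VIP implements, is to model the quasi-static speckle field as a low-rank component of the image sequence and subtract it. Below is a minimal numpy sketch of that idea (not VIP's API; a full ADI pipeline would also derotate the residual frames by their parallactic angles and combine them):

```python
import numpy as np

def pca_psf_subtract(cube: np.ndarray, ncomp: int) -> np.ndarray:
    """PCA speckle subtraction sketch. cube has shape (nframes, ny, nx):
    project each frame onto the first ncomp principal components of the
    frame library and subtract that low-rank speckle model."""
    n, ny, nx = cube.shape
    X = cube.reshape(n, ny * nx).astype(float)
    X = X - X.mean(axis=0)                  # remove the mean frame
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:ncomp]                          # first ncomp eigen-images
    model = (X @ V.T) @ V                   # low-rank speckle model
    return (X - model).reshape(n, ny, nx)   # residual frames
```

    A faint companion that does not share the speckles' correlation pattern survives the subtraction and dominates the residuals.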

  14. Scilab and SIP for Image Processing

    OpenAIRE

    Fabbri, Ricardo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of Image Processing and Analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic ...

  15. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Now, image processing boards for personal computers as well as image processing systems for workstations have been developed and are widely applied to medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides a computer system with vision capability, not only recognizes non-obvious information but also processes large amounts of information; it is therefore applied to various fields such as remote measurement, object recognition and decision-making in adverse environments, and analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are analysed with a view to proposing ideas for future applications. (Author)

  16. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system structured around the Digital Equipment Corporation PDP-8 minicomputer which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views

  17. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". It is used for robotics, image enhancement, image analysis and digital television.
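
    A common way to compensate for known defective elements, in the spirit of the Tech Brief above, is to replace each flagged pixel with the median of its good neighbors. The mask-based interface below is an assumption for illustration, not the brief's actual algorithm:

```python
import numpy as np

def remove_hotspots(image: np.ndarray, bad_mask: np.ndarray) -> np.ndarray:
    """Replace known-defective pixels by the median of their 3x3 neighbors.

    bad_mask flags the defective CCD elements (True = hot pixel); flagged
    neighbors are excluded from the median so hotspots never propagate."""
    out = image.astype(float).copy()
    ny, nx = image.shape
    for y, x in zip(*np.nonzero(bad_mask)):
        ys = slice(max(y - 1, 0), min(y + 2, ny))
        xs = slice(max(x - 1, 0), min(x + 2, nx))
        nb = image[ys, xs][~bad_mask[ys, xs]]   # good neighbors only
        if nb.size:
            out[y, x] = np.median(nb)
    return out
```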

  18. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  19. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  20. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    Detailed image processing for laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods were combined to deal with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire inspection.
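
    The final unwrapping step relies on a simple rule: wherever the wrapped phase jumps by more than pi between neighboring samples, a 2*pi wrap is assumed and corrected. A 1-D sketch of that rule follows (numpy's np.unwrap implements the same idea; the abstract does not specify the paper's exact procedure):

```python
import numpy as np

def unwrap_row(wrapped: np.ndarray) -> np.ndarray:
    """Unwrap a 1-D wrapped phase signal (values in [-pi, pi)): wherever
    consecutive samples jump by more than pi, assume a 2*pi wrap occurred
    and accumulate the corresponding correction."""
    out = wrapped.astype(float).copy()
    correction = 0.0
    for i in range(1, out.size):
        d = wrapped[i] - wrapped[i - 1]
        if d > np.pi:
            correction -= 2 * np.pi
        elif d < -np.pi:
            correction += 2 * np.pi
        out[i] = wrapped[i] + correction
    return out
```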

  1. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  2. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  3. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  4. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

    Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used: optical density scan, histogram equalized scan, linear histogram based scan, linear histogram independent scan, linear optical density scan, logarithmic scan, and power square root scan. The histogram distributions of the scanned images were plotted and the ranges of the gray scale were compared among the scan types. The scanned images were then transformed to the gray window by a palette fitting method, and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and processed by the Gray Scale Expansion (GSE) method. The patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distributions showed that minimum and maximum gray scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic method and the square root method, respectively. Out of 256 gray-scale steps, only 7 to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images was improved. Experiments performed with patient images showed that improved identification of organs was achieved by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improved the image contrast of a linacgram. This indicates that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale.
As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
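
    The gray-scale expansion at the heart of the GSE technique is, in essence, a linear contrast stretch: the occupied portion of the intensity range (7 to 30% of the available steps in the study above) is remapped to the full display range. A minimal sketch, with the output range as an assumed parameter rather than the paper's exact windowing:

```python
import numpy as np

def gray_scale_expand(img: np.ndarray, out_max: int = 255) -> np.ndarray:
    """Linear gray-scale expansion: stretch the occupied intensity range
    [min, max] of the image to the full display range [0, out_max]."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                                 # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    stretched = (img.astype(float) - lo) * (out_max / (hi - lo))
    return np.round(stretched).astype(np.uint8)
```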

  5. An astronomical murder?

    Science.gov (United States)

    Belenkiy, Ari

    2010-04-01

    Ari Belenkiy examines the murder of Hypatia of Alexandria, wondering whether problems with astronomical observations and the date of Easter led to her becoming a casualty of fifth-century political intrigue.

  6. The amateur astronomer

    CERN Document Server

    Moore, Patrick

    2006-01-01

    Introduces astronomy and amateur observing together. This edition includes photographs and illustrations. The comprehensive appendices provide hints and tips, as well as data for every aspect of amateur astronomy. This work is useful for amateur astronomers

  7. Image processing for medical diagnosis using CNN

    International Nuclear Information System (INIS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving accuracy both in diagnostic procedures and in surgical operations. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field real-time processing is very important, but image processing is often a quite time-consuming phase; therefore techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images

  8. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of the experimental results and analyses carried out on medical images of diabetes-related conditions. The experimental investigations were carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. It is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  9. Astronomers Unveiling Life's Cosmic Origins

    Science.gov (United States)

    2009-02-01

    Processes that laid the foundation for life on Earth -- star and planet formation and the production of complex organic molecules in interstellar space -- are yielding their secrets to astronomers armed with powerful new research tools, and even better tools soon will be available. Astronomers described three important developments at a symposium on the "Cosmic Cradle of Life" at the annual meeting of the American Association for the Advancement of Science in Chicago, IL. [Figure: The Cosmic Chemistry Cycle. Credit: Bill Saxton, NRAO/AUI/NSF] In one development, a team of astrochemists released a major new resource for seeking complex interstellar molecules that are the precursors to life. The chemical data released by Anthony Remijan of the National Radio Astronomy Observatory (NRAO) and his university colleagues is part of the Prebiotic Interstellar Molecule Survey, or PRIMOS, a project studying a star-forming region near the center of our Milky Way Galaxy. PRIMOS is an effort of the National Science Foundation's Center for Chemistry of the Universe, started at the University of Virginia (UVa) in October 2008, and led by UVa Professor Brooks H. Pate. The data, produced by the NSF's Robert C. Byrd Green Bank Telescope (GBT) in West Virginia, came from more than 45 individual observations totalling more than nine gigabytes of data and over 1.4 million individual frequency channels. Scientists can search the GBT data for specific radio frequencies, called spectral lines -- telltale "fingerprints" -- naturally emitted by molecules in interstellar space. "We've identified more than 720 spectral lines in this collection, and about 240 of those are from unknown molecules," Remijan said. He added, "We're making available to all scientists the best collection of data below 50 GHz ever produced for

  10. Organization of bubble chamber image processing

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1985-01-01

    A programme of bubble chamber image processing is described. The programme is written in FORTRAN for the DEC-10 computer and is designed for the operation of the semi-automatic processing-measurement projects PUOS-2 and PUOS-4. Formalization of the image processing permits its use in different physical experiments

  11. The Astro-WISE approach to quality control for astronomical data

    NARCIS (Netherlands)

    Mc Farland, John; Helmich, Ewout M.; Valentijn, Edwin A.

    We present a novel approach to quality control during the processing of astronomical data. Quality control in the Astro-WISE Information System is integral to all aspects of data handling and provides transparent access to quality estimators for all stages of data reduction from the raw image to the

  12. An Applied Image Processing for Radiographic Testing

    International Nuclear Information System (INIS)

    Ratchason, Surasak; Tuammee, Sopida; Srisroal, Anusara

    2005-10-01

    Applied image processing for radiographic testing (RT) is desirable because it reduces inspection time and cost (the inspection process otherwise requires experienced workers) and improves inspection quality. This paper presents a primary study of image processing for RT films, specifically weld films, and proposes an approach for determining defects in weld images. The BMP image files are opened and processed by a computer program written in Borland C++. The software provides five main methods: Histogram, Contrast Enhancement, Edge Detection, Image Segmentation and Image Restoration. Each main method has several sub-methods available as selectable options. The results showed that the software can effectively detect defects and that different methods suit different radiographic images. Furthermore, images improve further when two methods are combined

  13. Image processing on the image with pixel noise bits removed

    Science.gov (United States)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
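    The bit-splitting and edge-detection steps described above are easy to reproduce. The following Python/NumPy sketch (an illustration on a synthetic image, not the authors' code or data) zeroes the two least-significant "noise" bits of an 8-bit image and applies a Sobel operator, showing that a strong edge survives bit removal essentially unchanged:

```python
import numpy as np

def strip_noise_bits(img, n_bits):
    """Zero the n least-significant (noise) bits of an 8-bit image."""
    mask = 0xFF & ~((1 << n_bits) - 1)
    return img & mask

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator (valid region only)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic 8-bit image: a vertical edge plus noise confined to the 2 LSBs
rng = np.random.default_rng(0)
img = np.full((16, 16), 64, dtype=np.uint8)
img[:, 8:] = 192
noisy = img | rng.integers(0, 4, img.shape, dtype=np.uint8)

edges_noisy = sobel_magnitude(noisy)
edges_clean = sobel_magnitude(strip_noise_bits(noisy, 2))
```

    Because the signal here occupies only the high-order bits, masking the low bits recovers the noise-free image exactly, and the two edge maps agree to within the small Sobel response of the masked noise.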

  14. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality performance is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image-degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these image-quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the measured real-system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter derived from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
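    The core degradation step is a convolution of the rendered image with a Gaussian approximating the measured PSF. The sketch below (illustrative only; the sigma value is a hypothetical stand-in for a width derived from a slanted-edge MTF measurement) softens an ideal rendered step edge:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 2-D Gaussian kernel standing in for a measured system PSF."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def convolve2d_same(img, kernel):
    """Direct 'same'-size convolution with edge padding."""
    r = kernel.shape[0] // 2
    padded = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Degrade an ideal rendered step edge so it matches a softer "real" edge
rendered = np.zeros((32, 32))
rendered[:, 16:] = 1.0
matched = convolve2d_same(rendered, gaussian_kernel(sigma=1.2))
```

    The hard 0-to-1 transition becomes a smooth monotonic ramp whose width is set by sigma, which is exactly the behavior one tunes to match the real camera's edge profile.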

  15. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  16. Astronomical Instrumentation Systems Quality Management Planning: AISQMP (Abstract)

    Science.gov (United States)

    Goldbaum, J.

    2017-12-01

    (Abstract only) The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.

  17. Image processing in nondestructive testing

    International Nuclear Information System (INIS)

    Janney, D.H.

    1976-01-01

    In those applications where the principal desire is for higher throughput, the problem often becomes one of automatic feature extraction and mensuration. Classically these problems can be approached by means of either an optical image processor or an analysis in the digital computer. Optical methods have the advantages of low cost and very high speed, but are often inflexible and are sometimes very difficult to implement due to practical problems. Computerized methods can be very flexible, they can use very powerful mathematical techniques, but usually are difficult to implement for very high throughput. Recent technological developments in microprocessors and in electronic analog image analyzers may furnish the key to resolving the shortcomings of the two classical methods of image analysis

  18. How Digital Image Processing Became Really Easy

    Science.gov (United States)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.

  19. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  20. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... and spatial co-ordinates into discrete components. The mathematical concepts involved are sampling and transform theory. Two-dimensional transforms are used for image enhancement, restoration, encoding and description too. The main objective of the image...
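    As the abstract notes, two-dimensional transforms underpin enhancement and restoration. A minimal illustrative sketch of frequency-domain restoration with the 2-D FFT (not from the cited work; the cutoff radius is an arbitrary choice):

```python
import numpy as np

def fft_lowpass(img, cutoff):
    """Zero all spatial frequencies farther than `cutoff` bins from the
    center of the shifted 2-D spectrum, then transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h // 2, x - w // 2)
    F[dist > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Restoration demo: a slow horizontal wave buried in broadband noise
rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0, 2 * np.pi, 64))[None, :] * np.ones((64, 1))
noisy = smooth + 0.3 * rng.standard_normal((64, 64))
restored = fft_lowpass(noisy, cutoff=4)
```

    The low-frequency signal survives the cut while most of the noise power, which is spread uniformly over the spectrum, is discarded, so the mean squared error against the clean image drops sharply.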

  1. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is widely used in the medical field and has already become very important, especially for image reconstruction purposes. In this paper, it is shown that image processing can be classified into four categories: passive, active, intelligent, and visual image processing. These four classes are first explained through several examples, which show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Owing to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
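    The way simulated annealing lets formulated knowledge enter a reconstruction can be illustrated on a toy problem. The sketch below (an assumption-laden toy example, not the paper's 3D vessel reconstruction) anneals a binary image under an energy combining data fidelity with an Ising-style smoothness prior; all parameter values are arbitrary:

```python
import math
import random

def anneal_denoise(noisy, beta=0.4, t0=2.0, cooling=0.9, sweeps=120, seed=0):
    """Simulated annealing for a binary image: the energy trades fidelity to
    the observed data against a smoothness prior, so formulated prior
    knowledge enters the reconstruction directly."""
    rng = random.Random(seed)
    h, w = len(noisy), len(noisy[0])
    x = [row[:] for row in noisy]  # start from the observation

    def local_energy(i, j, v):
        e = 0.0 if v == noisy[i][j] else 1.0           # data fidelity term
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and x[ni][nj] != v:
                e += beta                               # smoothness term
        return e

    t = t0
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                old, new = x[i][j], 1 - x[i][j]
                d_e = local_energy(i, j, new) - local_energy(i, j, old)
                if d_e < 0 or rng.random() < math.exp(-d_e / t):
                    x[i][j] = new                       # Metropolis acceptance
        t *= cooling                                    # geometric cooling
    return x

# Toy demonstration: a two-region binary image with three flipped pixels
clean = [[0] * 4 + [1] * 4 for _ in range(8)]
noisy_img = [row[:] for row in clean]
for i, j in [(2, 1), (5, 6), (6, 2)]:
    noisy_img[i][j] = 1 - noisy_img[i][j]
restored = anneal_denoise(noisy_img)
```

    As the temperature falls, the accept rule degenerates into greedy descent, and with this weighting the only stable configuration is the clean image: isolated flips are outvoted by their neighbors plus the data term.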

  2. Atlas of Astronomical Discoveries

    CERN Document Server

    Schilling, Govert

    2011-01-01

    Four hundred years ago in Middelburg, in the Netherlands, the telescope was invented. The invention unleashed a revolution in the exploration of the universe. Galileo Galilei discovered mountains on the Moon, spots on the Sun, and moons around Jupiter. Christiaan Huygens saw details on Mars and rings around Saturn. William Herschel discovered a new planet and mapped binary stars and nebulae. Other astronomers determined the distances to stars, unraveled the structure of the Milky Way, and discovered the expansion of the universe. And, as telescopes became bigger and more powerful, astronomers delved deeper into the mysteries of the cosmos. In his Atlas of Astronomical Discoveries, astronomy journalist Govert Schilling tells the story of 400 years of telescopic astronomy. He looks at the 100 most important discoveries since the invention of the telescope. In his direct and accessible style, the author takes his readers on an exciting journey encompassing the highlights of four centuries of astronomy. Spectacul...

  3. The Red Rectangle: An Astronomical Example of Mach Bands?

    Science.gov (United States)

    Brecher, K.

    2005-12-01

    Recently, the Hubble Space Telescope (HST) produced spectacular images of the "Red Rectangle". This appears to be a binary star system undergoing recurrent mass loss episodes. The image-processed HST photographs display distinctive diagonal lightness enhancements. Some of the visual appearance undoubtedly arises from actual variations in the luminosity distribution of the light of the nebula itself, i.e., due to limb brightening. Psychophysical enhancement similar to the Vasarely or pyramid effect also seems to be involved in the visual impression conveyed by the HST images. This effect is related to Mach bands (as well as to the Chevreul and Craik-O'Brien-Cornsweet effects). The effect can be produced by stacking concentric squares (or other geometrical figures such as rectangles or hexagons) of linearly increasing or decreasing size and lightness, one on top of another. We have constructed controllable Flash applets of this effect as part of the NSF-supported "Project LITE: Light Inquiry Through Experiments". They can be found in the vision section of the LITE web site at http://lite.bu.edu. Mach band effects have previously been seen in medical x-ray images. Here we report for the first time the possibility that such effects play a role in the interpretation of astronomical images. Specifically, we examine to what extent the visual impressions of the Red Rectangle and other extended astronomical objects are purely physical (photometric) in origin and to what degree they are enhanced by psychophysical processes. To help assess the relative physical and psychophysical contributions to the perceived lightness effects, we have made use of a center-surround (Difference of Gaussians) filter we developed for MATLAB. We conclude that local (lateral inhibition) and longer range human visual perception effects probably do contribute to the lightness features seen in astronomical objects like the Red Rectangle. Project LITE is supported by NSF Grant # DUE-0125992.
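    A center-surround (Difference of Gaussians) filter of the kind the authors describe can be sketched as follows (illustrative Python/NumPy, not the authors' MATLAB code; the sigma values are arbitrary):

```python
import numpy as np

def dog_filter(img, sigma_c=1.0, sigma_s=3.0):
    """Center-surround (Difference of Gaussians) filtering, a rough model
    of lateral inhibition in early human vision."""
    def blur(im, sigma):
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        g = np.exp(-x ** 2 / (2 * sigma ** 2))
        g /= g.sum()
        smear = lambda v: np.convolve(np.pad(v, r, mode='edge'), g, 'valid')
        im = np.apply_along_axis(smear, 1, im)    # separable: rows...
        return np.apply_along_axis(smear, 0, im)  # ...then columns
    return blur(img, sigma_c) - blur(img, sigma_s)

# A luminance step: the DoG response overshoots on each side of the edge,
# the same kind of overshoot the visual system produces as Mach bands
img = np.zeros((32, 32))
img[:, 16:] = 1.0
resp = dog_filter(img)
```

    On a step in luminance the filter output is positive on the bright side of the edge and negative on the dark side while vanishing in uniform regions, which is how such a filter separates photometric gradients from perceptual (lateral-inhibition) enhancement.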

  4. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  5. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    Science.gov (United States)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With the expanding size and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the operating staff at an efficient level and to significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting, and the interchange of Line Replaceable Units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistical constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.

  6. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
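    One common way to gate automatic capture on image quality is a blur score such as the variance of the Laplacian. The sketch below illustrates that general idea only; it is not the authors' sensor-fusion method, and any capture threshold would be a tuned, application-specific value:

```python
import numpy as np

def sharpness(img):
    """Variance of the Laplacian, a standard focus/blur score."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img):
    """3x3 mean filter (valid region), simulating camera shake or defocus."""
    h, w = img.shape
    return sum(img[i:i + h - 2, j:j + w - 2]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(3)
frame = rng.random((32, 32))          # stand-in for a captured frame
score_sharp = sharpness(frame)
score_blurred = sharpness(box_blur(frame))
# an automated trigger would only fire once the score clears a tuned threshold
```

    Blurring suppresses the high-frequency content the Laplacian responds to, so the score of the defocused frame drops well below that of the sharp one, giving a simple numeric criterion for "image quality achieved, capture now".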

  7. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  8. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  9. On some applications of diffusion processes for image processing

    International Nuclear Information System (INIS)

    Morfu, S.

    2009-01-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes ruled by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurred image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
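    The Perona-Malik scheme referred to above can be sketched in a few lines. This is a generic textbook implementation with arbitrary parameter values, not the authors' combined algorithm:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, dt=0.2):
    """Anisotropic diffusion (Perona-Malik): the conductivity g(|grad u|)
    shuts diffusion off across strong edges while smoothing flat regions."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)            # edge-stopping function
    for _ in range(n_iter):
        # one-sided neighbor differences with zero flux at the borders
        dn = np.roll(u, 1, axis=0) - u
        dn[0, :] = 0
        ds = np.roll(u, -1, axis=0) - u
        ds[-1, :] = 0
        de = np.roll(u, -1, axis=1) - u
        de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u
        dw[:, 0] = 0
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# A noisy step edge: the noise is smoothed while the step itself survives
rng = np.random.default_rng(2)
step = np.zeros((32, 32))
step[:, 16:] = 1.0
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
```

    Small neighbor differences (noise) see a conductivity near 1 and diffuse away, while the unit-height step sees g(1/kappa) close to 0, so almost no flux crosses the edge and it is preserved.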

  10. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  11. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared during the past few years. A good many studies contribute to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes even less than 50 kilograms, making them slightly larger or smaller than common miniature refrigerators. However, the optical system design is hard to perfect due to satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super resolution techniques to deal with the spatial resolution shortcomings of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.

  12. ASURV: Astronomical SURVival Statistics

    Science.gov (United States)

    Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.

    2014-06-01

    ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.
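    The product-limit (Kaplan-Meier) estimator at the heart of ASURV is straightforward to sketch. The following is an illustrative Python re-implementation for right-censored data, not ASURV's FORTRAN code:

```python
def kaplan_meier(times, censored):
    """Product-limit (Kaplan-Meier) survival estimate for right-censored
    data; censored[i] is True when observation i is only a limit."""
    events = sorted(t for t, c in zip(times, censored) if not c)
    surv, s = [], 1.0
    for t in sorted(set(events)):
        at_risk = sum(1 for ti in times if ti >= t)  # still "at risk" at t
        d = events.count(t)                          # detections exactly at t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Five measurements; the third is censored (a limit, not a detection)
km = kaplan_meier([1, 2, 3, 4, 5], [False, False, True, False, False])
```

    The censored point contributes to the at-risk counts before it drops out but never triggers a step, which is exactly how limits enter the maximum-likelihood estimate; for this example the survival curve steps to 0.8, 0.6, 0.3, and 0.0 at times 1, 2, 4, and 5.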

  13. Korean Astronomical Calendar, Chiljeongsan

    Science.gov (United States)

    Lee, Eun Hee

    In fifteenth century Korea, there was a grand project for the astronomical calendar and instrument making by the order of King Sejong 世宗 (1418-1450). During this period, many astronomical and calendrical books including Islamic sources in Chinese versions were imported from Ming 明 China, and corrected and researched by the court astronomers of Joseon 朝鮮 (1392-1910). Moreover, the astronomers and technicians of Korea frequently visited China to study astronomy and instrument making, and they brought back useful information in the form of new published books or specifications of instruments. As a result, a royal observatory equipped with 15 types of instrument was completed in 1438. Two types of calendar, Chiljeongsan Naepyeon 七政算內篇 and Chiljeongsan Oepyeon 七政算外篇, based on the Chinese and Islamic calendar systems, respectively, were published in 1444 with a number of calendrical editions such as corrections and example supplements (假令) including calculation methods and results for solar and lunar eclipses.

  14. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana

    2007-01-01

    It is often noted that celebrity endorsement advertising has been recognized as "a ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable consumer reactions, that is, a higher level of attention to advertising messages, better recall of the message and the brand name, and more favorable evaluation and purchase intentions toward the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Therefore marketers, led by the belief that celebrities are effective ambassadors in building a positive brand or company image and in improving competitive position, invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, along with recommendations for a more effective use of this kind of advertising.

  15. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  16. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  17. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  18. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  19. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing software, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter further comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, dynamic-range compression, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied. Also some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
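    Two of the enhancement techniques mentioned, negative imaging and contrast stretching, can be sketched as follows (illustrative Python/NumPy, not the project's Visual Basic code; the percentile limits are arbitrary choices):

```python
import numpy as np

def negative(img):
    """Negative imaging: invert 8-bit gray levels."""
    return 255 - img

def contrast_stretch(img, lo=2, hi=98):
    """Linear contrast stretch between two histogram percentiles."""
    a, b = np.percentile(img, [lo, hi])
    out = (img.astype(float) - a) / max(b - a, 1e-9)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

# A low-contrast ramp (levels 100..140) stretched to the full 0..255 range
flat = np.linspace(100, 140, 256).reshape(16, 16).astype(np.uint8)
stretched = contrast_stretch(flat)
```

    Stretching between percentiles rather than the absolute min/max makes the mapping robust to a few outlier pixels, at the cost of clipping the extreme tails to 0 and 255.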

  20. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  1. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases with abnormal synovial processes. The abnormalities include Baker cysts, semimembranous bursitis, chronic shoulder bursitis, peroneal tendon ganglion cyst, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining the extent and, to a limited degree, characterizing its makeup

  2. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  3. Morphology and probability in image processing

    International Nuclear Information System (INIS)

    Fabbri, A.G.

    1985-01-01

    The author presents an analysis of some concepts which relate morphological attributes of digital objects to statistically meaningful measures. Some elementary transformations of binary images are described, and examples of applications are drawn from the geological and image analysis domains. Some of the morphological models applicable in astronomy are discussed. It is shown that the development of new spatially oriented computers leads to more extensive applications of image processing in the geosciences

  4. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. Image inversion can also be obtained without reconfiguring the network, whereas gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented
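    A minimal numerical sketch of the idea, assuming bistable node dynamics of the form dx/dt = -x(x - a)(x - 1) (the paper's actual oscillator model may differ):

```python
import numpy as np

def cnn_contrast(img, a=0.5, steps=400, dt=0.05):
    # Each uncoupled node evolves under the bistable dynamics
    # dx/dt = -x (x - a)(x - 1): pixel values below the threshold `a`
    # are driven toward 0 and values above it toward 1, so an
    # initially weak contrast is strongly enhanced.
    x = img.astype(float).copy()
    for _ in range(steps):
        x += dt * (-x * (x - a) * (x - 1.0))
    return x

weak = np.array([[0.45, 0.55],
                 [0.40, 0.60]])   # weakly contrasted image in [0, 1]
strong = cnn_contrast(weak)       # values pushed toward 0 or 1
```

After enough integration time the output is nearly binary, which is the contrast-enhancement behavior the abstract describes.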

  5. Powerful Radio Burst Indicates New Astronomical Phenomenon

    Science.gov (United States)

    2007-09-01

    Astronomers studying archival data from an Australian radio telescope have discovered a powerful, short-lived burst of radio waves that they say indicates an entirely new type of astronomical phenomenon. Region of Strong Radio Burst Visible-light (negative greyscale) and radio (contours) image of Small Magellanic Cloud and area where burst originated. CREDIT: Lorimer et al., NRAO/AUI/NSF Click on image for high-resolution file ( 114 KB) "This burst appears to have originated from the distant Universe and may have been produced by an exotic event such as the collision of two neutron stars or the death throes of an evaporating black hole," said Duncan Lorimer, Assistant Professor of Physics at West Virginia University (WVU) and the National Radio Astronomy Observatory (NRAO). The research team led by Lorimer consists of Matthew Bailes of Swinburne University in Australia, Maura McLaughlin of WVU and NRAO, David Narkevic of WVU, and Fronefield Crawford of Franklin and Marshall College in Lancaster, Pennsylvania. The astronomers announced their findings in the September 27 issue of the online journal Science Express. The startling discovery came as WVU undergraduate student David Narkevic re-analyzed data from observations of the Small Magellanic Cloud made by the 210-foot Parkes radio telescope in Australia. The data came from a survey of the Magellanic Clouds that included 480 hours of observations. "This survey had sought to discover new pulsars, and the data already had been searched for the type of pulsating signals they produce," Lorimer said. "We re-examined the data, looking for bursts that, unlike the usual ones from pulsars, are not periodic," he added. The survey had covered the Magellanic Clouds, a pair of small galaxies in orbit around our own Milky Way Galaxy. Some 200,000 light-years from Earth, the Magellanic Clouds are prominent features in the Southern sky. Ironically, the new discovery is not part of these galaxies, but rather is much more distant

  6. A gamma camera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

    A microcomputer-based gamma camera image processing system is introduced. Compared with other systems, its distinguishing feature is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller. Thus picture processing has been speeded up and the performance-to-cost ratio of the system raised

  7. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
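    The core/edge distinction at the heart of this classification can be sketched with a plain-numpy binary erosion (a simplified stand-in for the full perforated/edge/patch/core method):

```python
import numpy as np

def erode(fg):
    # Binary erosion with a 3x3 (8-neighbourhood) structuring element,
    # implemented with padded shifts.
    p = np.pad(fg, 1, constant_values=False)
    out = np.ones_like(fg, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + fg.shape[0],
                     1 + dx : 1 + dx + fg.shape[1]]
    return out

def classify(fg):
    # 'core' = foreground surviving erosion; 'edge' = remaining foreground
    # pixels touching background. (The published method also separates
    # 'perforated' and 'patch'; this sketch keeps only the two basic classes.)
    core = erode(fg)
    edge = fg & ~core
    return core, edge

fg = np.zeros((5, 5), dtype=bool)
fg[1:4, 1:4] = True            # a 3x3 foreground patch
core, edge = classify(fg)      # one core pixel ringed by edge pixels
```

On the 3x3 patch, only the centre pixel survives erosion (core); the eight surrounding foreground pixels are labelled edge.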

  8. Extracting meaning from astronomical telegrams

    Science.gov (United States)

    Graham, Matthew; Conwill, L.; Djorgovski, S. G.; Mahabal, A.; Donalek, C.; Drake, A.

    2011-01-01

    The rapidly emerging field of time domain astronomy is one of the most exciting and vibrant new research frontiers, ranging in scientific scope from studies of the Solar System to extreme relativistic astrophysics and cosmology. It is being enabled by a new generation of large synoptic digital sky surveys - LSST, Pan-STARRS, CRTS - that cover large areas of sky repeatedly, looking for transient objects and phenomena. One of the biggest challenges facing these surveys is the automated classification of transient events, a process that needs machine-processible astronomical knowledge. Semantic technologies enable the formal representation of concepts and relations within a particular domain. ATELs (http://www.astronomerstelegram.org) are a commonly-used means for reporting and commenting upon new astronomical observations of transient sources (supernovae, stellar outbursts, blazar flares, etc). However, they are loose and unstructured and employ scientific natural language for description: this makes automated processing of them - a necessity within the next decade with petascale data rates - a challenge. Nevertheless they represent a potentially rich corpus of information that could lead to new and valuable insights into transient phenomena. This project lies in the cutting-edge field of astrosemantics, a branch of astroinformatics, which applies semantic technologies to astronomy. The ATELs have been used to develop an appropriate concept scheme - a representation of the information they contain - for transient astronomy using aspects of natural language processing. We demonstrate that it is possible to infer the subject of an ATEL from the vocabulary used and to identify previously unassociated reports.
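    As a toy illustration of inferring a telegram's subject from its vocabulary (the keyword lists and classes below are invented for the example, not the project's actual concept scheme):

```python
# Hypothetical per-class keyword sets; a real system would derive these
# from a concept scheme built over the ATEL corpus.
VOCAB = {
    "supernova":        {"supernova", "sn", "explosion", "progenitor"},
    "blazar flare":     {"blazar", "flare", "gamma-ray", "jet"},
    "stellar outburst": {"nova", "outburst", "dwarf", "eruption"},
}

def infer_subject(text):
    # Score each candidate class by counting its keywords in the text.
    words = set(text.lower().replace(",", " ").split())
    scores = {c: len(words & kw) for c, kw in VOCAB.items()}
    return max(scores, key=scores.get)

atel = "We report a bright gamma-ray flare from the blazar 3C 279"
subject = infer_subject(atel)   # "blazar flare"
```

Even this bag-of-words baseline hints at why the vocabulary alone carries enough signal to classify a report's subject.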

  9. Visualizing Astronomical Data with Blender

    Science.gov (United States)

    Kent, Brian R.

    2014-01-01

    We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.

  10. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    č. 53 (2003), s. 44-45 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z1075907 Keywords : art conservation * digital image processing * change detection Subject RIV: JD - Computer Applications, Robotics

  11. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  12. Advanced Secure Optical Image Processing for Communications

    Science.gov (United States)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and longer processing times become necessary, and high image quality and transmission speeds are requested for an increasing number of applications. To satisfy these two demands, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the purpose of combining the advantages of both techniques. Additionally, pure numerical or optical solutions are also considered, since they emphasize the advantages of one of the two approaches separately.

  13. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  14. Fragmentation measurement using image processing

    Directory of Open Access Journals (Sweden)

    Farhang Sereshki

    2016-12-01

    Full Text Available In this research, first of all, the existing problems in fragmentation measurement are reviewed with a view to fast and reliable evaluation. Then, the available methods used for evaluation of blast results are mentioned. The errors produced in computer-aided methods, especially in recognizing the rock fragments, and the importance of determining their sizes in image analysis methods are described. After reviewing the previous work done, an algorithm is proposed for the automated determination of rock particle boundaries in the Matlab software. This method can automatically determine the particle boundaries in minimal time. The results of the proposed method are compared with those of the Split Desktop and GoldSize software in both automated and manual modes. Comparing the curves extracted by the different methods reveals that the proposed approach is accurately applicable in measuring the size distribution of laboratory samples, while the manual determination of boundaries in the conventional software is very time-consuming, and the results of automated netting of fragments differ greatly from the real values due to errors in separating the objects.
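    A gradient-magnitude edge map gives the flavor of automated fragment-boundary detection; this numpy sketch is only illustrative and is not the Matlab algorithm proposed in the paper:

```python
import numpy as np

def particle_boundaries(img, thresh):
    # Mark boundary pixels where the local intensity gradient is strong.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh

img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0                    # one bright "fragment"
edges = particle_boundaries(img, 0.25)  # True along the fragment outline
```

Real blast images need much more than this (illumination correction, watershed-style separation of touching fragments), which is exactly where the automated methods reviewed above differ.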

  15. On Processing Hexagonally Sampled Images

    Science.gov (United States)

    2011-07-01

    DISTRIBUTION A. Approved for public release, distribution unlimited. (96ABW-2011-0325). Neuromorphic Infrared Sensor (NIFS): focal-plane array specifications (chip size, focal plane resolution, pixel type and pitch, post-pixel circuitry, 12-bit command bus in two 6-bit words, 8-bit digital output with optional analog output).

  16. Processing of space images and geologic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Yudin, V S

    1981-01-01

    Using data from standard sections, a correlation was established between natural formations in geologic and geophysical dimensions and the form they take in the imagery. With computer processing, important data can be derived from the image. Use of the above correlations has made it possible to make a number of preliminary classifications of tectonic structures and to determine certain ongoing processes in the given section. The derived data may be used in the search for useful minerals.

  17. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance of shadow and enhance the robustness of computer visual image processing, this paper studies the detection and removal of image shadow. It examines continual shadow removal algorithms based on integration, on the illumination surface, and on texture, introduces their working principles and realization methods, and shows through tests that shadow can be processed effectively.

  18. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    The research is done to provide an appropriate detection technique for skin tumor detection. The work is done using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells. It is a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is the fundamental problem in medical image analysis. The study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The study shows that little work has been done on the cellular scale for images of skin. This research arrives at a few checks for the early detection of skin tumor using microscopic images, after testing and observing various algorithms. After analytical evaluation it has been observed that the proposed checks are time-efficient techniques appropriate for tumor detection. The algorithm applied provides promising results in less time with accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly. (author)
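    A hypothetical, much-simplified version of such a check (global-mean thresholding on the microscopic image; the paper's actual MATLAB checks are more elaborate):

```python
import numpy as np

def segment_abnormal(img):
    # Flag pixels brighter than the global mean as candidate abnormal
    # cells and report what fraction of the image they cover.
    mask = img > img.mean()
    return mask, mask.mean()

cells = np.array([[10, 12, 11],
                  [13, 90, 95],
                  [12, 92, 11]], dtype=float)  # toy microscopic intensities
mask, frac = segment_abnormal(cells)           # 3 of 9 pixels flagged
```

The covered fraction could then feed a simple early-warning rule; a deployable system would of course need calibrated thresholds and morphology-aware features.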

  19. Astronomical Signatures of Dark Matter

    Directory of Open Access Journals (Sweden)

    Paul Gorenstein

    2014-01-01

    Full Text Available Several independent astronomical observations in different wavelength bands reveal the existence of much larger quantities of matter than we would deduce from assuming a solar mass-to-light ratio. These include very high velocities of individual galaxies within clusters of galaxies, higher-than-expected rotation rates of stars in the outer regions of galaxies, 21 cm line studies indicative of increasing mass-to-light ratios with radius in the halos of spiral galaxies, hot gaseous X-ray emitting halos around many elliptical galaxies, and clusters of galaxies requiring a much larger component of unseen mass for the hot gas to be bound. The level of gravitational attraction needed for the spatial distribution of galaxies to evolve from the small perturbations implied by the very slightly anisotropic cosmic microwave background radiation to its current web-like configuration requires much more mass than is observed across the entire electromagnetic spectrum. Distorted shapes of galaxies and other features created by gravitational lensing in the images of many astronomical objects require an amount of dark matter consistent with other estimates. The unambiguous detection of dark matter and, more recently, evidence for dark energy have positioned astronomy at the frontier of fundamental physics as it was in the 17th century.

  20. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, with its adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of the one minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform that make it possible to reconstruct an image of the perceived CPs. Then this criterion is compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real signature test target, and conventional methods for the more linear part (displaying).
The application to
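    The basic CP perception rule, one minority-value pixel inside a 2×2 block, can be sketched as follows (an illustrative reading of the criterion, not the authors' evaluation code):

```python
import numpy as np

def corner_points(img):
    # Scan non-overlapping 2x2 blocks of a binary image; a block defines
    # a corner point when exactly one pixel holds the minority value,
    # and the CP "direction" is that pixel's position.
    h, w = img.shape
    cps = []
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            block = img[i:i + 2, j:j + 2]
            s = int(block.sum())
            if s in (1, 3):                    # exactly one minority pixel
                minority = 1 if s == 1 else 0
                di, dj = np.argwhere(block == minority)[0]
                cps.append((int(i + di), int(j + dj)))
    return cps

img = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=np.uint8)
cps = corner_points(img)   # the single minority pixel sits at (1, 1)
```

Uniform blocks (all 0s or all 1s) and balanced blocks (two of each) carry no CP, so only genuine corners of the binary pattern survive, which is what makes the count usable as a resolution measure.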

  1. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences.

  2. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without any care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  3. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measurements of flow patterns occurring in natural circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tapes. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behaviors of various eddies, with low velocity and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, which were obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
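    The centroid-to-velocity step can be sketched with a nearest-neighbour match between two frames (an assumed simplification; the paper analyzes full pathlines rather than single frame pairs):

```python
import numpy as np

def velocity_vectors(pts0, pts1, dt):
    # Match each particle centroid in frame 0 to its nearest centroid in
    # frame 1 and return displacement / dt as the velocity vector.
    pts0 = np.asarray(pts0, dtype=float)
    pts1 = np.asarray(pts1, dtype=float)
    d = np.linalg.norm(pts0[:, None, :] - pts1[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return (pts1[nearest] - pts0) / dt

# Two particles tracked across one video-frame interval of 0.5 s.
v = velocity_vectors([[0, 0], [10, 10]], [[1, 0], [10, 12]], dt=0.5)
```

Nearest-neighbour matching is only reliable when displacements are small compared with inter-particle spacing, which is why slow natural-circulation flows suit this kind of measurement.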

  4. Image processing for HTS SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-01-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques
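    Two of the artifact-removal steps described, line-drift removal and FFT-based suppression of periodic interference, can be sketched in numpy (simplified stand-ins for the techniques in the record):

```python
import numpy as np

def remove_line_drift(img):
    # Remove scan-line drift by shifting every line's mean to the
    # global mean (interpolating out the per-line offset).
    line_means = img.mean(axis=1, keepdims=True)
    return img - line_means + img.mean()

def notch_filter_line(line, kill):
    # Suppress periodic pickup (e.g. power-line interference) by zeroing
    # the offending peak(s) in the line scan's FFT spectrum.
    spec = np.fft.rfft(line)
    spec[kill] = 0.0
    return np.fft.irfft(spec, n=len(line))

img = np.array([[1.0, 1.0, 1.0],
                [5.0, 5.0, 5.0]])   # pure line-to-line drift, no signal
flat = remove_line_drift(img)       # drift removed, uniform field left
```

Zeroing a single FFT bin removes a pure sinusoidal interference component exactly; real data also needs the flux-trap line interpolation mentioned above.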

  5. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images and extracts its features with the new shearlet transform.

  6. Enthusiastic Little Astronomers

    Science.gov (United States)

    Novak, Ines

    2016-04-01

    Younger primary school students often show great interest in the vast Universe hiding behind the starry night's sky, but don't have a way of learning about it and exploring it in regular classes. Some of them would search children's books, Internet or encyclopedias for information or facts they are interested in, but there are those whose hunger for knowledge would go unfulfilled. Such students were the real initiators of our extracurricular activity called Little Astronomers. With great enthusiasm they would name everything that interests them about the Universe that we live in and I would provide the information in a fun and interactive yet acceptable way for their level of understanding. In our class we learn about Earth and its place in the Solar System, we learn about the planets and other objects of our Solar System and about the Sun itself. We also explore the night sky using programs such as Stellarium, learning to recognize constellations and name them. Most of our activities are done using a PowerPoint presentation, YouTube videos, and Internet simulations followed by some practical work the students do themselves. Because of the lack of available materials and funds, most of materials are hand made by the teacher leading the class. We also use the school's galileoscope as often as possible. Every year the students are given the opportunity to go to an observatory in a town 90 km away so that they could gaze at the sky through the real telescope for the first time. Our goal is to start stepping into the world of astronomy by exploring the secrets of the Universe and understanding the process of rotation and revolution of our planet and its effects on our everyday lives and also to become more aware of our own role in our part of the Universe. The hunger for knowledge and enthusiasm these students have is contagious. 
They are becoming more aware of their surroundings and also understanding their place in the Universe that helps them remain humble and helps

  7. South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1987-01-01

    Work at the South African Astronomical Observatory (SAAO) in recent years, by both staff and visitors, has made major contributions to the fields of astrophysics and astronomy. During 1986 the SAAO has been involved in studies of the following: galaxies; celestial x-ray sources; magellanic clouds; pulsating variables; galactic structure; binary star phenomena; nebulae and interstellar matter; stellar astrophysics; open clusters; globular clusters, and solar systems

  8. Astronomical Research Using Virtual Observatories

    Directory of Open Access Journals (Sweden)

    M Tanaka

    2010-01-01

    Full Text Available The Virtual Observatory (VO for Astronomy is a framework that empowers astronomical research by providing standard methods to find, access, and utilize astronomical data archives distributed around the world. VO projects in the world have been strenuously developing VO software tools and/or portal systems. Interoperability among VO projects has been achieved with the VO standard protocols defined by the International Virtual Observatory Alliance (IVOA. As a result, VO technologies are now used in obtaining astronomical research results from a huge amount of data. We describe typical examples of astronomical research enabled by the astronomical VO, and describe how the VO technologies are used in the research.

  9. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    One architecture for a high-speed image processing system corresponding to a new algorithm for shape understanding is proposed, and a hardware system based on the architecture was developed. The main design considerations are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  10. Old Star's "Rebirth" Gives Astronomers Surprises

    Science.gov (United States)

    2005-04-01

    Astronomers using the National Science Foundation's Very Large Array (VLA) radio telescope are taking advantage of a once-in-a-lifetime opportunity to watch an old star suddenly stir back into new activity after coming to the end of its normal life. Their surprising results have forced them to change their ideas of how such an old, white dwarf star can re-ignite its nuclear furnace for one final blast of energy. Sakurai's Object Radio/Optical Images of Sakurai's Object: Color image shows nebula ejected thousands of years ago. Contours indicate radio emission. Inset is Hubble Space Telescope image, with contours indicating radio emission; this inset shows just the central part of the region. CREDIT: Hajduk et al., NRAO/AUI/NSF, ESO, StSci, NASA Computer simulations had predicted a series of events that would follow such a re-ignition of fusion reactions, but the star didn't follow the script -- events moved 100 times more quickly than the simulations predicted. "We've now produced a new theoretical model of how this process works, and the VLA observations have provided the first evidence supporting our new model," said Albert Zijlstra, of the University of Manchester in the United Kingdom. Zijlstra and his colleagues presented their findings in the April 8 issue of the journal Science. The astronomers studied a star known as V4334 Sgr, in the constellation Sagittarius. It is better known as "Sakurai's Object," after Japanese amateur astronomer Yukio Sakurai, who discovered it on February 20, 1996, when it suddenly burst into new brightness. At first, astronomers thought the outburst was a common nova explosion, but further study showed that Sakurai's Object was anything but common. The star is an old white dwarf that had run out of hydrogen fuel for nuclear fusion reactions in its core. Astronomers believe that some such stars can undergo a final burst of fusion in a shell of helium that surrounds a core of heavier nuclei such as carbon and oxygen. However, the

  11. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold reduces speckle effectively, but it also blurs the object of interest. The second fold therefore restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a marked improvement in visual quality with the proposed twofold processing: the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
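The hard and soft thresholding at the core of the first fold can be sketched in a few lines of Python. This is a minimal illustration on a plain list of wavelet coefficients; the adaptive blockwise fusion of ANBF is not reproduced here, and the function names are ours:

```python
def hard_threshold(coeffs, t):
    # Hard thresholding: zero out coefficients with magnitude below t,
    # keep the rest unchanged.
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # Soft thresholding: shrink every coefficient toward zero by t.
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

coeffs = [3.0, -0.5, 2.0]
print(hard_threshold(coeffs, 1.0))  # [3.0, 0.0, 2.0]
print(soft_threshold(coeffs, 1.0))  # [2.0, -0.0, 1.0]
```

In the paper these operations are applied blockwise to wavelet-domain coefficients (obtained, e.g., with a wavelet library), not to raw pixels.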

  12. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  13. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    Image processing applications' requirements can be best met by using a distributed environment. This report describes a system that utilizes existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing, to make it truly system-independent. Although the environment has been tested using image processing applications, its design and architecture are general and modular, so it can also be used for other applications that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the task among the workers, who carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on an object in another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their use for resource-hungry jobs. The image processing application was developed under this environment.
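The report's system is written in Java RMI, which is not shown here. As a rough stdlib-Python analogue of the same server/worker pattern, the sketch below exposes a toy image operation over XML-RPC and invokes it remotely; all names, the port handling, and the invert operation are illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def invert(row):
    # A toy "image operation": invert 8-bit pixel values in one row.
    return [255 - p for p in row]

# Worker side: register the operation and serve on an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(invert, "invert")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Coordinator side: invoke the remote method on the worker.
worker = ServerProxy(f"http://127.0.0.1:{port}")
result = worker.invert([0, 128, 255])
print(result)  # [255, 127, 0]
server.shutdown()
```

Java RMI additionally marshals object references and stubs automatically; XML-RPC only passes plain values, which is enough to show the remote-invocation idea.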

  14. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and disease detection. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cases are recognized from the extracted data. To evaluate the performance of the proposed system, microarray images of breast cancer, myeloid leukemia and lymphoma from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  16. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper surveys work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. Reference images of the street are compared with captured images in order to determine the traffic level percentage and to set the timing of the traffic signal accordingly, reducing stoppage time at traffic lights. The concept addresses real-life scenarios in the streets by enriching traffic lights with image receivers such as HD cameras and image processors. The captured input is imported into MATLAB and used to estimate the traffic on the road, and the results are used to adjust the traffic light timings on a particular street. Compared with similar proposals, this work adds the value of solving a real, large-scale instance.
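A highly simplified version of the underlying idea, comparing a captured frame against a reference image of the empty road and taking the fraction of changed pixels as the traffic level percentage, can be sketched as follows. The threshold and the toy images are illustrative; the paper's actual MATLAB pipeline is not reproduced:

```python
def traffic_percentage(reference, current, threshold=30):
    # Count pixels that differ noticeably from the empty-road reference;
    # the fraction of changed pixels approximates road occupancy.
    changed = total = 0
    for ref_row, cur_row in zip(reference, current):
        for r, c in zip(ref_row, cur_row):
            total += 1
            if abs(r - c) > threshold:
                changed += 1
    return 100.0 * changed / total

empty = [[100, 100], [100, 100]]   # reference frame (empty road)
busy  = [[100, 200], [180, 100]]   # current frame (two "vehicle" pixels)
print(traffic_percentage(empty, busy))  # 50.0
```

The resulting percentage would then drive the green-light duration for that street.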

  17. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  18. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  19. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  20. Image processing of angiograms: A pilot study

    Science.gov (United States)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  1. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system, the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, must be considered to ensure a good-quality image that can then be used for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of it. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, inspections are usually done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the cost of PCB fabrication, it is uneconomical to simply discard defective PCBs; defects should therefore be identified before the etching process so that the PCB can be reprocessed. In this paper, a method to identify defects in natural PCB images is presented and the associated practical issues are addressed using software tools. Some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short, and nick. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
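One standard reference-comparison scheme for bare-board inspection, not necessarily the authors' exact algorithm, binarizes a defect-free template and the test image and flags every differing pixel via XOR; the tiny arrays and the threshold below are illustrative:

```python
def binarize(img, t=128):
    # Threshold grey-level pixels into copper (1) / substrate (0).
    return [[1 if p >= t else 0 for p in row] for row in img]

def defect_map(template, test, t=128):
    # Pixels where the binarized test board differs from the reference
    # template are flagged as potential defects (opens, shorts, pin holes).
    ref, tst = binarize(template, t), binarize(test, t)
    return [[a ^ b for a, b in zip(r1, r2)] for r1, r2 in zip(ref, tst)]

template = [[200, 200, 0], [0, 200, 0]]
test     = [[200,   0, 0], [0, 200, 0]]   # one trace pixel missing (an open)
print(defect_map(template, test))  # [[0, 1, 0], [0, 0, 0]]
```

Classifying the flagged regions into defect types (pattern cut vs. pin hole, etc.) requires further shape analysis beyond this sketch.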

  2. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large but coherent set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridge

  3. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. The images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process digital holograms for Internet transmission, and results.
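As a concrete illustration of phase-shift interferometry, the classic four-step formula recovers the phase from four intensity frames recorded with reference shifts of 0, pi/2, pi and 3*pi/2. This is the textbook formula, not necessarily the exact variant used by the authors, and the synthetic frames are ours:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    # Four-step phase-shifting interferometry: with reference shifts of
    # 0, pi/2, pi and 3*pi/2, the wrapped phase is
    #   phi = atan2(I4 - I2, I1 - I3).
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic check: background A, modulation B, true phase phi.
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + d)
          for d in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
print(round(four_step_phase(*frames), 6))  # 0.7
```

Applied per pixel over four camera frames, this yields the wrapped phase map of the digital hologram; unwrapping and propagation are further steps not shown here.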

  4. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    This paper describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is a just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment data sets validate each other. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our post-processing method, image quality is better when the camera MTF lies within a small range.

  5. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  6. On astronomical drawing [1846

    Science.gov (United States)

    Smyth, Charles Piazzi

    Reprinted from the Memoirs of the Royal Astronomical Society 15, 1846, pp. 71-82. With annotations and illustrations added by Klaus Hentschel. The activities of the Astronomer Royal for Scotland, Charles Piazzi Smyth (1819-1900), include the triangulation of South African districts, landscape painting, day-to-day or tourist sketching, the engraving and lithographing of prominent architectural sites, the documentary photography of the Egyptian pyramids or the Tenerife Dragon tree, and `instant photographs' of the clouds above his retirement home in Clova, Ripon. His colorful records of the aurora polaris, and solar and terrestrial spectra all profited from his trained eye and his subtle mastery of the pen and the brush. As his paper on astronomical drawing, which we chose to reproduce in this volume, amply demonstrates, he was conversant in most of the print technology repertoire that the 19th century had to offer, and carefully selected the one most appropriate to each sujet. For instance, he chose mezzotint for the plates illustrating Maclear's observations of Halley's comet in 1835/36, so as to achieve a ``rich profundity of shadows, the deep obscurity of which is admirably adapted to reproduce those fine effects of chiaroscuro frequently found in works where the quantity of dark greatly predominates.'' The same expertise with which he tried to emulate Rembrandt's chiaroscuro effects he applied to assessing William and John Herschel's illustrations of nebulae, which appeared in print between 1811 and 1834. William Herschel's positive engraving, made partly by stippling and partly by a coarse mezzotint, receives sharp admonishment because of the visible ruled crossed lines in the background and the fact that ``the objects, which are also generally too light, [have] a much better definition than they really possess.'' On the other hand, John Herschel's illustration of nebulae and star clusters, given in negative, ``in which the lights are the darkest part of the

  7. Astronomical Instruments in India

    Science.gov (United States)

    Sarma, Sreeramula Rajeswara

    The earliest astronomical instruments used in India were the gnomon and the water clock. In the early seventh century, Brahmagupta described ten types of instruments, which were adopted by all subsequent writers with minor modifications. Contact with Islamic astronomy in the second millennium AD led to a radical change. Sanskrit texts began to lay emphasis on the importance of observational instruments. Exclusive texts on instruments were composed. Islamic instruments like the astrolabe were adopted and some new types of instruments were developed. Production and use of these traditional instruments continued, along with the cultivation of traditional astronomy, up to the end of the nineteenth century.

  8. IAU Public Astronomical Organisations Network

    Science.gov (United States)

    Canas, Lina; Cheung, Sze Leung

    2015-08-01

    The Office for Astronomy Outreach has devoted substantial resources to creating and supporting a global network of public astronomical organisations around the world. The network focuses on bringing established and newly formed amateur astronomy organisations together, providing communication channels and platforms for disseminating news to the global community, and sharing best practices and resources among these associations around the world. Given the importance these organisations have for disseminating activities globally and for acting as key participants in various IAU campaigns, social media has played a key role in keeping this network engaged and connected. Here we discuss the process of maintaining this extensive network, the gathering and processing of information, and the interactions between local active members at the national and international level.

  9. Image processing of early gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Umeda, Tokuo; Inamura, Kiyonari

    1992-01-01

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins while suppressing other structures. High-pass filtering with unsharp masking was superior for visualizing the texture pattern of the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions. (author)
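The Sobel edge enhancement mentioned above can be sketched as a plain-Python gradient-magnitude filter. This is the standard 3x3 operator, illustrative only; clinical systems would use optimized library code:

```python
def sobel_magnitude(img):
    # Approximate the gradient magnitude with the 3x3 Sobel operator;
    # border pixels are left at zero for simplicity.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical edge: left half dark, right half bright.
img = [[0, 0, 255, 255]] * 4
edges = sobel_magnitude(img)
```

Linear and curved lesion margins show up as high-magnitude ridges in the output, which is why the operator enhances margins while suppressing flat regions.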

  10. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques, together with fuzzy logic and a neural network approach, for flame analysis. Flame diagnostics is important in industry for extracting relevant information from flame images. Experiments are carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT); flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame, and flame velocity is one of the important characteristics that determines stability. In this paper, an image processing method is proposed to determine flame velocity. The power spectral density (PSD) is a good tool for vibration analysis, from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. Flame features at different flow rates are therefore compared and analyzed, and the selected features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
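The PSD estimation step can be illustrated with a naive discrete Fourier transform of a luminosity trace; a real system would use an FFT, and the synthetic trace below is ours:

```python
import cmath
import math

def power_spectrum(signal):
    # Naive DFT followed by |X(k)|^2 / N for the non-negative frequencies;
    # peaks in the spectrum reveal dominant oscillation frequencies
    # (e.g. thermo-acoustic modes) of the flame signal.
    n = len(signal)
    psd = []
    for k in range(n // 2 + 1):
        x = sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(signal))
        psd.append(abs(x) ** 2 / n)
    return psd

# A luminosity trace oscillating 4 times over 32 samples peaks in bin 4.
trace = [math.sin(2 * math.pi * 4 * i / 32) for i in range(32)]
psd = power_spectrum(trace)
peak_bin = max(range(len(psd)), key=psd.__getitem__)
print(peak_bin)  # 4
```

In the paper the height and sharpness of such peaks feed the fuzzy inference system as stability indicators.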

  11. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

    The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by discussing the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explanatory possibilities of Tversky's notion of diagrammatic space are explored as an element necessary to understand the genesis of graphic behaviour and to define a new construct, graphic intelligence.

  12. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor, but also an integrated video image that becomes gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs per millimetre (lp/mm). The integrated image is transferred to a personal computer, where image processing is performed qualitatively and quantitatively. (author)

  13. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also produces a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
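The difference-of-Gaussians spread function that the lateral-inhibition view predicts is easy to demonstrate in one dimension; the kernel sizes and sigmas below are illustrative, not the authors' model:

```python
import math

def dog_kernel(size, sigma_c, sigma_s):
    # 1-D difference-of-Gaussians ("Mexican hat") weights: a narrow positive
    # center Gaussian minus a broader negative surround Gaussian.
    def g(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    half = size // 2
    return [g(i - half, sigma_c) - g(i - half, sigma_s) for i in range(size)]

def convolve(signal, kernel):
    # 'Same'-size 1-D convolution with zero padding at the borders.
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += signal[j] * w
        out.append(acc)
    return out

# A step edge produces the characteristic overshoot/undershoot response
# (Mach bands) associated with lateral inhibition.
edge = [0.0] * 8 + [1.0] * 8
response = convolve(edge, dog_kernel(7, 1.0, 2.0))
```

Unsharp masking is the photographic analogue: subtracting a blurred copy of the image is equivalent to convolving with such a center-surround kernel.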

  14. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  15. Post-processing of digital images.

    Science.gov (United States)

    Perrone, Luca; Politi, Marco; Foschi, Raffaella; Masini, Valentina; Reale, Francesca; Costantini, Alessandro Maria; Marano, Pasquale

    2003-01-01

    Post-processing of bi- and three-dimensional images plays a major role for clinicians and surgeons in both diagnosis and therapy. The new spiral (single- and multislice) CT and MRI machines have allowed better image quality. With the associated development of hardware and software, post-processing has become indispensable in many radiologic applications in order to address precise clinical questions. In particular, in CT the acquisition technique is fundamental and should be targeted and optimized to obtain good image reconstruction. Multiplanar reconstructions ensure simple, immediate display of sections along different planes. Three-dimensional reconstructions include numerous procedures: projection techniques such as maximum intensity projection (MIP); surface rendering techniques such as shaded surface display (SSD); volume techniques such as volume rendering; and techniques of virtual endoscopy. In surgery, computer-aided techniques such as the neuronavigator, which uses information provided by neuroimaging to help the neurosurgeon simulate and perform the operation, are extremely interesting.
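Of the 3D techniques listed, maximum intensity projection is the simplest to state: each output pixel is the brightest voxel along the viewing ray. A minimal sketch over a tiny two-slice volume (axis choice and data are illustrative):

```python
def mip(volume):
    # Maximum intensity projection along the slice axis: each output pixel
    # is the brightest voxel along the viewing direction.
    h, w = len(volume[0]), len(volume[0][0])
    return [[max(sl[y][x] for sl in volume) for x in range(w)]
            for y in range(h)]

# Two 2x2 slices of a toy volume (e.g. contrast-filled vessels are bright).
volume = [
    [[10, 20], [30, 40]],
    [[50,  5], [25, 60]],
]
print(mip(volume))  # [[50, 20], [30, 60]]
```

This is why MIP is favored for CT/MR angiography: bright, contrast-filled vessels survive the projection while darker background tissue is discarded.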

  16. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor

    2016-01-01

    Full Text Available The method of speckle pattern processing based on digital image correlation is tested in the current work. The three most widely used formulas for the correlation coefficient are compared. To determine the accuracy of the speckle pattern processing, test speckle patterns with a known displacement are used. The optimal size of the speckle pattern template used to determine the correlation, and hence the speckle pattern displacement, is also considered.
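The integer-pixel displacement search that underlies this kind of processing can be sketched in NumPy. This is an illustration, not the paper's code: the 16x16 template size, the exhaustive search strategy, and the zero-normalized form of the correlation coefficient are assumptions for the example.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(1)
scene = rng.uniform(size=(60, 60))          # synthetic speckle field
true_dy, true_dx = 3, 5                     # known shift we try to recover

template = scene[20:36, 20:36]              # 16x16 template from the reference image
deformed = np.roll(scene, (true_dy, true_dx), axis=(0, 1))  # "deformed" image

# Exhaustive search: slide the template over the deformed image and keep
# the location with the highest correlation coefficient.
best, best_pos = -2.0, None
for y in range(deformed.shape[0] - 16):
    for x in range(deformed.shape[1] - 16):
        c = zncc(template, deformed[y:y+16, x:x+16])
        if c > best:
            best, best_pos = c, (y, x)

dy, dx = best_pos[0] - 20, best_pos[1] - 20
assert (dy, dx) == (true_dy, true_dx)       # displacement recovered exactly
```

Sub-pixel accuracy, which real DIC systems need, would add interpolation around the correlation peak; the template-size trade-off the abstract mentions shows up here directly, since smaller templates correlate faster but match less reliably.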

  17. Grigor Narekatsi's astronomical insights

    Science.gov (United States)

    Poghosyan, Samvel

    2015-07-01

    What stands out in the solid system of Gr. Narekatsi's naturalistic views are his astronomical insights on the material nature of light, its high speed, and the Sun being composed of "material air". Especially surprising and fascinating are his views on stars and their clusters. What astronomers, including the great Armenian academician V. Ambartsumian (scattering of stellar associations), would understand and prove with much difficulty a thousand years later, Narekatsi predicted in the 10th century: "Stars appear and disappear untimely", "You who gather and scatter the speechless constellations, like a flock of sheep". Gr. Narekatsi's reformative views were manifested in all the spheres of 10th-century social life; he was a reformer of church life, a great constructor of language, an innovator in literature and music, and a freethinker in philosophy and science. His ideology is the reflection of the 10th-century Armenian Renaissance. During the 9th-10th centuries, great masses of Armenians, forced to migrate to the Balkans, took with them and spread these reformative ideas. The forefather of western science, which originated in the period of the Reformation, is considered to be the great philosopher Nicholas of Cusa. The study of Gr. Narekatsi's logic and naturalistic views enables us to claim that Gr. Narekatsi is the great grandfather of European science.

  18. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner structure and material distributions of various samples. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam-formatting assembly, and a plane position-sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two-dimensional image. This image contains information about the composition and structure of the sample interior, resulting from the interaction of the neutrons with the penetrated matter. Due to rapid developments in detector and computer technology, as well as in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the-art digital imaging device suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten and, furthermore, to identify and develop two- and three-dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and good linearity. The detector output should be inherently digitized. The key components of the detector system, selected on the basis of these requirements, consist of a neutron-sensitive scintillator screen, a CCD camera and a mirror to reflect the light emitted by the scintillator to the CCD camera. This detector design makes it possible to place the camera outside the direct neutron beam. The whole assembly is placed in a light-shielded aluminum box. The camera is controlled by a

  20. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).
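As a taste of the metaheuristics this book surveys, a minimal particle swarm optimizer fits in a few lines. This is an illustrative toy, not code from the book; the inertia weight and acceleration constants are conventional textbook choices, and the sphere function is a standard benchmark, not an image processing objective.

```python
import numpy as np

# Minimal particle swarm optimization minimizing f(x) = sum(x^2) in 2-D.
rng = np.random.default_rng(42)
f = lambda x: (x ** 2).sum(axis=1)

n, dim, iters = 30, 2, 200
pos = rng.uniform(-5, 5, (n, dim))          # particle positions
vel = np.zeros((n, dim))                    # particle velocities
pbest, pbest_val = pos.copy(), f(pos)       # personal bests
gbest = pbest[pbest_val.argmin()].copy()    # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # inertia + cognitive pull (own best) + social pull (swarm best)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = f(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

assert f(gbest[None])[0] < 1e-3   # swarm has converged near the origin
```

The same loop structure, with an image quality metric as `f`, is how such metaheuristics get applied to the signal and image processing problems the book treats.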

  1. Process for making lyophilized radiographic imaging kit

    International Nuclear Information System (INIS)

    Grogg, T.W.; Bates, P.E.; Bugaj, J.E.

    1985-01-01

    A process for making a lyophilized composition useful for skeletal imaging whereby an aqueous solution containing an ascorbate, gentisate, or reductate stabilizer is contacted with tin metal or an alloy containing tin and, thereafter, lyophilized. Preferably, such compositions also comprise a tissue-specific carrier and a stannous compound. It is particularly preferred to incorporate stannous oxide as a coating on the tin metal

  2. Getting Astronomers Involved in the IYA: Astronomer in the Classroom

    Science.gov (United States)

    Koenig, Kris

    2008-05-01

    The Astronomer in the Classroom program provides professional astronomers the opportunity to engage with 3rd-12th grade students across the nation in grade-appropriate discussions of their recent research, and provides students with rich STEM content in a personalized forum, bringing greater access to scientific knowledge for underserved populations. 21st Century Learning and Interstellar Studios, the producer of the 400 Years of the Telescope documentary, along with their educational partners, will provide the resources necessary to facilitate the Astronomer in the Classroom program, allowing students to interact with astronomers throughout the IYA2009. PROGRAM DESCRIPTION: One of hundreds of astronomers will be available to interact with students via live webcast daily during Spring/Fall 2009. The astronomer for the day will conduct three 20-minute discussions (Grades 3-5/6-8/9-12), beginning with a five-minute PowerPoint on their research or area of interest. The discussion will be followed by a question and answer period. The students will participate in real time from their school computer(s) with the technology provided by 21st Century Learning. They will see and hear the astronomer on their screen, and pose questions from their keyboard. Teachers will choose from three daily sessions: 11:30 a.m., 12:00 p.m., and 12:30 p.m. Eastern Time. This schedule overlaps all US time zones and minimizes bandwidth usage, preventing technological barriers to web access. The educational partners and astronomers will post materials online, providing easy access to information that will prepare teachers and students for the chosen discussion. The astronomers, invited to participate from the AAS and IAU, will receive a web cam shipment with instructions, a brief training and connectivity test, and prepaid postage for shipment of the web cam to the next astronomer on the list. The anticipated astronomer time required is 3 hours, not including the time to develop the PowerPoint.

  3. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  4. Early results from the Infrared Astronomical Satellite

    International Nuclear Information System (INIS)

    Neugebauer, G.; Beichman, C.A.; Soifer, B.T.

    1984-01-01

    For 10 months the Infrared Astronomical Satellite (IRAS) provided astronomers with what might be termed their first view of the infrared sky on a clear, dark night. Without IRAS, atmospheric absorption and the thermal emission from both the atmosphere and Earthbound telescopes make the task of the infrared astronomer comparable to what an optical astronomer would face if required to work only on cloudy afternoons. IRAS observations are serving astronomers in the same manner as the photographic plates of the Palomar Observatory Sky Survey; just as the optical survey has been used by all astronomers for over three decades, as a source of quantitative information about the sky and as a roadmap for future observations, the results of IRAS will be studied for years to come. IRAS has demonstrated the power of infrared astronomy from space. Already, from a brief look at a minuscule fraction of the data available, we have learned much about the solar system, about nearby stars, about the Galaxy as a whole and about distant extragalactic systems. Comets are much dustier than previously thought. Solid particles, presumably the remnants of the star-formation process, orbit around Vega and other stars and may provide the raw material for planetary systems. Emission from cool interstellar material has been traced throughout the Galaxy all the way to the galactic poles. Both the clumpiness and breadth of the distribution of this material were previously unsuspected. The far-infrared sky away from the galactic plane has been found to be dominated by spiral galaxies, some of which emit more than 50% and as much as 98% of their energy in the infrared - an exciting and surprising revelation. The IRAS mission is clearly the pathfinder for future missions that, to a large extent, will be devoted to the discoveries revealed by IRAS. 8 figures

  5. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14-inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format) viewed on a 625-line monitor were processed in 3 different ways: 1. standard display; 2. digital edge enhancement for the standard display; 3. inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities of the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases with mediastinal disease, and 17 with pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases respectively. Sensitivity of conventional radiography, when averaged over all findings, was better than that of the digital techniques (P<0.001). Differences in diagnostic accuracy measured by sensitivity and specificity between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that, when using the 512x512 image format, the routine use of digital edge enhancement and tone reversal for digital chest radiographs is not warranted. (author). 12 refs.; 4 figs.; 2 tabs
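The two processing modes the study compares, edge enhancement and tone reversal, can be sketched as follows. This NumPy illustration uses an unsharp mask with a plain box blur and assumed parameters; it is not the study's actual algorithm.

```python
import numpy as np

def box_blur(img, r=1):
    """Mean filter over a (2r+1)^2 window, with edge padding."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r+dy : r+dy+img.shape[0], r+dx : r+dx+img.shape[1]]
    return out / (2 * r + 1) ** 2

def unsharp_mask(img, amount=1.0, r=1):
    """Edge enhancement: add back the difference between image and its blur."""
    return img + amount * (img - box_blur(img, r))

def invert(img, white=255.0):
    """Inverse intensity (tone reversal) display."""
    return white - img

step = np.zeros((8, 8)); step[:, 4:] = 100.0    # vertical edge
sharp = unsharp_mask(step, amount=1.0)
assert sharp[0, 4] > step[0, 4]                 # overshoot on the bright side
assert sharp[0, 3] < step[0, 3]                 # undershoot on the dark side
assert invert(step)[0, 0] == 255.0              # dark pixel displayed as white
```

The overshoot/undershoot pair is exactly what makes edges look crisper on a display, and also why over-aggressive enhancement produces halo artifacts around nodules.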

  6. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 M bit rate over a 14.7 GHZ carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  7. International astronomical remote present observation on IRC.

    Science.gov (United States)

    Ji, Kaifan; Cao, Wenda; Song, Qian

    On March 6 - 7, 1997, an international astronomical remote present observation (RPO) was made on Internet Relay Chat (IRC) for the first time. Seven groups in four countries - China, the United States, Canada and Great Britain - used the 1 meter telescope of Yunnan observatory together by way of remote present observation. Within minutes, images were put on-line by FTP, and everyone was able to get them by anonymous ftp and discuss them on IRC from widely separated sites.

  8. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section on the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on the computation time and usefulness of median filtering in radiographic imaging are given.
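The view of the median filter as a complete sort of the k window values, generalizing to rank-order operators, can be sketched like this. It is an illustration, not the paper's pipeline implementation; the vectorized stack merely mimics the per-pixel parallelism the abstract describes.

```python
import numpy as np

def rank_filter(img, rank, r=1):
    """Rank-order filter: sort the k = (2r+1)^2 window values at every pixel
    and keep the element at `rank`. rank = k//2 gives the median, 0 the
    minimum, k-1 the maximum - the generalization mentioned in the abstract."""
    p = np.pad(img, r, mode="edge")
    # Gather all window offsets into a (k, H, W) stack: every pixel's window
    # is then sorted simultaneously along axis 0.
    stack = np.stack([p[r+dy : r+dy+img.shape[0], r+dx : r+dx+img.shape[1]]
                      for dy in range(-r, r+1) for dx in range(-r, r+1)])
    return np.sort(stack, axis=0)[rank]

img = np.zeros((5, 5)); img[2, 2] = 255.0       # single "salt" impulse
median = rank_filter(img, rank=4)               # k=9, index 4 = median
assert median[2, 2] == 0.0                      # impulse noise removed
assert rank_filter(img, rank=8)[2, 2] == 255.0  # max filter keeps it
```

On genuinely parallel hardware the per-pixel sorts run concurrently, which is precisely why the sorting formulation maps well onto pipeline processors.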

  9. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis. The other detectors are also described: the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and the system with four high-resolution CCD cameras. In a second step the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and subtraction with dual energy. In the last part the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the almost constant good quality of the pictures and the possibilities of image processing.
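Dual-energy subtraction, one of the processing methods listed, can be illustrated with a toy two-material attenuation model. The log-domain model and all weights below are assumptions invented for this sketch, not values from the article.

```python
import numpy as np

# Toy model: at each pixel, the log-intensity image is a weighted sum of
# bone and soft-tissue thickness, with different weights at the two kVp
# settings (bone attenuates relatively more at low energy).
rng = np.random.default_rng(3)
bone = np.zeros((16, 16)); bone[4:8, 4:8] = 2.0   # a rib-like patch
soft = rng.uniform(0.5, 1.0, (16, 16))            # soft-tissue background

low  = 0.8 * bone + 0.5 * soft    # low-energy log image
high = 0.4 * bone + 0.4 * soft    # high-energy log image

# Choose the weight w so the bone term cancels: 0.8 - w * 0.4 = 0  ->  w = 2
soft_only = low - 2.0 * high      # = 0.5*soft - 0.8*soft = -0.3*soft
assert np.allclose(soft_only, -0.3 * soft)   # bone completely removed
```

In practice the cancellation weight is calibrated rather than derived, but the principle is the same: a weighted subtraction of the two log images suppresses one tissue type.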

  10. "Movie Star" Acting Strangely, Radio Astronomers Find

    Science.gov (United States)

    1999-01-01

    Astronomers have used the National Science Foundation's Very Long Baseline Array (VLBA) radio telescope to make the first-ever time-lapse "movie" showing details of gas motions around a star other than our Sun. The study, the largest observational project yet undertaken using Very Long Baseline Interferometry, has produced surprising results that indicate scientists do not fully understand stellar atmospheres. The "movie" shows that the atmosphere of a pulsating star more than 1,000 light-years away continues to expand during a part of the star's pulsation period in which astronomers expected it to start contracting. Philip Diamond and Athol Kemball, of the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico, announced their findings at the American Astronomical Society's meeting in Austin, TX, today. "The continued expansion we're seeing contradicts current theoretical models for how these stars work," Diamond said. "The models have assumed spherical symmetry in the star's atmosphere, and our movie shows that this is not the case. Such models suggest that a shock wave passes outward from the star. Once it's passed, then the atmosphere should begin to contract because of the star's gravity. We've long passed that point and the contraction has not begun." The time-lapse images show that the gas motions are not uniform around the star. Most of the motion is that of gas moving directly outward from the star's surface. However, in about one-fourth of the ring, there are peculiar motions that do not fit this pattern. The scientists speculate that the rate of mass loss may not be the same from all parts of the star's surface. "A similar star behaved as predicted when studied a few years ago, so we're left to wonder what's different about this one," Diamond said. "Right now, we think that different rates of mass loss in the two stars may be the cause of the difference. This star is losing mass at 100 times the rate of the star in the earlier study." 
"This

  11. ARMA processing for NDE ultrasonic imaging

    International Nuclear Information System (INIS)

    Pao, Y.H.; El-Sherbini, A.

    1984-01-01

    This chapter describes a new method for acoustic image reconstruction for an active multiple sensor system operating in the reflection mode in the Fresnel region. The method is based on the use of an ARMA model for the reconstruction process. Algorithms for estimating the model parameters are presented and computer simulation results are shown. The AR coefficients are obtained independently of the MA coefficients. It is shown that when the ARMA reconstruction method is augmented with the multifrequency approach, it can provide a three-dimensional reconstructed image with high lateral and range resolutions, high signal to noise ratio and reduced sidelobe levels. The proposed ARMA reconstruction method results in high quality images and better performance than that obtainable with conventional methods. The advantages of the method are very high lateral resolution with a limited number of sensors, reduced sidelobes level, and high signal to noise ratio

  12. MIDAS - ESO's new image processing system

    Science.gov (United States)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color-lookup table to enhance various image features, and interactive extraction of subimages.
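Color-lookup-table modification, one of the MIDAS capabilities mentioned, amounts to indexing an RGB table by pixel value. The table below is a hypothetical example in NumPy, not MIDAS code; the blue-to-red mapping is an arbitrary choice for the sketch.

```python
import numpy as np

# A hypothetical 256-entry color lookup table mapping grayscale 0..255 to
# RGB: faint features are stretched into blue, bright ones into red.
levels = np.arange(256)
clut = np.stack([levels,                  # R ramps up with intensity
                 np.zeros(256, int),      # G unused in this toy table
                 255 - levels], axis=1)   # B ramps down

gray = np.array([[0, 128, 255]])
rgb = clut[gray]                          # fancy indexing applies the table
assert rgb.shape == (1, 3, 3)
assert tuple(rgb[0, 0]) == (0, 0, 255)    # faint pixel -> blue
assert tuple(rgb[0, 2]) == (255, 0, 0)    # bright pixel -> red
```

Editing entries of `clut` and re-indexing is the batch analogue of interactively tweaking the display lookup table to bring out chosen intensity ranges.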

  13. Illuminating magma shearing processes via synchrotron imaging

    Science.gov (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.

    2017-04-01

    Our understanding of geomaterial behaviour and processes has long fallen short because of the inaccessibility of a material's interior as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes, to the use of seismic tomography to image magmatic bodies in the crust, and most recently, synchrotron-based x-ray tomography to image the inside of material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  14. ISO Results Presented at International Astronomical Union

    Science.gov (United States)

    1997-08-01

    Some of the work being presented is collected in the attached ESA Information Note N 25-97, ISO illuminates our cosmic ancestry. A set of six colour images illustrating various aspects have also been released and are available at http://www.estec.esa.nl/spdwww/iso1808.htm or in hard copy from ESA Public Relations Paris (fax:+33.1.5369.7690). These pictures cover: 1. Distant but powerful infrared galaxies 2. A scan across the milky way 3. Helix nebula: the shroud of a dead star 4. Supernova remnant Cassiopeia A 5. Trifid nebula: a dusty birthplace of stars 6. Precursors of stars and planets The International Astronomical Union provides a forum where astronomers from all over the world can develop astronomy in all its aspects through international co-operation. General Assemblies are held every three years. It is expected that over 1600 astronomers will attend this year's meeting, which is being held in Kyoto, Japan from 18-30 August. Further information on the meeting can be found at: www.tenmon.or.jp/iau97/ . ISO illuminates our cosmic ancestry The European Space Agency's Infrared Space Observatory, ISO, is unmatched in its ability to explore and analyse many of the universal processes that made our existence possible. We are children of the stars. Every atom in our bodies was created in cosmic space and delivered to the Sun's vicinity in time for the Earth's formation, during a ceaseless cycle of birth, death and rebirth among the stars. The most creative places in the sky are cool and dusty, and opaque even to the Hubble Space Telescope. Infrared rays penetrating the dust reveal to ISO hidden objects, and the atoms and molecules of cosmic chemistry. "ISO is reading Nature's recipe book," says Roger Bonnet, ESA's director of science. "As the world's only telescope capable of observing the Universe over a wide range of infrared wavelengths, ISO plays an indispensable part in astronomical discoveries that help to explain how we came to exist." 
This Information Note

  15. Reducing the Requirements and Cost of Astronomical Telescopes

    Science.gov (United States)

    Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)

    2002-01-01

    Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In a historical perspective, great strides have been made in the areas of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory hold the promise of yielding more information from observational data through significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities of increasing the yield of scientific data by migrating computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware, making the next generation of instruments more affordable.

  16. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output which can easily be accessed by everyone via a web interface (with the browser of your choice) and can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  17. Astronomical theory of climate change

    Energy Technology Data Exchange (ETDEWEB)

    Berger, A.; Loutre, M.F. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium). Inst. d' Astronomie et de Geophysique G. Lemaitre

    2004-12-01

    The astronomical theory of paleo-climates aims to explain the climatic variations occurring with quasi-periodicities lying between tens and hundreds of thousands of years. The origin of these quasi-cycles lies in the astronomically driven changes in the latitudinal and seasonal distributions of the energy that the Earth receives from the Sun. These changes are then amplified by the feedback mechanisms which characterize the natural behaviour of the climate system, like those involving the albedo-, water vapor-, and vegetation-temperature relationships. Climate models of different complexities are used to explain the chain of processes which finally link the long-term variations of three astronomical parameters to the long-term climatic variations at time scales of tens to hundreds of thousands of years. In particular, sensitivity analyses to the astronomically driven insolation changes and to the CO₂ atmospheric concentrations have been performed with the two-dimensional climate model of Louvain-la-Neuve. It could be shown that this model simulates more or less correctly the entrance into glaciation around 2.75 million years (Myr) BP (before present), the late Pliocene-early Pleistocene 41-kyr (thousand years) cycle, the emergence of the 100-kyr cycle around 850 kyr BP, and the glacial-interglacial cycles of the last 600 kyr. During the Late Pliocene (in an ice-free, warm world) ice sheets can only develop during times of sufficiently low summer insolation. This occurs during large-eccentricity times when climatic precession and obliquity combine to reach such low values, leading to the 41-kyr period between 3 and 1 million years BP. On the contrary, in a glacial world, ice sheets persist most of the time except when insolation is very high at polar latitudes, requiring large eccentricity again, but leading this time to interglacials and finally to the 100-kyr period of the last 1 Myr. Using CO₂ scenarios, it has been shown that stage 11 and stage 1

  18. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  19. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O' Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  20. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985; Penberthy & Weld, 1992; Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives the unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be run to fill the processing request.

  1. TPCs in high-energy astronomical polarimetry

    International Nuclear Information System (INIS)

    Black, J K

    2007-01-01

    High-energy astrophysics has yet to exploit the unique and important information that polarimetry could provide, largely due to the limited sensitivity of previously available polarimeters. In recent years, numerous efforts have been initiated to develop instruments with the sensitivity required for astronomical polarimetry over the 100 eV to 10 GeV band. Time projection chambers (TPCs), with their high-resolution event imaging capability, are an integral part of some of these efforts. After a brief overview of current astronomical polarimeter development efforts, the role of TPCs will be described in more detail. These include TPCs as photoelectric X-ray polarimeters and TPCs as components of polarization-sensitive Compton and pair-production telescopes.

  2. The data analysis facilities that astronomers want

    International Nuclear Information System (INIS)

    Disney, M.

    1985-01-01

    This paper discusses the need for and importance of data analysis facilities and what astronomers ideally want. A brief survey is presented of what is available now, and some of the main deficiencies and problems with today's systems are discussed. The main sources of astronomical data are presented, including: optical photographic, optical TV/CCD, VLA, optical spectroscopy, imaging X-ray satellite, and satellite planetary camera. Landmark discoveries are listed in a table, some of which include: our galaxy as an island, distance to stars, H-R diagram (stellar structure), size of our galaxy, and missing mass in clusters. The main problems at present are discussed, including lack of coordinated effort and central planning, differences in hardware, and measuring performance.

  3. Book Review: Scientific Writing for Young Astronomers

    Science.gov (United States)

    Uyttenhove, Jos

    2011-12-01

    EDP Sciences, Les Ulis, France. Part 1: 162 pp., €35, ISBN 978-2-7598-0506-8. Part 2: 298 pp., €60, ISBN 978-2-7598-0639-3. The journal Astronomy & Astrophysics (A&A) and EDP Sciences decided in 2007 to organize a School on the various aspects of scientific writing and publishing. In 2008 and 2009 Scientific Writing for Young Astronomers (SWYA) Schools were held in Blankenberge (Belgium) under the direction of Christiaan Sterken (FWO-VUB). These two books (EAS Publication Series, Vols. 49 and 50) reflect the outcome of these Schools. Part 1 contains a set of contributions that discuss various aspects of scientific publication; it includes the A&A Editors' view of the peer review and publishing process. A very interesting short paper by S.R. Pottasch (Kapteyn Astronomical Institute, Groningen, and one of the two first Editors-in-Chief of A&A) deals with the history of the creation of the journal Astronomy & Astrophysics. Two papers by J. Adams et al. (Observatoire de Paris) discuss language editing, including a detailed guide for any non-native user of the English language. In 2002 the Board of Directors decided that all articles in A&A must be written in clear and correct English. Part 2 consists of three very extensive and elaborate papers by Christiaan Sterken, supplying guidelines to PhD students and postdoctoral fellows to help them compose scientific papers for different forums (journals, proceedings, theses, etc.). This part is of interest not only to young astronomers; it is very useful for scholars of all ages and disciplines. Paper I, "The writing process" (60 pp.), covers the preparation of manuscripts, communicating with editors and referees, and avoiding common errors. Delicate problems of authorship, refereeing, revising multi-authored papers etc. are treated in 26 FAQs. Paper II, "Communication by graphics" (120 pp.), is entirely dedicated to the important topic of communication with images, graphs, diagrams, tables etc. Design types of graphs

  4. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2012-01-01

    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
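    The kind of approximation described can be sketched in software. The Python model below of a "sloppy" adder is illustrative only: the authors' hardware design is not reproduced here, and the choice to OR the low-order bits (a trick used in lower-part-OR approximate adders) and the `cut` width are assumptions made for this sketch.

    ```python
    def sloppy_add(a, b, cut=4):
        """Approximate addition: the high bits are summed exactly, while
        the low `cut` bits are merely OR-ed, so no carry logic is needed
        for them (the style of a lower-part-OR approximate adder)."""
        mask = (1 << cut) - 1
        high = (a & ~mask) + (b & ~mask)   # exact sum of the high parts
        low = (a | b) & mask               # cheap stand-in for the low sum
        return high | low

    assert sloppy_add(7, 9) == 15          # exact sum is 16, error 1
    assert sloppy_add(200, 100) == 300     # no low-order carries lost here
    ```

    The absolute error of this scheme is exactly `(a & b) & mask`, so it is bounded by `2**cut - 1`, which is often negligible for 8-bit image data with a small `cut`.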

  5. Closing remarks: astronomical

    International Nuclear Information System (INIS)

    Pecker, J.-C.

    1990-01-01

    During the discussions we have covered many facets of the basic interactions between solar activity and the Earth's climate. Solar activity is not the only astronomical or astrophysical phenomenon to influence physical conditions in the biosphere. Over timescales an order of magnitude less, the location of the Solar System in the Galaxy may have influenced life on Earth. The Sun is a complex generator of radiation, particles and magnetic fields sent far into space. Even if the total radiation emitted is constant, the amount of radiation received by the Earth changes with time: this change is complex, and has more to do with a redistribution of energy emitted at different solar latitudes than with a real change in solar luminosity. These changes in the Earth's illumination may be a function of wavelength, and have various effects in different layers of the Earth's atmosphere. The Sun also emits particles of all energies. Some of them find their way through the magnetopause, giving rise to auroras, magnetic storms, ionospheric disturbances and the like. The possible climatological effects are as yet obscure. To understand solar-terrestrial relations better, regular, routine observations of solar phenomena from ground-based stations and from space must be continued. The sensitivity of human life to small changes in climatic conditions is very large. A good knowledge of solar physics is therefore important and relevant. (author)

  6. Astronomical Surveys and Big Data

    Directory of Open Access Journals (Sweden)

    Mickaelian Areg M.

    2016-03-01

    Recent all-sky and large-area astronomical surveys and their catalogued data over the whole range of the electromagnetic spectrum, from γ-rays to radio waves, are reviewed, including Fermi-GLAST and INTEGRAL in γ-rays; ROSAT, XMM and Chandra in X-rays; GALEX in the UV; SDSS and several POSS I and POSS II-based catalogues (APM, MAPS, USNO, GSC) in the optical range; 2MASS in the NIR; WISE and AKARI IRC in the MIR; IRAS and AKARI FIS in the FIR; NVSS and FIRST in the radio range; and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS), and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). An overall understanding of the coverage along the whole wavelength range and comparisons between various surveys are given: galaxy redshift surveys, QSO/AGN, radio, Galactic structure, and Dark Energy surveys. Astronomy has entered the Big Data era, with Astrophysical Virtual Observatories and Computational Astrophysics playing an important role in using and analyzing big data for new discoveries.

  7. VEGAS: VErsatile GBT Astronomical Spectrometer

    Science.gov (United States)

    Bussa, Srikanth; VEGAS Development Team

    2012-01-01

    The National Science Foundation Advanced Technologies and Instrumentation (NSF-ATI) program is funding a new spectrometer backend for the Green Bank Telescope (GBT). This spectrometer is being built by the CICADA collaboration, a collaboration between the National Radio Astronomy Observatory (NRAO) and the Center for Astronomy Signal Processing and Electronics Research (CASPER) at the University of California, Berkeley. The backend is named the VErsatile GBT Astronomical Spectrometer (VEGAS) and will replace the capabilities of the existing spectrometers. This backend supports data processing from focal plane array systems. The spectrometer will be capable of processing up to 1.25 GHz bandwidth from 8 dual-polarized beams or a bandwidth up to 10 GHz from a dual-polarized beam. The spectrometer will use 8-bit analog-to-digital converters (ADCs), which give a better dynamic range than existing GBT spectrometers. There will be 8 tunable digital sub-bands within the 1.25 GHz bandwidth, which will enhance the capability of simultaneous observation of multiple spectral transitions. The minimum spectral dump time to disk will be about 0.5 msec. The vastly enhanced backend capabilities will support several science projects with the GBT. The projects include mapping the temperature and density structure of molecular clouds; searches for organic molecules in the interstellar medium; determination of the fundamental constants of our evolving Universe; red-shifted spectral features from galaxies across cosmic time; and surveys for pulsars in the extreme gravitational environment of the Galactic Center.
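    As a rough illustration of what an FFT spectrometer backend does (channelize the digitized stream and accumulate power per channel), here is a toy Python/NumPy model; the block length, windowless FFT, and accumulation scheme are simplifications for this sketch and bear no relation to VEGAS's actual FPGA polyphase design.

    ```python
    import numpy as np

    def spectrometer_dump(samples, nchan=8):
        """Toy FFT spectrometer: cut the sample stream into blocks of
        `nchan` samples, FFT each block, and average the power in each
        channel over all blocks (one accumulated 'dump')."""
        nblocks = len(samples) // nchan
        blocks = np.reshape(samples[:nblocks * nchan], (nblocks, nchan))
        spectra = np.abs(np.fft.fft(blocks, axis=1)) ** 2
        return spectra.mean(axis=0)

    t = np.arange(1024)
    tone = np.cos(2 * np.pi * t * 2 / 8)   # a tone centred on channel 2
    spec = spectrometer_dump(tone, nchan=8)
    assert int(np.argmax(spec)) == 2
    ```

    Real backends replace the plain FFT with a polyphase filter bank to sharpen the channel response, but the channelize-then-accumulate structure is the same.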

  8. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  9. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  10. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  11. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLABÒ language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  12. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images, the design of several special processing methods for container images, and some standard processing methods for two-dimensional digital images, including gray enhancement, pseudo-enhancement, spatial filtering, edge enhancement, geometry processing, etc. The paper describes how to carry out the above processing under Windows 95 or Windows NT, and discusses ways to improve image-processing speed on a microcomputer; good results were obtained.
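    As a concrete example of the simplest operation listed, gray enhancement, here is a linear contrast stretch in Python; it is a generic textbook operation, not the actual algorithm used in the 60Co inspection system.

    ```python
    def stretch(img, lo=None, hi=None):
        """Linear gray-level stretch to the full 0-255 range: map the
        darkest input value to 0 and the brightest to 255, interpolating
        linearly in between (a minimal sketch of 'gray enhancement')."""
        flat = [p for row in img for p in row]
        lo = min(flat) if lo is None else lo
        hi = max(flat) if hi is None else hi
        scale = 255.0 / (hi - lo) if hi > lo else 0.0
        return [[int(round((p - lo) * scale)) for p in row] for row in img]

    img = [[50, 100], [150, 200]]
    assert stretch(img) == [[0, 85], [170, 255]]
    ```

    Passing explicit `lo`/`hi` bounds instead of the image min/max turns the same routine into a windowing operation, which is often more useful for inspection images with outlier pixels.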

  13. Image processing to optimize wave energy converters

    Science.gov (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently are they becoming a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves and uses it to drive machinery. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
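    A drastically simplified version of the idea can be shown with a plain 2-D FFT in place of the complex modulated lapped orthogonal transform used in the thesis: locate the peak of the spectrum (excluding the DC term) and read off the horizontal and vertical spatial frequencies. The synthetic wave field below is an assumption for demonstration only.

    ```python
    import numpy as np

    def dominant_wave_frequency(img):
        """Find the peak of the 2-D FFT magnitude (excluding DC) and
        return the (horizontal, vertical) spatial frequencies of that
        peak, in cycles per pixel."""
        F = np.abs(np.fft.fft2(img))
        F[0, 0] = 0.0                     # suppress the DC (mean) term
        ky, kx = np.unravel_index(np.argmax(F), F.shape)
        fy = np.fft.fftfreq(img.shape[0])[ky]
        fx = np.fft.fftfreq(img.shape[1])[kx]
        return fx, fy

    # A synthetic wave field: 4 cycles across 64 pixels, horizontally.
    y, x = np.mgrid[0:64, 0:64]
    waves = np.sin(2 * np.pi * 4 * x / 64)
    fx, fy = dominant_wave_frequency(waves)
    assert abs(abs(fx) - 4 / 64) < 1e-9 and fy == 0.0
    ```

    The trigonometric scaling mentioned in the abstract then amounts to projecting the (fx, fy) pair onto the direction of the WEC.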

  14. Using commercial amateur astronomical spectrographs

    CERN Document Server

    Hopkins, Jeffrey L

    2014-01-01

    Amateur astronomers interested in learning more about astronomical spectroscopy now have the guide they need. It provides detailed information about how to get started inexpensively with low-resolution spectroscopy, and then how to move on to more advanced  high-resolution spectroscopy. Uniquely, the instructions concentrate very much on the practical aspects of using commercially-available spectroscopes, rather than simply explaining how spectroscopes work. The book includes a clear explanation of the laboratory theory behind astronomical spectrographs, and goes on to extensively cover the practical application of astronomical spectroscopy in detail. Four popular and reasonably-priced commercially available diffraction grating spectrographs are used as examples. The first is a low-resolution transmission diffraction grating, the Star Analyser spectrograph. The second is an inexpensive fiber optic coupled bench spectrograph that can be used to learn more about spectroscopy. The third is a newcomer, the ALPY ...

  15. Astronomical databases of Nikolaev Observatory

    Science.gov (United States)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases were created at Nikolaev Observatory in recent years. The databases are built using MySQL and PHP scripts. They are available on the NAO website http://www.mao.nikolaev.ua.

  16. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments, as well as one for a sample AIS, are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
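    Since the abstract does not reproduce the AISML schema, the element and attribute names below are purely hypothetical; the sketch only illustrates, with Python's standard `xml.etree`, how an XML-based instrument description of this kind can be parsed and used programmatically.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical element names: the real AISML schema is not shown in
    # the abstract, so this document only mimics the general approach.
    doc = """<AIS name="BackyardObservatory">
      <Instrument type="telescope">
        <Aperture unit="mm">203</Aperture>
        <FocalLength unit="mm">2032</FocalLength>
      </Instrument>
      <Instrument type="camera">
        <PixelSize unit="um">3.8</PixelSize>
      </Instrument>
    </AIS>"""

    root = ET.fromstring(doc)
    scopes = [i for i in root.findall("Instrument") if i.get("type") == "telescope"]
    aperture = float(scopes[0].find("Aperture").text)
    focal = float(scopes[0].find("FocalLength").text)
    assert aperture == 203.0 and round(focal / aperture) == 10
    ```

    Deriving quantities such as the focal ratio directly from the file is the kind of task (web page generation, maintenance records) the abstract has in mind.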

  17. The South African astronomical observatory

    International Nuclear Information System (INIS)

    Feast, M.

    1985-01-01

    A few examples of the activities of the South African Astronomical Observatory are discussed. This includes the studying of stellar evolution, dust around stars, the determination of distances to galaxies and collaboration with space experiments

  18. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written in C. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were implemented, such as displaying an image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer had the ability to process CT images. It seemed that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)
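    Two of the operations mentioned, calculating a CT value over a region and drawing a profile curve, reduce to very small routines. The Python sketch below assumes a plain 2-D list of pixel values rather than real TCT-60A data; the function names are invented for illustration.

    ```python
    def profile_curve(img, row):
        """Return the pixel values along one row: the 'profile curve'
        through the image at that row."""
        return list(img[row])

    def mean_ct_value(img, r0, r1, c0, c1):
        """Mean pixel value inside a rectangular region of interest,
        a stand-in for the 'CT value' calculation in the abstract."""
        vals = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        return sum(vals) / len(vals)

    img = [[0, 10, 20],
           [30, 40, 50],
           [60, 70, 80]]
    assert profile_curve(img, 1) == [30, 40, 50]
    assert mean_ct_value(img, 0, 2, 0, 2) == 20.0
    ```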

  19. Sixteenth Century Astronomical Telescopy

    Science.gov (United States)

    Usher, P. D.

    2001-12-01

    Ophelia in Shakespeare's Hamlet is named for the "moist star," which in mythology is the partner of Hamlet's royal Sun. Together the couple seem destined to rule on earth just as their celestial counterparts rule the heavens, but the tragedy is that they are afflicted, just as the Sun and Moon are blemished. In 1.3 Laertes lectures Ophelia on love and chastity, describing first Cytherean phases (crescent to gibbous) and then lunar craters. Spots mar the Sun (1.1, 3.1). Also reported are Jupiter's Red Spot (3.4) and the resolution of the Milky Way into stars (2.2). These interpretations are well-founded and support the cosmic allegory. Observations must have been made with optical aid, probably the perspective glass of Leonard Digges, father of Thomas Digges. Notably absent from Hamlet is mention of the Galilean moons, owing perhaps to the narrow field of view of the telescope. That discovery is later celebrated in Cymbeline, published soon after Galileo's Sidereus Nuncius in 1610. In 5.4 of Cymbeline the four ghosts dance "in imitation of planetary motions" and at Jupiter's behest place a book on the chest of Posthumus Leonatus. His name identifies the Digges father and son as the source of data in Hamlet, since Jupiter's moons were discovered after the deaths of Leonard ("leon+hart") and Thomas (the "lion's whelp"). Lines in 5.4 urge us not to read more into the book than is contained between its covers; this is understandable because Hamlet had already reported the other data in support of heliocentrism and the cosmic model discussed and depicted by Thomas Digges in 1576. I conclude therefore that astronomical telescopy began in England before the last quarter of the sixteenth century.

  20. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The sensory processing system is examined, and in particular the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance, are described.

  1. AWOB: A Collaborative Workbench for Astronomers

    Science.gov (United States)

    Kim, J. W.; Lemson, G.; Bulatovic, N.; Makarenko, V.; Vogler, A.; Voges, W.; Yao, Y.; Kiefl, R.; Koychev, S.

    2015-09-01

    We present the Astronomers Workbench (AWOB), a web-based collaboration and publication platform for scientific projects of any size, developed in collaboration between the Max Planck Institutes of Astrophysics (MPA) and Extraterrestrial Physics (MPE) and the Max Planck Digital Library (MPDL). AWOB facilitates the collaboration between geographically distributed astronomers working on a common project throughout its whole scientific life cycle. AWOB does so by making it very easy for scientists to set up and manage a collaborative workspace for individual projects, where data can be uploaded and shared. It supports inviting project collaborators, provides wikis, automated mailing lists, calendars and event notification, and has a built-in chat facility. It allows the definition and tracking of tasks within projects and supports easy creation of e-publications for the dissemination of data, images and other resources that cannot be added to submitted papers. AWOB extends the project concept to larger-scale consortia, within which it is possible to manage working groups and sub-projects. The existing AWOB instance has so far been limited to Max Planck members and their collaborators, but will be opened to the whole astronomical community. AWOB is an open-source project and its source code is available upon request. We intend to extend AWOB's functionality to other disciplines as well, and would greatly appreciate contributions from the community.

  2. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code). Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code). Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples). Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account)

  3. Astrobiology: An astronomer's perspective

    International Nuclear Information System (INIS)

    Bergin, Edwin A.

    2014-01-01

    In this review we explore aspects of the field of astrobiology from an astronomical viewpoint. We therefore focus on the origin of life in the context of planetary formation, with additional emphasis on tracing the most abundant volatile elements, C, H, O, and N, that are used by life on Earth. We first explore the history of life on our planet and outline the current state of our knowledge regarding the delivery of the C, H, O, N elements to the Earth. We then discuss how astronomers track the gaseous and solid molecular carriers of these volatiles throughout the process of star and planet formation. It is now clear that the early stages of star formation foster the creation of water and simple organic molecules with enrichments of heavy isotopes. These molecules are found as ice coatings on the solid materials that represent the microscopic beginnings of terrestrial worlds. Based on the meteoritic and cometary record, the process of planet formation and the local environment lead to additional increases in organic complexity. The astronomical connections to this stage are only now being directly made. Although the exact details are uncertain, it is likely that the birth process of stars and planets leads to terrestrial worlds being born with abundant water and organics on the surface.

  4. Astrobiology: An astronomer's perspective

    Energy Technology Data Exchange (ETDEWEB)

    Bergin, Edwin A. [University of Michigan, Department of Astronomy, 500 Church Street, Ann Arbor, MI 48109 (United States)

    2014-12-08

    In this review we explore aspects of the field of astrobiology from an astronomical viewpoint. We therefore focus on the origin of life in the context of planetary formation, with additional emphasis on tracing the most abundant volatile elements, C, H, O, and N, that are used by life on Earth. We first explore the history of life on our planet and outline the current state of our knowledge regarding the delivery of the C, H, O, N elements to the Earth. We then discuss how astronomers track the gaseous and solid molecular carriers of these volatiles throughout the process of star and planet formation. It is now clear that the early stages of star formation foster the creation of water and simple organic molecules with enrichments of heavy isotopes. These molecules are found as ice coatings on the solid materials that represent the microscopic beginnings of terrestrial worlds. Based on the meteoritic and cometary record, the process of planet formation and the local environment lead to additional increases in organic complexity. The astronomical connections to this stage are only now being directly made. Although the exact details are uncertain, it is likely that the birth process of stars and planets leads to terrestrial worlds being born with abundant water and organics on the surface.

  5. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis at the Centre. In image processing, one either alters grey-level values so as to enhance features in the image, or applies transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections etc. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
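    The geometrical features mentioned (area, perimeter) can be estimated directly from a segmented binary window. This minimal Python sketch counts foreground pixels and exposed pixel edges, which is one common discrete definition, not necessarily the one used in the report.

    ```python
    def area_and_perimeter(mask):
        """Area = number of foreground pixels; perimeter = number of
        foreground pixel edges that border background (or fall on the
        window boundary), using 4-connectivity."""
        rows, cols = len(mask), len(mask[0])
        area = perimeter = 0
        for r in range(rows):
            for c in range(cols):
                if not mask[r][c]:
                    continue
                area += 1
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < rows and 0 <= cc < cols) or not mask[rr][cc]:
                        perimeter += 1
        return area, perimeter

    # A 2x2 foreground square: area 4 pixels, perimeter 8 edge units.
    mask = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    assert area_and_perimeter(mask) == (4, 8)
    ```

    Projections (row and column sums of the mask) follow from the same loop structure, so the output of such an analysis is a set of numbers rather than an image, exactly as the report notes.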

  6. Variational PDE Models in Image Processing

    National Research Council Canada - National Science Library

    Chan, Tony F; Shen, Jianhong; Vese, Luminita

    2002-01-01

    .... These include astronomy and aerospace exploration, medical imaging, molecular imaging, computer graphics, human and machine vision, telecommunication, auto-piloting, surveillance video, and biometric...

  7. Amateur astronomers in support of observing campaigns

    Science.gov (United States)

    Yanamandra-Fisher, P.

    2014-07-01

    The Pro-Am Collaborative Astronomy (PACA) project evolved from the observational campaign of C/2012 S1 (ISON). This successful paradigm shift in scientific research is now being applied to other comet observing campaigns. While PACA defines a consistent collaborative approach to pro-am collaborations, the volume of data generated for each campaign demands new methods for rapid data analysis, mining, access, and storage. Several interesting results emerged from the synergistic inclusion of both social media and amateur astronomers: the establishment of a network of astronomers and related professionals that can be galvanized into action on short notice to support observing campaigns; assistance in various science investigations pertinent to the campaign; an alert-sounding mechanism should the need arise; immediate outreach and dissemination of results via our media/blogger members; and a forum for discussions between imagers and modelers to help strategize each observing campaign for maximum benefit. In 2014, two new comet observing campaigns involving pro-am collaborations were identified: (1) C/2013 A1 (Siding Spring) and (2) 67P/Churyumov-Gerasimenko (CG). The evolving need for individually customized observing campaigns has been incorporated into the evolution of the PACA portal, which currently focuses on comets: supporting observing campaigns for current comets, legacy data, and historical comets; interconnecting with social media and a set of shareable documents addressing observational strategies; and maintaining consistent standards for data access, use, and storage, to align with the needs of professional observers. The integration of science, observations by professional and amateur astronomers, and various social media provides a dynamic and evolving collaborative partnership between professional and amateur astronomers. The recent observation of comet 67P, at a magnitude of 21.2, from Siding

  8. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images plays an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. In order to demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology in the processing of radiation images of large containers are given

  9. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  10. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white (uniform) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, have an effect on the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
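
    Of the three figures of merit, the NPS is the most compact to illustrate. The following numpy sketch estimates a 2-D noise power spectrum from a uniform ("white") exposure by averaging |FFT|² over mean-subtracted regions of interest. This is a simplified illustration only: IEC 62220-1 additionally prescribes detrending, overlapping ROIs, and absolute normalization by pixel pitch, all omitted here, and the ROI size and noise level are made up for the demo.

```python
import numpy as np

def nps_2d(flat_image, roi=64):
    """Estimate a 2-D noise power spectrum from a uniform exposure.

    Averages |FFT|^2 over non-overlapping, mean-subtracted ROIs;
    absolute normalization (pixel pitch, etc.) follows the standard
    and is omitted here.
    """
    rows, cols = flat_image.shape
    acc = np.zeros((roi, roi))
    n = 0
    for r in range(0, rows - roi + 1, roi):
        for c in range(0, cols - roi + 1, roi):
            block = flat_image[r:r + roi, c:c + roi].astype(float)
            block -= block.mean()                    # remove the uniform offset
            acc += np.abs(np.fft.fft2(block)) ** 2
            n += 1
    return acc / (n * roi * roi)

rng = np.random.default_rng(0)
flat = 1000.0 + rng.normal(0.0, 10.0, size=(256, 256))  # white noise, variance 100
nps = np.fft.fftshift(nps_2d(flat))
# For uncorrelated noise the spectrum is flat and its mean equals the pixel variance.
```

    Running the same estimator on images processed at different MUSICA settings would show directly how the post-processing reshapes the noise spectrum, which is the effect the study quantifies.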

  11. Radio Astronomers Get Their First Glimpse of Powerful Solar Storm

    Science.gov (United States)

    2001-08-01

    Astronomers have made the first radio-telescope images of a powerful coronal mass ejection on the Sun, giving them a long-sought glimpse of hitherto unseen aspects of these potentially dangerous events. "These observations are going to provide us with a new and unique tool for deciphering the mechanisms of coronal mass ejections and how they are related to other solar events," said Tim Bastian, an astronomer at the National Science Foundation's National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia. [Image caption: Radio image of coronal mass ejection; circle indicates the size and location of the Sun. White dots are where radio spectral measurements were made.] Bastian, along with Monique Pick, Alain Kerdraon and Dalmiro Maia of the Paris Observatory, and Angelos Vourlidas of the Naval Research Laboratory in Washington, D.C., used a solar radio telescope in Nancay, France, to study a coronal mass ejection that occurred on April 20, 1998. Their results will be published in the September 1 edition of the Astrophysical Journal Letters. Coronal mass ejections are powerful magnetic explosions in the Sun's corona, or outer atmosphere, that can blast billions of tons of charged particles into interplanetary space at tremendous speeds. If the ejection is aimed in the direction of Earth, the speeding particles interact with our planet's magnetic field to cause auroral displays, radio-communication blackouts, and potentially damage satellites and electric-power systems. "Coronal mass ejections have been observed for many years, but only with visible-light telescopes, usually in space. While previous radio observations have provided us with powerful diagnostics of mass ejections and associated phenomena in the corona, this is the first time that one has been directly imaged in wavelengths other than visible light," Bastian said. "These new data from the radio observations give us important clues about how these very energetic events work," he added.
The radio images show an

  12. Image processing system for videotape review

    International Nuclear Information System (INIS)

    Bettendroffer, E.

    1988-01-01

    In a nuclear plant, the areas in which fissile materials are stored or handled have to be monitored continuously. One method of surveillance is to record pictures from TV cameras at set time intervals on special video recorders. The 'time lapse' recorded tape is played back at normal speed and an inspector visually checks the pictures. This method requires much manpower, and an automated method would be useful. The present report describes an automatic reviewing method based on an image processing system; the system detects scene changes in the picture sequence and stores the reduced data set on a separate video tape. The resulting reduction in reviewing time by the inspector is important for surveillance data with few movements

  13. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  14. Choosing and using astronomical eyepieces

    CERN Document Server

    Paolini, William

    2013-01-01

    This valuable reference fills a number of needs in the field of astronomical eyepieces, including that of a buyer's guide, observer's field guide and technical desk reference. It documents the past market for eyepieces and its evolution right up to the present day. In addition to appealing to practical astronomers - and potentially saving them money - it is useful both as a historical reference and as a detailed review of the current market place for this bustling astronomical consumer product. What distinguishes this book from other publications on astronomy is the involvement of observers from all aspects of the astronomical community, and also the major manufacturers of equipment. It not only catalogs the technical aspects of the many modern eyepieces but also documents amateur observer reactions and impressions of their utility over the years, using many different eyepieces. Eyepieces are the most talked-about accessories and collectible items available to the amateur astronomer. No other item of equi...

  15. What Lies Behind NSF Astronomer Demographics? Subjectivities of Women, Minorities and Foreign-born Astronomers within Meshworks of Big Science Astronomy

    Science.gov (United States)

    Guillen, Reynal; Gu, D.; Holbrook, J.; Murillo, L. F.; Traweek, S.

    2011-01-01

    Our current research focuses on the trajectory of scientists working with large-scale databases in astronomy, following them as they strategically build their careers and digital infrastructures and make their epistemological commitments. We look specifically at how gender, ethnicity, and nationality intersect in the process of subject formation in astronomy, as well as in the process of enrolling partners for the construction of instruments and the design and implementation of large-scale databases. Work once figured as merely technical support, such as assembling data catalogs, or as graphic design, generating pleasing images for public support, has been repositioned at the core of the field. Some have argued that such databases enable a new kind of scientific inquiry based on data exploration, such as the "fourth paradigm" or "data-driven" science. Our preliminary findings based on oral history interviews and ethnography provide insights into meshworks of women, African-American, "Hispanic," Asian-American and foreign-born astronomers. Our preliminary data suggest African-American men are more successful in sustaining astronomy careers than Chicano and Asian-American men. A distinctive theme in our data is the glocal character of meshworks available to and created by foreign-born women astronomers working at US facilities. Other data show that the proportion of Asian to Asian-American and foreign-born Latina/o to Chicana/o astronomers is approximately equal. Furthermore, Asians and Latinas/os are represented in significantly greater numbers than Asian Americans and Chicanas/os. Among professional astronomers in the US, each ethnic minority group numbers on the order of tens, not hundreds. Project support is provided by the NSF EAGER program to the University of California, Los Angeles under award 0956589.

  16. GalileoMobile: Astronomical activities in schools

    Science.gov (United States)

    Dasi Espuig, Maria; Vasquez, Mayte; Kobel, Philippe

    GalileoMobile is an itinerant science education initiative run on a voluntary basis by an international team of astronomers, educators, and science communicators. Our team's main goal is to make astronomy accessible to schools and communities around the globe that have little or no access to outreach actions. We do this by performing teacher workshops, activities with students, and donating educational material. Since the creation of GalileoMobile in 2008, we have travelled to Chile, Bolivia, Peru, India, and Uganda, and worked with 56 schools in total. Our activities are centred on the GalileoMobile Handbook of Activities that comprises around 20 astronomical activities which we adapted from many different sources, and translated into 4 languages. The experience we gained in Chile, Bolivia, Peru, India, and Uganda taught us that (1) bringing experts from other countries was very stimulating for children as they are naturally curious about other cultures and encourages a collaboration beyond borders; (2) high-school students who were already interested in science were always very eager to interact with real astronomers doing research to ask for career advice; (3) inquiry-based methods are important to make the learning process more effective and we have therefore, re-adapted the activities in our Handbook according to these; (4) local teachers and university students involved in our activities have the potential to carry out follow-up activities, and examples are those from Uganda and India.

  17. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
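
    The final step described, finding peaks in the energy levels of the post-processed frame, can be sketched as a thresholded local-maximum search. This is not the patent's algorithm; the window size, threshold, and function name are illustrative assumptions.

```python
import numpy as np

def find_peaks_2d(energy, threshold, half_win=2):
    """Return (row, col) positions of local maxima above a threshold.

    A pixel is a peak if it exceeds the threshold and equals the
    maximum of its (2*half_win+1)^2 neighbourhood.
    """
    peaks = []
    rows, cols = energy.shape
    for r in range(rows):
        for c in range(cols):
            if energy[r, c] <= threshold:
                continue
            r0, r1 = max(0, r - half_win), min(rows, r + half_win + 1)
            c0, c1 = max(0, c - half_win), min(cols, c + half_win + 1)
            if energy[r, c] == energy[r0:r1, c0:c1].max():
                peaks.append((r, c))
    return peaks

energy = np.zeros((20, 20))
energy[5, 7] = 3.0    # a strong subsurface return
energy[12, 4] = 1.5   # a weaker one
print(find_peaks_2d(energy, threshold=1.0))   # [(5, 7), (12, 4)]
```

    In a real GPR pipeline the threshold would be set from the background noise statistics of the post-processed frames rather than fixed by hand.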

  18. Endemic Images and the Desensitization Process.

    Science.gov (United States)

    Saigh, Philip A.; Antoun, Fouad T.

    1984-01-01

    Examined the effects of endemic images on levels of anxiety and achievement of 48 high school students. Results suggested that a combination of endemic images and study skills training was as effective as desensitization plus study skills training. Includes the endemic image questionnaire. (JAC)

  19. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  20. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images of equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed, and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  1. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behavior and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. They suggest that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  2. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.
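
    To make the central object concrete, here is a sketch (not taken from the book) of a naive left-sided one-dimensional discrete quaternion Fourier transform built on the Hamilton product. The transform axis `mu` and the O(N²) loop are illustrative assumptions; the book treats the subject far more generally, including two-sided transforms and fast algorithms.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    w2, x2, y2, z2 = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    return np.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2], axis=-1)

def qft(signal, mu, inverse=False):
    """Naive left-sided discrete quaternion Fourier transform, O(N^2).

    signal: (N, 4) array of quaternions; mu: unit pure quaternion [0, x, y, z].
    """
    n = len(signal)
    sign = 1.0 if inverse else -1.0
    out = np.zeros_like(signal)
    for k in range(n):
        theta = sign * 2.0 * np.pi * k * np.arange(n) / n
        # exp(mu*theta) = cos(theta) + mu*sin(theta) for a unit pure quaternion mu
        kernel = np.zeros((n, 4))
        kernel[:, 0] = np.cos(theta)
        kernel[:, 1:] = np.sin(theta)[:, None] * mu[1:]
        out[k] = qmul(kernel, signal).sum(axis=0)
    return out / n if inverse else out

mu = np.array([0.0, 1.0, 1.0, 1.0])
mu[1:] /= np.linalg.norm(mu[1:])
sig = np.random.default_rng(0).normal(size=(16, 4))
rec = qft(qft(sig, mu), mu, inverse=True)
print(np.allclose(rec, sig))   # True: the inverse transform recovers the signal
```

    Because quaternion multiplication is non-commutative, placing the exponential kernel on the left (as here) or on the right gives genuinely different transforms, which is why the literature distinguishes left-, right-, and two-sided QFTs.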

  3. Hybrid imaging: Instrumentation and Data Processing

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Rausch, Ivo; Shiyam Sundar, Lalith K.; Lassen, Martin L.; Muzik, Otto; Moser, Ewald; Papp, Laszlo; Beyer, Thomas

    2018-05-01

    State-of-the-art patient management frequently requires the use of non-invasive imaging methods to assess the anatomy, function or molecular-biological conditions of patients or study subjects. Such imaging methods can be singular, providing either anatomical or molecular information, or they can be combined, thus, providing "anato-metabolic" information. Hybrid imaging denotes image acquisitions on systems that physically combine complementary imaging modalities for an improved diagnostic accuracy and confidence as well as for increased patient comfort. The physical combination of formerly independent imaging modalities was driven by leading innovators in the field of clinical research and benefited from technological advances that permitted the operation of PET and MR in close physical proximity, for example. This review covers milestones of the development of various hybrid imaging systems for use in clinical practice and small-animal research. Special attention is given to technological advances that helped the adoption of hybrid imaging, as well as to introducing methodological concepts that benefit from the availability of complementary anatomical and biological information, such as new types of image reconstruction and data correction schemes. The ultimate goal of hybrid imaging is to provide useful, complementary and quantitative information during patient work-up. Hybrid imaging also opens the door to multi-parametric assessment of diseases, which will help us better understand the causes of various diseases that currently contribute to a large fraction of healthcare costs.

  4. Hybrid Imaging: Instrumentation and Data Processing

    Directory of Open Access Journals (Sweden)

    Jacobo Cal-Gonzalez

    2018-05-01

    Full Text Available State-of-the-art patient management frequently requires the use of non-invasive imaging methods to assess the anatomy, function or molecular-biological conditions of patients or study subjects. Such imaging methods can be singular, providing either anatomical or molecular information, or they can be combined, thus, providing “anato-metabolic” information. Hybrid imaging denotes image acquisitions on systems that physically combine complementary imaging modalities for an improved diagnostic accuracy and confidence as well as for increased patient comfort. The physical combination of formerly independent imaging modalities was driven by leading innovators in the field of clinical research and benefited from technological advances that permitted the operation of PET and MR in close physical proximity, for example. This review covers milestones of the development of various hybrid imaging systems for use in clinical practice and small-animal research. Special attention is given to technological advances that helped the adoption of hybrid imaging, as well as to introducing methodological concepts that benefit from the availability of complementary anatomical and biological information, such as new types of image reconstruction and data correction schemes. The ultimate goal of hybrid imaging is to provide useful, complementary and quantitative information during patient work-up. Hybrid imaging also opens the door to multi-parametric assessment of diseases, which will help us better understand the causes of various diseases that currently contribute to a large fraction of healthcare costs.

  5. The South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1989-01-01

    The research work discussed in this report covers a wide range, from work on the nearest stars to studies of the distant quasars, and the astronomers who have carried out this work come from universities and observatories spread around the world as well as from South African universities and from the South African Astronomical Observatory (SAAO) staff itself. A characteristic of much of this work has been its collaborative character. SAAO studies in 1989 included: supernova 1987A; galaxies; ground-based observations of celestial x-ray sources; the Magellanic Clouds; pulsating variables; galactic structure; binary star phenomena; the provision of photometric standards; nebulous matter; stellar astrophysics; and astrometry

  6. Comparative study of image restoration techniques in forensic image processing

    Science.gov (United States)

    Bijhold, Jurrien; Kuijper, Arjan; Westhuis, Jaap-Harm

    1997-02-01

    In this work we investigated the forensic applicability of some state-of-the-art image restoration techniques for digitized video-images and photographs: classical Wiener filtering, constrained maximum entropy, and some variants of constrained minimum total variation. Basic concepts and experimental results are discussed. Because all methods appeared to produce different results, a discussion is given of which method is the most suitable, depending on the image objects that are questioned, prior knowledge and type of blur and noise. Constrained minimum total variation methods produced the best results for test images with simulated noise and blur. In cases where images are the most substantial part of the evidence, constrained maximum entropy might be more suitable, because its theoretical basis predicts a restoration result that shows the most likely pixel values, given all the prior knowledge used during restoration.
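
    Of the restoration methods compared, classical Wiener filtering is the most compact to illustrate. The following numpy sketch is a hedged illustration, not the authors' implementation: it assumes circular convolution and uses a scalar noise-to-signal ratio `nsr` in place of a frequency-dependent one, and the PSF and image are invented for the demo.

```python
import numpy as np

def _pad_psf(psf, shape):
    """Zero-pad a PSF to the image shape and centre it at the origin."""
    out = np.zeros(shape)
    pr, pc = psf.shape
    out[:pr, :pc] = psf
    return np.roll(out, (-(pr // 2), -(pc // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Classical Wiener filter: W = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(_pad_psf(psf, blurred.shape))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Demo: blur a square with a 5x5 box PSF (circular convolution), then restore.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(_pad_psf(psf, img.shape))))
restored = wiener_deconvolve(blurred, psf)
print(np.abs(restored - img).mean() < np.abs(blurred - img).mean())   # True
```

    The paper's point survives even in this toy setting: the result depends on how well the assumed blur and noise model match the data, which is why the authors recommend different methods depending on the prior knowledge available.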

  7. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of the image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields, and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious one; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  8. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

    Information on medical images is classified into two patterns: 1) off-line images on film (x-ray films, cell images, chromosome images, etc.); 2) on-line images detected through sensors (RI images, ultrasonic images, thermograms, etc.). These images are divided into three types: two-dimensional, three-dimensional, and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many image types have been studied: RI, thermogram, x-ray film, x-ray TV, cancer cell, blood cell, bacteria, chromosome, ultrasonic, and vascular images. Processing of RI images is useful and easy because of their digital form. Software covers smoothing, restoration (iterative approximation), Fourier transformation, differentiation, and subtraction. Images on stomach and chest x-ray films have been processed automatically using computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells and, recently, for blood cell classification have also been developed. Acoustical holography imaging and moire topography have also been studied in Japan. (author)
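
    The software operations listed (smoothing, differentiation, subtraction) are simple neighbourhood and pixel-wise operations. A minimal numpy sketch, with invented function names, might look like this:

```python
import numpy as np

def smooth(img):
    """3x3 mean filter (smoothing), with edge-replication padding."""
    p = np.pad(img.astype(float), 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def differentiate(img):
    """Gradient magnitude from first differences (edge enhancement)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def subtract(img, background):
    """Pixel-wise subtraction, e.g. removing a background image."""
    return img.astype(float) - background.astype(float)

flat = np.full((5, 5), 10.0)
assert np.allclose(smooth(flat), flat)        # smoothing preserves a flat image
assert np.allclose(differentiate(flat), 0.0)  # no edges, no gradient
```

    Restoration by iterative approximation and Fourier-domain filtering, also mentioned in the abstract, build on the same array operations but are substantially more involved.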

  9. Fake currency detection using image processing

    Science.gov (United States)

    Agasti, Tushar; Burand, Gajanan; Wade, Pratik; Chitra, P.

    2017-11-01

    The advancement of color printing technology has increased the rate of fake currency note printing and duplication of notes on a very large scale. A few years back, such printing could only be done in a print house, but now anyone can print a currency note with high accuracy using a simple laser printer. As a result, the circulation of fake notes in place of genuine ones has increased greatly. India has unfortunately been cursed with problems like corruption and black money, and counterfeiting of currency notes adds to them. This motivates the design of a system that detects fake currency notes quickly and efficiently. The proposed system gives an approach to verify Indian currency notes. Verification of a currency note is done using the concepts of image processing. This article describes the extraction of various features of Indian currency notes. MATLAB software is used to extract the features of the note. The proposed system has advantages such as simplicity and high processing speed. The result predicts whether the currency note is fake or not.

  10. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment.

  11. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  12. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  13. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David; Gereige, Issam; Gourgon, Cécile

    2013-01-01

    patterns, sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to detect defects numerically in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications

  14. The accidental astronomer

    Indian Academy of Sciences (India)

    Lawrence

    Research Centre (BARC), Tata Institute of Fundamental Research ... to make my Ph.D. a stressful, lengthy process, but I persisted and ... An American graduate student, as- ... prospects of getting a job in Bhubaneswar or its vicinity, as as-.

  15. Image processing techniques for digital orthophotoquad production

    Science.gov (United States)

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  16. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in the multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  17. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang , Chengjun; Hua , Chunjian

    2013-01-01

    From the perspective of actual production, the paper presents advances in the application of image processing to fruit grading from several aspects, such as the processing precision and processing speed of image processing technology. Furthermore, effectively combining the different algorithms for detecting size, shape, color and defects, so as to reduce the complexity of each algorithm and achieve a balance between processing precision and processing speed, is key to au...

  18. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for mapping a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution for improving body scan images is to use dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared
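    The simplest of the processing methods such a study compares is neighborhood averaging, which trades spatial resolution for improved counting statistics. A minimal sketch (the kernel size and test data are assumptions, not taken from the report):

```python
import numpy as np

def smooth3x3(img):
    """Smooth a 2D count image with a 3x3 mean filter (edge-replicated).

    Noisy radioisotope scan data benefit from simple neighborhood
    averaging: each pixel becomes the mean of its 3x3 neighborhood.
    """
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

# A flat field with one noisy spike: smoothing spreads the spike out.
scan = np.full((5, 5), 10.0)
scan[2, 2] = 100.0
smoothed = smooth3x3(scan)   # spike of 100 becomes (8*10 + 100)/9 = 20
```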

  19. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis and user-interactive segmentation. Implementation was achieved in a microcomputer with a video image capture system. Results are compared with those obtained through more specialized equipment such as mainframe computers and high-precision mechanical scanning digitisers. The procedures are intended as a previous stage for automatic defect detection
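    Histogram modification, the first enhancement step named above, is classically done by histogram equalization; a NumPy sketch (the 2x2 test image is an assumption for illustration):

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Histogram-equalize an 8-bit grayscale image.

    Equalization spreads the occupied gray levels across the full range,
    so low-contrast radiographic detail becomes easier to see.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero of the CDF
    # Classic equalization mapping, scaled back to 0..levels-1.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]

# A low-contrast image occupying levels 100..103 is stretched to 0..255.
img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
eq = equalize_hist(img)
```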

  20. An integral design strategy combining optical system and image processing to obtain high resolution images

    Science.gov (United States)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals which are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously gaining high resolution images, and it has a promising perspective for industrial application.
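    The Wiener filtering step can be sketched in the frequency domain; this is a minimal NumPy implementation, not the paper's actual ZEMAX-coupled pipeline, and the PSF, noise-to-signal ratio and test image are all assumed for illustration.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener filter, a minimal sketch.

    `nsr` is the assumed noise-to-signal power ratio; in the paper's
    framework the MSE criterion would be used to tune it.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Blur a point source with a 2x2 box PSF, then restore it.
img = np.zeros((16, 16))
img[8, 8] = 1.0
psf = np.array([[0.25, 0.25], [0.25, 0.25]])
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-6)
```

Frequencies where the PSF response is near zero cannot be recovered, so the restored peak is slightly below 1.0; raising `nsr` trades sharpness for noise robustness.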

  1. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  2. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on estimates of the MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications introduced by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be taken into account in the same way when characterizing image quality. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  3. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on estimates of the MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications introduced by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be taken into account in the same way when characterizing image quality. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
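    The NPS measurement from white (flat-field) images can be sketched as averaging squared FFT magnitudes over detrended ROIs, in the spirit of the IEC 62220-1 estimate; the ROI size, pixel pitch and synthetic noise below are assumptions for illustration, not the study's acquisition parameters.

```python
import numpy as np

def nps_2d(flat_images, roi=64, px=0.1):
    """Estimate the 2D noise power spectrum from flat-field images.

    Minimal sketch: subtract each ROI's mean (crude detrending),
    average the squared FFT magnitudes, scale by pixel area over ROI
    size. `px` is the assumed pixel pitch in mm.
    """
    acc = np.zeros((roi, roi))
    n = 0
    for img in flat_images:
        for y in range(0, img.shape[0] - roi + 1, roi):
            for x in range(0, img.shape[1] - roi + 1, roi):
                block = img[y:y + roi, x:x + roi].astype(float)
                block -= block.mean()            # remove DC offset
                acc += np.abs(np.fft.fft2(block)) ** 2
                n += 1
    return acc * (px * px) / (n * roi * roi)

# For uncorrelated Gaussian noise the NPS is flat at sigma^2 * pixel area.
rng = np.random.default_rng(1)
flat = [rng.normal(100.0, 2.0, (128, 128)) for _ in range(8)]
nps = nps_2d(flat, roi=64, px=0.1)   # mean should sit near 4 * 0.01
```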

  4. Astronomical Spectroscopy A Short History

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 3; Issue 5. Astronomical Spectroscopy A Short History. J C Bhattacharyya. General Article Volume 3 Issue 5 May 1998 pp 24-29. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/003/05/0024-0029 ...

  5. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern medical diagnostic imaging devices, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes the 'main stream' digital signal processing functions, along with the associated implementation considerations, in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing

  6. An invertebrate embryologist's guide to routine processing of confocal images.

    Science.gov (United States)

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
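    Two of the routine tasks named above, background suppression and channel merging, can be sketched in a few lines; the percentile-based background estimate and the toy two-channel data are assumptions for illustration, not the chapter's prescribed workflow.

```python
import numpy as np

def subtract_background(channel, percentile=5):
    """Suppress a uniform background by subtracting a low percentile.

    A conservative confocal cleanup: estimate the background from the
    dim pixels rather than hand-picking a blank region.
    """
    bg = np.percentile(channel, percentile)
    return np.clip(channel - bg, 0, None)

def merge_two_channels(red, green):
    """False-color two single-channel images into one RGB stack."""
    rgb = np.zeros(red.shape + (3,), dtype=float)
    rgb[..., 0] = red
    rgb[..., 1] = green
    return rgb

# Toy 2x2 channels with flat backgrounds of 10 and 5 respectively.
ch1 = np.array([[10.0, 10.0], [10.0, 60.0]])
ch2 = np.array([[5.0, 5.0], [40.0, 5.0]])
merged = merge_two_channels(subtract_background(ch1),
                            subtract_background(ch2))
```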

  7. The Study of Image Processing Method for AIDS PA Test

    International Nuclear Information System (INIS)

    Zhang, H J; Wang, Q G

    2006-01-01

    At present, the main test technique for AIDS in China is the PA test. Because judgment of the PA test image still depends on the operator, the error rate is high. To resolve this problem, we present a new image processing technique, which first processes many samples to obtain data including the coordinates of the centers and the ranges of the image classes; the image is then segmented with these data, and finally the result is exported after the data are judged. This technique is simple and accurate, and it also turns out to be suitable for processing and analyzing the PA test images of other infectious diseases

  8. High angular resolution diffusion imaging : processing & visualization

    NARCIS (Netherlands)

    Prckovska, V.

    2010-01-01

    Diffusion tensor imaging (DTI) is a recent magnetic resonance imaging (MRI) technique that can map the orientation architecture of neural tissues in a completely non-invasive way by measuring the directional specificity (anisotropy) of the local water diffusion. However, in areas of complex fiber

  9. Latin American astronomers and the International Astronomical Union

    Science.gov (United States)

    Torres-Peimbert, S.

    2017-07-01

    Selected aspects of the participation of the Latin American astronomers in the International Astronomical Union are presented: Membership, Governing bodies, IAU meetings, and other activities. The Union was founded in 1919 with 7 initial member states, soon to be followed by Brazil. In 1921 Mexico joined, and in 1928 Argentina also formed part of the Union, while Chile joined in 1947. In 1961 Argentina, Brazil, Chile, Mexico and Venezuela were already member countries. At present (October 2016) 72 countries contribute financially to the Union. The Union lists 12,391 professional astronomers as individual members; of those, 692 astronomers work in Latin America and the Caribbean, from 13 member states (Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Honduras, Mexico, Panamá, Perú, Uruguay and Venezuela) as well as from Ecuador and Puerto Rico. This group comprises 5.58% of the total membership, a figure somewhat lower than the fraction of the population in the region, which is 8.6% of the world population. Of the Latin American members, 23.4% are women and 76.6% are men; the fraction of women is slightly higher than in the whole membership of the Union, which is 16.9%. Among the governing bodies it can be mentioned that there have been 2 Presidents of the Union (Jorge Sahade and Silvia Torres-Peimbert) and 7 Vice-Presidents (Guillermo Haro, Jorge Sahade, Manuel Peimbert, Claudio Anguita, Silvia Torres-Peimbert, Beatriz Barbuy, and Marta G. Rovira). The IAU meetings held in the region include 2 General Assemblies (the 1991 XXI GA took place in Buenos Aires, Argentina and the 2009 XXVII GA in Rio de Janeiro, Brazil), 15 Regional Meetings (in Argentina, Brazil, Chile, Colombia, Mexico, Venezuela and Uruguay), 29 Symposia (in Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Peru and Mexico), 5 Colloquia (in Argentina and Mexico), 8 International Schools for Young Astronomers (in Argentina, Brazil, Cuba, Honduras and Mexico), and 11 projects sponsored by the Office of Astronomy

  10. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  12. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

    In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to the anomalies present in parking spaces, such as uneven illumination, distorted slot lines and overlapping cars, present-day conventional algorithms have difficulty processing the image for accurate results. The proposed algorithm uses a combination of image pre-processing and false contour detection ...

  13. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department or a scientist and engineer who is doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  14. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed preliminarily in macroscopic regions and then analyzed thoroughly in microscopic regions. With this method, an image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.
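    The Sobel step named in the abstract is standard; a minimal NumPy implementation (the step-edge test image is an assumption, and the fractal region-selection stage is not reproduced here):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the Sobel operator.

    Naive per-pixel loop for clarity; borders are left at zero.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    f = img.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(win * kx)      # horizontal gradient
            gy = np.sum(win * ky)      # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge: the response peaks on the two edge columns.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
```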

  15. The astronomical tables of Giovanni Bianchini

    CERN Document Server

    Chabas, Jose

    2009-01-01

    This book describes and analyses, for the first time, the astronomical tables of Giovanni Bianchini of Ferrara (d. after 1469), explains their context, inserts them into an astronomical tradition that began in Toledo, and addresses their diffusion.

  16. Algorithms of image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Oliveira, V.A.

    1990-01-01

    The problem of image restoration from noisy measurements as encountered in Nuclear Medicine is considered. A new approach for treating the measurements wherein they are represented by a spatial noncausal interaction model prior to maximum entropy restoration is given. This model describes the statistical dependence among the image values and their neighbourhood. The particular application of the algorithms presented here relates to gamma ray imaging systems, and is aimed at improving the resolution-noise suppression product. Results for actual gamma camera data are presented and compared with more conventional techniques. (author)

  17. Algorithms for classification of astronomical object spectra

    Science.gov (United States)

    Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.

    2015-09-01

    Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectrum decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks utilising C++ and OpenCL methods together with a NoSQL database. We also implemented typical astronomical methods of detecting peaks for comparison with our previous hybrid methods implemented with CUDA.
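    The Gaussian characterization of an emission peak can be sketched without a full least-squares solver by using weighted moments, which for a clean line recover the same parameters the paper fits; this NumPy stand-in and its synthetic spectrum are illustrative assumptions, not the paper's C++/OpenCL implementation.

```python
import numpy as np

def gaussian_moments(x, y):
    """Estimate Gaussian peak parameters (amplitude, center, sigma)
    from a spectrum segment by flux-weighted moments.

    For a noiseless, well-sampled peak the moment estimates coincide
    with the least-squares Gaussian fit parameters.
    """
    w = y / y.sum()                     # flux weights
    mu = np.sum(w * x)                  # first moment: line center
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
    amp = y.max()
    return amp, mu, sigma

# Synthetic emission line on a wavelength-like grid (values assumed).
x = np.linspace(-5, 5, 201)
y = 3.0 * np.exp(-0.5 * ((x - 1.0) / 0.8) ** 2)
amp, mu, sigma = gaussian_moments(x, y)   # ~ (3.0, 1.0, 0.8)
```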

  18. Astronomical Heritage in the National Culture

    Science.gov (United States)

    Harutyunian, H. A.; Mickaelian, A. M.; Parsamian, E. S.

    2014-10-01

    The book contains Proceedings of the Archaeoastronomical Meeting "Astronomical Heritage in the National Culture" Dedicated to Anania Shirakatsi's 1400th Anniversary and XI Annual Meeting of the Armenian Astronomical Society. It consists of 3 main sections: "Astronomical Heritage", "Anania Shirakatsi" and "Modern Astronomy", as well as Literature about Anania Shirakatsi is included. The book may be interesting for astronomers, historians, archaeologists, linguists, students and other readers.

  19. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available The article shows an implementation of the grayscale-morphology vHGW algorithm for border selection in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images with different resolutions and different sizes of the structuring element.
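    The border selection the article performs rests on grayscale erosion and dilation with a line structuring element, whose difference (the morphological gradient) lights up at borders. The sketch below uses a naive O(n·k) sliding min/max in NumPy; the vHGW algorithm the article implements computes the same result in O(n) per row regardless of the element size, and the test image is assumed.

```python
import numpy as np

def erode_dilate_1d(row, k):
    """Grayscale erosion/dilation of one row with a length-k line SE.

    Naive sliding min/max; vHGW achieves the identical output with a
    constant number of comparisons per pixel.
    """
    n = len(row)
    r = k // 2
    ero = np.empty(n)
    dil = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        ero[i] = row[lo:hi].min()
        dil[i] = row[lo:hi].max()
    return ero, dil

def morph_gradient_rows(img, k=3):
    """Morphological gradient (dilation - erosion): bright at borders."""
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        ero, dil = erode_dilate_1d(img[y].astype(float), k)
        out[y] = dil - ero
    return out

# A horizontal step: the gradient fires on the two columns at the border.
img = np.array([[0, 0, 9, 9, 9],
                [0, 0, 9, 9, 9]], dtype=float)
grad = morph_gradient_rows(img, k=3)
```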

  20. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today...images were processed using ImageJ, an open-source, Java-based image processing software available from the National Institute of Health (NIH). The

  1. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  2. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software language for image processing was Macro-Assembler under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskette. Image processing functions included display of the image on a monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection, and the color histogram; each was obtained in 1.6 s to 67 s, indicating that a commercial personal computer has the ability to handle routine clinical MRI processing. (author)
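    Unsharp-mask contrast enhancement, one of the functions listed, adds a scaled difference between the original and a blurred copy back to the original; a minimal NumPy sketch (the 3x3 mean blur, the `amount` gain and the step-edge test image are assumptions, not the paper's assembler implementation):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Unsharp-mask contrast enhancement.

    sharpened = original + amount * (original - blurred), where the
    blur here is a 3x3 mean filter with edge replication.
    """
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")
    blur = np.zeros(f.shape)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            blur += p[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    blur /= 9.0
    return f + amount * (f - blur)

# A step edge gains overshoot on both sides, which reads as sharpening.
img = np.array([[10.0, 10.0, 90.0, 90.0]] * 3)
sharp = unsharp_mask(img, amount=1.0)
```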

  3. Sketching the moon an astronomical artist's guide

    CERN Document Server

    Handy, Richard; McCague, Thomas; Rix, Erika; Russell, Sally

    2012-01-01

    Soon after you begin studying the sky through your small telescope or binoculars, you will probably be encouraged by others to make sketches of what you see. Sketching is a time-honored tradition in amateur astronomy and dates back to the earliest times, when telescopes were invented. Even though we have lots of new imaging technologies nowadays, including astrophotography, most observers still use sketching to keep a record of what they see, make them better observers, and in hopes of perhaps contributing something to the body of scientific knowledge about the Moon. Some even sketch because it satisfies their artistic side. The Moon presents some unique challenges to the astronomer-artist, the Moon being so fond of tricks of the light. Sketching the Moon: An Astronomical Artist’s Guide, by five of the best lunar observer-artists working today, will guide you along your way and help you to achieve really high-quality sketches. All the major types of lunar features are covered, with a variety of sketching te...

  4. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  5. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  6. ALOHA—Astronomical Light Optical Hybrid Analysis - From experimental demonstrations to a MIR instrument proposal

    Science.gov (United States)

    Lehmann, L.; Darré, P.; Szemendera, L.; Gomes, J. T.; Baudoin, R.; Ceus, D.; Brustlein, S.; Delage, L.; Grossard, L.; Reynaud, F.

    2018-04-01

    This paper gives an overview of the Astronomical Light Optical Hybrid Analysis (ALOHA) project dedicated to investigate a new method for high resolution imaging in mid infrared astronomy. This proposal aims to use a non-linear frequency conversion process to shift the thermal infrared radiation to a shorter wavelength domain compatible with proven technology such as guided optics and detectors. After a description of the principle, we summarise the evolution of our study from the high flux seminal experiments to the latest results in the photon counting regime.

  7. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse image data quantitatively. Generally, image analysis has to be done visually and measurements made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for periodic analysis of image texture and structure by applying the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and analysis of microscopic images.
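
The Fourier-based periodicity analysis described above can be illustrated with a short sketch. This is not the paper's implementation; the function name, profile, and period are hypothetical, and a real analysis would work on 2-D micrographs rather than a synthetic 1-D profile.

```python
import numpy as np

def dominant_period(profile):
    """Estimate the dominant spatial period (in pixels) of a 1-D
    intensity profile via the discrete Fourier transform."""
    p = np.asarray(profile, dtype=float)
    p = p - p.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(p.size)        # cycles per pixel
    peak = np.argmax(spectrum[1:]) + 1     # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic "lattice fringe" profile with an 8-pixel period plus noise:
x = np.arange(256)
profile = np.sin(2 * np.pi * x / 8) + 0.1 * np.random.default_rng(0).standard_normal(256)
period = dominant_period(profile)          # close to 8.0
```

The same idea extends to 2-D with `np.fft.fft2`, where peak positions in the power spectrum give both the period and the orientation of a periodic structure.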

  8. Apparatus and method for X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method of X-ray image processing. The radiation passed through the object is transformed into an electric image signal, from which the logarithmic value is determined and displayed by a display device. The main objective is to provide a method and apparatus that render X-ray images or X-ray subtraction images with a strong reduction of stray radiation. (Auth.)

  9. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for researchers from the evolutionary computation, artificial intelligence and image processing co...

  10. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold speed-up is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  11. Statistical processing of large image sequences.

    Science.gov (United States)

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
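
To see why the full Kalman filter is impractical at this scale: a 512 x 512 image implies a state of 262,144 elements, and hence a covariance matrix with roughly 6.9e10 entries. The sketch below is not the authors' reduced-order method; it applies a scalar Kalman update independently at each pixel (ignoring all spatial correlation), with assumed noise variances, purely to illustrate the predict/update recursion being emulated.

```python
import numpy as np

def pixelwise_kalman_update(x_prev, P_prev, z, Q=0.01, R=0.1):
    """One predict/update step of a scalar Kalman filter run
    independently at every pixel (identity dynamics assumed).
    Q and R are assumed process/measurement noise variances."""
    x_pred = x_prev                    # predict
    P_pred = P_prev + Q
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # update with the new frame
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Track a constant 512 x 512 "sea-surface temperature" field from noisy frames.
rng = np.random.default_rng(1)
truth = np.full((512, 512), 20.0)
x, P = np.zeros_like(truth), np.ones_like(truth)
for _ in range(50):
    z = truth + rng.normal(0.0, 0.3, truth.shape)
    x, P = pixelwise_kalman_update(x, P, z)
```

The estimate converges to the underlying field while the per-pixel variance shrinks toward a steady state; the paper's contribution is to retain spatial correlation (via a mixture of stationary models) without ever forming the full covariance matrix.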

  12. National Astronomical Observatory of Japan

    CERN Document Server

    Haubold, Hans J; UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan

    2010-01-01

    This book represents Volume II of the Proceedings of the UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan, Tokyo, 18 - 22 June, 2007. It covers two programme topics explored in this and past workshops of this nature: (i) non-extensive statistical mechanics as applicable to astrophysics, addressing q-distribution, fractional reaction and diffusion, and the reaction coefficient, as well as the Mittag-Leffler function and (ii) the TRIPOD concept, developed for astronomical telescope facilities. The companion publication, Volume I of the proceedings of this workshop, is a special issue in the journal Earth, Moon, and Planets, Volume 104, Numbers 1-4, April 2009.

  13. Astronomical Data and Information Visualization

    Science.gov (United States)

    Goodman, Alyssa A.

    2010-01-01

    As the size and complexity of data sets increases, the need to "see" them more clearly increases as well. In the past, many scientists saw "fancy" data and information visualization as necessary for "outreach," but not for research. In this talk, I will demonstrate, using specific examples, why more and more scientists--not just astronomers--are coming to rely upon the development of new visualization strategies not just to present their data, but to understand it. Principal examples will be drawn from the "Astronomical Medicine" project at Harvard's Initiative in Innovative Computing, and from the "Seamless Astronomy" effort, which is co-sponsored by the VAO (NASA/NSF) and Microsoft Research.

  14. Astronomical optics and elasticity theory

    CERN Document Server

    Lemaitre, Gerard Rene

    2008-01-01

    Astronomical Optics and Elasticity Theory provides a very thorough and comprehensive account of what is known in this field. After an extensive introduction to optics and elasticity, the book discusses variable curvature and multimode deformable mirrors, as well as, in depth, active optics, its theory and applications. Further, optical design utilizing the Schmidt concept and various types of Schmidt correctors, as well as the elasticity theory of thin plates and shells are elaborated upon. Several active optics methods are developed for obtaining aberration corrected diffraction gratings. Further, a weakly conical shell theory of elasticity is elaborated for the aspherization of grazing incidence telescope mirrors. The very didactic and fairly easy-to-read presentation of the topic will enable PhD students and young researchers to actively participate in challenging astronomical optics and instrumentation projects.

  15. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance real-time image processing platform, is systematized. ImageFlow example programs for running the MaxSparc system are studied and analyzed, as is the state of the art in Datacube system utilization. In the next phase, an advanced real-time image processing platform for robot vision applications is to be developed. (author). 19 refs., 71 figs., 11 tabs.

  16. Automated processing of X-ray images in medicine

    International Nuclear Information System (INIS)

    Babij, Ya.S.; B'yalyuk, Ya.O.; Yanovich, I.A.; Lysenko, A.V.

    1991-01-01

    Theoretical and practical achievements in the application of computing technology to the processing of X-ray images in medicine are generalized. A scheme of the main directions and tasks of X-ray image processing is given and analyzed. The principal problems that arise in automated processing of X-ray images are distinguished. It is shown that for the interpretation of X-ray images it is expedient to introduce the notion of a relative operating characteristic (ROC) of a roentgenologist. Every point on the ROC curve determines the individual criterion of the roentgenologist for making a positive diagnosis in a definite situation.

  17. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting filter. Since the images sampled by the AGV have different optimal threshold segmentation points, two-dimensional maximum-entropy image segmentation is used to solve this problem. We extract the foreground image in the target band by calculating contour areas and obtain the centre line with a least-squares fitting algorithm. With the mapping between image and physical coordinates, we can obtain the guidance information.

  18. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise, so a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the sub-pixel-accuracy centroid location of an image object in each frame. We can measure drifts down to approximately 10 nm in size, and a drift-compensated image can therefore be reconstructed on a grid of the same size using the “Shift and Add” approach, leading to an image of identical size to the individual images. We have also reconstructed the image using a 3-fold larger...
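
The centroid-based drift measurement and “Shift and Add” reconstruction described above can be sketched as follows. This is a minimal integer-pixel illustration with synthetic frames, not the authors' code; sub-pixel shifts and the 3-fold finer grid would require interpolation.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a frame, with sub-pixel accuracy."""
    ys, xs = np.indices(img.shape)
    t = img.sum()
    return np.array([(ys * img).sum() / t, (xs * img).sum() / t])

def shift_and_add(frames, ref=0):
    """Align each frame to the reference frame's centroid by an
    integer-pixel shift, then average (wrap-around borders)."""
    ref_c = centroid(frames[ref])
    out = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        dy, dx = np.round(ref_c - centroid(f)).astype(int)
        out += np.roll(f, (dy, dx), axis=(0, 1))
    return out / len(frames)

# Demo: a Gaussian spot drifting one pixel per frame along the diagonal.
y, x = np.mgrid[0:64, 0:64]
frames = [np.exp(-((y - 32 - k) ** 2 + (x - 32 - k) ** 2) / 8.0) for k in range(10)]
stacked = shift_and_add(frames)      # peak restored to full height
```

Averaging the raw frames smears the drifting spot, while the shift-compensated stack recovers the full peak intensity.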

  19. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    IntroductionTodd R. ReedCONTENT-BASED IMAGE SEQUENCE REPRESENTATIONPedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, andCharnchai PluempitiwiriyawejTHE COMPUTATION OF MOTIONChristoph Stiller, Sören Kammel, Jan Horn, and Thao DangMOTION ANALYSIS AND DISPLACEMENT ESTIMATION IN THE FREQUENCY DOMAINLuca Lucchese and Guido Maria CortelazzoQUALITY OF SERVICE ASSESSMENT IN NEW GENERATION WIRELESS VIDEO COMMUNICATIONSGaetano GiuntaERROR CONCEALMENT IN DIGITAL VIDEOFrancesco G.B. De NataleIMAGE SEQUENCE RESTORATION: A WIDER PERSPECTIVEAnil KokaramVIDEO SUMMARIZATIONCuneyt M. Taskiran and Edward

  20. Some results of automatic processing of images

    International Nuclear Information System (INIS)

    Golenishchev, I.A.; Gracheva, T.N.; Khardikov, S.V.

    1975-01-01

    The problems of automatic deciphering of radiographic pictures, the purpose of which is to draw a conclusion about the quality of the inspected product from the defect images in the picture, are considered. The methods of defect image recognition are listed, and the algorithms and the class features of defects are described. The results of deciphering a small radiographic picture by means of the ''Minsk-22'' computer are presented. It is established that the sensitivity of the automatic deciphering method is close to that obtained with visual deciphering.

  1. Astronomical Research Institute Photometric Results

    Science.gov (United States)

    Linder, Tyler R.; Sampson, Ryan; Holmes, Robert

    2013-01-01

    The Astronomical Research Institute (ARI) conducts astrometric and photometric studies of asteroids with a concentration on near-Earth objects (NEOs). A 0.76-m autoscope was used for photometric studies of seven asteroids of which two were main-belt targets and five were NEOs, including one potentially hazardous asteroid (PHA). These objects are: 3122 Florence, 3960 Chaliubieju, 5143 Heracles, (6455) 1992 HE, (36284) 2000 DM8, (62128) 2000 SO1, and 2010 LF86.

  2. The South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1988-01-01

    The geographical position, climate and equipment at the South African Astronomical Observatory (SAAO), together with the enthusiasm and efforts of SAAO scientific and technical staff and of visiting scientists, have enabled the Observatory to make a major contribution to the fields of astrophysics and cosmology. During 1987 the SAAO was involved in studies of the following: supernovae; galaxies, including Seyfert galaxies; celestial X-ray sources; the Magellanic Clouds; pulsating variables; galactic structure; binary star phenomena; nebulae; interstellar matter and stellar astrophysics.

  3. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang , Aiguo; Luo , Chagen; Zhou , Chao

    2010-01-01

    Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to pavement crack detection for its advantages of handling large amounts of information and enabling automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...

  4. Bayesian image processing in two and three dimensions

    International Nuclear Information System (INIS)

    Hart, H.; Liang, Z.

    1986-01-01

    Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different ''focal'' depths. Bayesian image processing (BIP) was carried out for two and three dimensional spatially uncorrelated discrete amplitude a priori source distributions

  5. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

    The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80, and proposes optimized image algorithms and a parallel processing method based on this parallel DSP. Real-time performance for many image processing tasks can be achieved in this way.

  6. IDAPS (Image Data Automated Processing System) System Description

    Science.gov (United States)

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). The system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  7. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    To improve the learning process from the input data, a new learning rule was suggested. In this paper, a new method is proposed to deal with the RGB color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of ...

  8. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  9. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  10. Astronomical publications of Melbourne Observatory

    Science.gov (United States)

    Andropoulos, Jenny Ioanna

    2014-05-01

    During the second half of the 19th century and the first half of the 20th century, four well-equipped government observatories were maintained in Australia - in Melbourne, Sydney, Adelaide and Perth. These institutions conducted astronomical observations, often in the course of providing a local time service, and they also collected and collated meteorological data. As well, some of these observatories were involved at times in geodetic surveying, geomagnetic recording, gravity measurements, seismology, tide recording and physical standards, so the term "observatory" was being used in a rather broad sense! Despite the international renown that once applied to Williamstown and Melbourne Observatories, relatively little has been written by modern-day scholars about astronomical activities at these observatories. This research is intended to rectify this situation to some extent by gathering, cataloguing and analysing the published astronomical output of the two Observatories to see what contributions they made to science and society. It also compares their contributions with those of Sydney, Adelaide and Perth Observatories. Overall, Williamstown and Melbourne Observatories produced a prodigious amount of material on astronomy in scientific and technical journals, in reports and in newspapers. The other observatories more or less did likewise, so no observatory of those studied markedly outperformed the others in the long term, especially when account is taken of their relative resourcing in staff and equipment.

  11. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Science.gov (United States)

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide image analysis is recognized as among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.

  12. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide image analysis is recognized as among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.

  13. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was produced by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
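
For reference, the CDF-remapping idea behind histogram equalization (the basis of the CLAHE ground truth mentioned above) can be sketched as follows. This is plain global equalization, not the contrast-limited adaptive variant used in the paper, and the image values are synthetic.

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Plain global histogram equalization via CDF remapping.
    (CLAHE additionally tiles the image and clips the histogram.)"""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()                 # first occupied bin
    scaled = (cdf - cdf_min) / (cdf[-1] - cdf_min)
    lut = np.clip(np.round(scaled * (levels - 1)), 0, levels - 1).astype(np.uint8)
    return lut[img]                              # apply the lookup table

# A low-contrast image (values 100-139) stretched to the full range:
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
out = equalize_hist(low_contrast)
```

The equalized output spans the full 0-255 range with higher contrast; CLAHE performs the same remapping per tile, with a clip limit to avoid amplifying noise.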

  14. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
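
A textbook Gaussian-process regression step of the kind employed above can be sketched for a 1-D scanline. This omits the paper's energy-driven term and noise-reduction machinery; the squared-exponential kernel, length scale, and noise level are assumed values.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_test, length=1.0, noise=1e-3):
    """Posterior mean of GP regression with a squared-exponential
    kernel; `length` and `noise` are assumed hyperparameters."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Upsample a 1-D "scanline" by a factor of two:
x_lo = np.arange(0.0, 10.0)            # low-resolution sample positions
y_lo = np.sin(x_lo)                    # observed intensities
x_hi = np.arange(0.0, 9.01, 0.5)       # doubled sampling density
y_hi = gp_interpolate(x_lo, y_lo, x_hi)
```

The posterior mean passes (almost) through the observed samples and fills in plausible intermediate values; for 2-D images the same machinery is applied to pixel neighborhoods.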

  15. Graphical user interface for image acquisition and processing

    Science.gov (United States)

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  16. Image processing. Volumetric analysis with a digital image processing system [GAMMA]

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections, describing various applications of volumetric analysis using image processing systems, and various methods of diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation, and echocardiography. A dynamic ventricular phantom is explained that has been developed for checking and calibration, to ensure safe examination of patients; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, as well as a standardized nuclear medicine diagnostic system and a computer system especially suited to image processing.

  17. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
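
The flatfield (uneven illumination) correction step mentioned above follows a classic formula: subtract the dark frame from both the raw image and the flat-field exposure, then divide by the normalized flat. A minimal sketch, with synthetic frames standing in for real acquisitions:

```python
import numpy as np

def flatfield_correct(raw, flat, dark):
    """Divide out the per-pixel illumination gain estimated from a
    flat-field exposure, after dark-frame subtraction of both."""
    flat_d = flat.astype(float) - dark
    raw_d = raw.astype(float) - dark
    gain = flat_d / flat_d.mean()      # per-pixel illumination gain
    return raw_d / gain

# Synthetic test: a uniform scene seen through a vignetting optic.
yy, xx = np.mgrid[0:32, 0:32]
vignette = 1.0 - 0.5 * ((yy - 16) ** 2 + (xx - 16) ** 2) / 512.0
dark = np.full((32, 32), 5.0)
flat = 200.0 * vignette + dark         # flat-field exposure
raw = 100.0 * vignette + dark          # uniform scene, vignetted
corrected = flatfield_correct(raw, flat, dark)   # uniform again
```

Because the flat records the same gain pattern as the raw image, the division restores a uniform field; in practice the flat should be averaged from several exposures to avoid injecting its own noise.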

  18. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: the CCD220 detector. ESO PR Photo 22b/09: the OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component of the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  19. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase, to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is one of the intermediate levels in image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection phase consists of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach: the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
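
The region-growing step named in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' implementation; the 2-D "CT slice", the seed point, and the intensity tolerance are all assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# Toy "CT slice": a bright nodule-like blob on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
mask = region_grow(img, seed=(4, 4), tol=0.1)
print(mask.sum())  # 16: exactly the 4x4 bright blob
```

A real pipeline would apply this (or marker-controlled watershed) after Gabor-filter enhancement, then extract features from the resulting mask.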

  20. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
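
The core idea, interpolation uncertainty as the Gaussian process posterior variance, can be illustrated in one dimension. This is a generic sketch with an assumed squared-exponential kernel and jitter, not the paper's model:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(x_grid, y_grid, x_star, noise=1e-6):
    """GP posterior mean and variance at resample locations `x_star`,
    conditioned on intensities `y_grid` observed on the base grid."""
    K = rbf(x_grid, x_grid) + noise * np.eye(len(x_grid))
    Ks = rbf(x_star, x_grid)
    Kss = rbf(x_star, x_star)
    mean = Ks @ np.linalg.solve(K, y_grid)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.arange(5.0)          # base pixel grid
y = np.sin(x)               # observed intensities
xs = np.array([2.0, 2.5])   # an on-grid and an off-grid resample point
mean, var = gp_posterior(x, y, xs)
print(var[0] < var[1])  # True: uncertainty is larger between pixels
```

The variance vanishes (up to jitter) at base-grid points and grows halfway between pixels, which is exactly the spatially varying uncertainty the similarity measure marginalizes over.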

  1. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery, and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
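
A block-entropy computation of the kind such a method could build on might look as follows; the block size, histogram binning, and toy image are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def block_entropy(img, bs):
    """Shannon entropy (bits) of each non-overlapping bs x bs block,
    using a 256-bin intensity histogram. High-entropy blocks carry more
    texture and are better candidates for registration."""
    h, w = img.shape
    out = np.zeros((h // bs, w // bs))
    for i in range(h // bs):
        for j in range(w // bs):
            blk = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            p, _ = np.histogram(blk, bins=256, range=(0, 256))
            p = p / p.sum()
            nz = p[p > 0]
            out[i, j] = -(nz * np.log2(nz)).sum()
    return out

rng = np.random.default_rng(0)
img = np.zeros((64, 64))                        # flat, uninformative image
img[:32, :32] = rng.integers(0, 256, (32, 32))  # one textured block
ent = block_entropy(img, 32)
print(ent.argmax())  # 0: the textured block has the highest entropy
```

Restricting feature matching to the highest-entropy blocks is one way the reported speed-up over whole-image SIFT could be realized.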

  2. Conducting Original, Hands-On Astronomical Research in the Classroom

    Science.gov (United States)

    Corneau, M. J.

    2009-12-01

    teachers to convey moderately complex computer science, optical, geographic, mathematical, informational and physical principles through hands-on telescope operations. In addition to the general studies aspects of classroom internet-based astronomy, Tzec Maun supports real science by enabling operators to precisely point telescopes and acquire extremely faint, magnitude 19+ CCD images. Thanks to the creative Team of Photometrica (photometrica.org), my teams now have the ability to process and analyze images online and produce results in short order. Normally, astronomical data analysis packages cost thousands of dollars for single-license operations. Free to my team members, Photometrica allows students to upload their data to a cloud computing server and read precise photometric and/or astrometric results. I’m indebted to Michael and Geir for their support. The efficacy of student-based research is well documented. The Council on Undergraduate Research defines student research as, "an inquiry or investigation conducted by an undergraduate that makes an original intellectual or creative contribution to the discipline." (http://serc.carleton.edu/introgeo/studentresearch/What. Teaching from Tzec Maun in the classroom is the most original teaching research I can imagine. I very much look forward to presenting this program to the convened body.

  3. Fundamental and applied aspects of astronomical seeing

    International Nuclear Information System (INIS)

    Coulman, C.E.

    1985-01-01

    It is pointed out that despite recent advances in the use of spacecraft as observatory platforms, much astronomy is still conducted from the surface of the earth. The literature on astronomical seeing and observatory site selection is widely scattered throughout journals and conference reports concerned with various disciplines. This survey has the objective to represent the state of the subject up to 1982. A description of the history and prospects of the considered subject is presented, and the optics of seeing are examined. The meteorology of seeing is discussed, taking into account aspects of micrometeorology and small-scale turbulence near the surface, the diurnal cycle in the planetary boundary layer, the temperature structure above the planetary boundary layer, and the effects of terrain. Attention is given to the calculation of system performance from microthermal data, optical methods for the measurement of seeing, and techniques for minimizing image-degrading effects of the atmosphere. 279 references

  4. Diagnosis of skin cancer using image processing

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and therefore this channel is always chosen. This information is multiplied point by point with a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value lies in the spectral index range, three types of cancer can be detected. Values found outside this range correspond to benign lesions.
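
A hedged sketch of the spectral-index computation as described above: the k-law exponent, the toy image, and the mask are assumptions, and the exact normalization may differ from the authors':

```python
import numpy as np

def spectral_index(img, mask, k=0.2):
    """Mask the region of interest, apply a k-law non-linear Fourier
    transform (|F|^k with the original phase), binarize the spectrum by
    the sign of its real part, and return the ratio of unit spectral
    density values to the mask area."""
    f = np.fft.fft2(img * mask)
    f = np.abs(f) ** k * np.exp(1j * np.angle(f))  # k-law non-linearity
    density = (f.real > 0).astype(float)           # 1 where Re > 0, else 0
    return density.sum() / mask.sum()

# Toy stand-in for a (single-channel) dermatologic-spot image.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0   # hypothetical segmented area of interest
idx = spectral_index(img, mask)
```

In the paper this scalar would be computed on the green channel and compared against an empirically determined range to separate malignant from benign spots.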

  5. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing object information. The ensemble averages, the autocorrelation functions, and the Wiener spectral densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)

  6. Image processing tensor transform and discrete tomography with Matlab

    CERN Document Server

    Grigoryan, Artyom M

    2012-01-01

    Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere. New co

  7. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
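
As an illustration of the clustering step used to group detected image features during alignment, here is a minimal k-means sketch. It is a stand-in only: the instrument's actual algorithms, the feature positions, and the naive initialisation are all assumptions:

```python
import numpy as np

def kmeans(pts, k, iters=20):
    """Tiny k-means: assign each point to the nearest centre, then move
    each centre to the mean of its assigned points."""
    # Naive deterministic initialisation for the sketch: spread the
    # initial centres across the point list.
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        labels = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([pts[labels == j].mean(0) for j in range(k)])
    return centers, labels

# Hypothetical detected feature positions: two well-separated groups,
# e.g. spots belonging to two distinct optical features.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(5.0, 0.1, (10, 2))])
centers, labels = kmeans(pts, 2)
```

Grouping extracted features this way lets an alignment routine reason about "one feature per cluster" rather than raw pixel detections.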

  8. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

    Image processing techniques, including pattern recognition techniques, play important roles in high-precision diagnosis and radiation therapy. The author reviews a symposium on medical image information held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics, from September 23rd to 25th. This symposium had three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field, and the talks of the three invited speakers. (author)

  9. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a limited spatial resolution of the pixels, determined by the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system's pixel size, and the pixel size cannot be made smaller in the hardware, alternative means of resolution enhancement need to be considered. Oversampling with multiple displaced images, where the pixels of all images are mapped to a final resolution-enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines with the number of images taken: the sub-pixel resolution increases, but relative to a real reduction of imaging pixel sizes yielding a full-resolution image, the perceived resolution of the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is to sharpen the images in post processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of suitable post processing filters, specifically for digital X-ray images. The results show that spatial domain filters and frequency domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increases by using sharpening
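
The sharpening step can be illustrated with a spatial-domain filter. The 3x3 kernel and the toy blurred edge are assumptions (the paper does not list its exact filters); the point is only that sharpening steepens edges smeared by the oversampling blur:

```python
import numpy as np

def filter2d(img, k):
    """Naive 'same' 2-D filtering (correlation) with zero padding; for the
    symmetric kernels used here this equals convolution."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * k).sum()
    return out

# A common 3x3 Laplacian-based sharpening kernel (an illustrative choice).
sharpen = np.array([[0, -1,  0],
                    [-1,  5, -1],
                    [0, -1,  0]], dtype=float)

# A step edge blurred by 2x2 box averaging, mimicking oversampling blur.
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
blurred = filter2d(edge, np.ones((2, 2)) / 4.0)
restored = filter2d(blurred, sharpen)
# Sharpening steepens the edge transition (prints True).
print(np.abs(np.diff(restored[4])).max() > np.abs(np.diff(blurred[4])).max())
```

An equivalent frequency-domain filter (boosting high spatial frequencies) would behave the same way, consistent with the paper's observation that the two filter families are indistinguishable.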

  10. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

  11. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  12. Astronomers debate diamonds in space

    Science.gov (United States)

    1999-04-01

    called IRAS 16594-4656. Like the others, it's a carbon-rich star now in the process of dying. It has been blasting out huge amounts of material over the last thousand years, becoming enclosed within a shell of dust hundreds of times larger than the Solar System --a structure called a "protoplanetary nebula". It was in this dust -- very cold and therefore invisible to non-infrared telescopes-- that the Spanish group using ISO's SWS and LWS spectrometers detected the signature of the carbonaceous compound, in the form of a broad emission band at the wavelength of 21 micron. "We searched for the compound in twenty candidate stars and only this one had it. It is a real textbook case, with one of the strongest emissions ever detected. It gets us closer to solving the mystery and will help us to understand how the "chemical factories" of the Universe work", says ESA astronomer Pedro Garcia-Lario at the ISO Data Centre in Villafranca, Madrid. His group published their results in the March 10 issue of the Astrophysical Journal. They favour the fullerene option. Fullerenes would get formed during decomposition of the solid carbon grains condensed out of the material emitted by the star. The Canadian group obtained high-resolution ISO spectra of seven other stars in this class, and also detected a weak emission of the carbonaceous compound in a new one. They present their data in the May 11 issue of the Astrophysical Journal Letters. "Diamonds, graphite, coal and fullerenes are different forms of carbon. It is quite possible that the 21 micron feature arises from any one of these forms, although not exactly like they are on Earth", says main author Sun Kwok, at the University of Calgary. His group detected the carbonaceous compound a decade ago, for the first time, with the earlier infrared satellite IRAS. Meanwhile, results from the French group led by Louis d'Hendecourt, at the Institut d'Astrophysique Spatiale, in Paris, are adding to the debate. 
They isolated very tiny

  13. An Image Processing Approach to Linguistic Translation

    Science.gov (United States)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.

  14. Detecting jaundice by using digital image processing

    Science.gov (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, babies or adults must undergo a clinical exam such as a "serum bilirubin" test, which can be traumatic for patients. Jaundice often accompanies liver disease such as hepatitis or liver cancer. In order to avoid additional trauma we propose to detect jaundice (icterus) in newborns or adults by a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate these parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
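
A minimal linear SVM trained on hypothetical mean-RGB skin features illustrates the final classification step. The feature values and the subgradient trainer are stand-ins, not the study's data or solver:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.001, lr=0.05, epochs=300):
    """Tiny linear SVM trained by subgradient descent on the regularized
    hinge loss -- a stand-in for a library SVM."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # inside the margin: hinge active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # outside the margin: only decay w
                w -= lr * lam * w
    return w, b

# Hypothetical mean-RGB features of skin patches (values in [0, 1]).
# Jaundiced skin is modelled as yellowish: blue channel suppressed.
X = np.array([[0.80, 0.60, 0.55],   # healthy
              [0.75, 0.58, 0.52],   # healthy
              [0.78, 0.62, 0.56],   # healthy
              [0.85, 0.70, 0.30],   # jaundice
              [0.82, 0.68, 0.28],   # jaundice
              [0.87, 0.72, 0.33]])  # jaundice
y = np.array([-1, -1, -1, 1, 1, 1])
w, b = train_linear_svm(X, y)
print((np.sign(X @ w + b) == y).all())  # the toy set is linearly separable
```

In practice the feature vector would also include the diffuse-reflectance spectral parameters the abstract mentions, with labels from the serum-bilirubin reference test.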

  15. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
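
Filtered back-projection itself (shown here serially, not in SPRINT's parallel form) reduces to ramp-filtering each projection and smearing it back across the image grid. The sinogram of a single central point is used as an assumed sanity check:

```python
import numpy as np

def ramp_filter(sino):
    """Apply the ramp (|f|) filter to each sinogram row in the frequency domain."""
    n = sino.shape[-1]
    freqs = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=-1) * freqs, axis=-1))

def fbp(sino, angles, size):
    """Back-project ramp-filtered projections taken at `angles` (radians)."""
    filtered = ramp_filter(sino)
    coords = np.arange(size, dtype=float) - size // 2
    X, Y = np.meshgrid(coords, coords)
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate per pixel
        recon += np.interp(t, coords, proj)         # smear the projection back
    return recon * np.pi / len(angles)

# Sinogram of a single point at the image centre: a delta at t = 0 for every angle.
size, n_ang = 33, 60
angles = np.linspace(0, np.pi, n_ang, endpoint=False)
sino = np.zeros((n_ang, size))
sino[:, size // 2] = 1.0
recon = fbp(sino, angles, size)
print(np.unravel_index(recon.argmax(), recon.shape))  # (16, 16): peak at the centre
```

The per-angle back-projection loop is embarrassingly parallel, which is exactly what makes the algorithm a good fit for a massively parallel machine like SPRINT.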

  16. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

    A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose. The images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two simulated nodules each were positioned over the lungs and the mediastinum of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs. The time spent viewing the full-size radiographs and the digital images was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant differences in the detectability of the simulated nodules were seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization or image processing was available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung. No difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab

  17. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the correct matching probability is raised greatly, and the effect is especially remarkable for low-S/N image pairs.

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  19. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  20. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  1. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at a lower significance (F = 3.47, p = 0.0305) than JAFROC.
Both statistical analysis methods revealed that the

  2. TMT in the Astronomical Landscape of the 2020s

    Science.gov (United States)

    Dickinson, Mark; Inami, Hanae

    2014-07-01

    Thirty Meter Telescope Observatory and NOAO will host the second TMT Science Forum at Loews Ventana Canyon Resort in Tucson, Arizona. The TMT Science Forum is an annual gathering of astronomers, educators, and observatory staff, who meet to explore TMT science, instrumentation, observatory operations, archiving and data processing, astronomy education, and science, technology, engineering, and math (STEM) issues. It is an opportunity for astronomers from the international TMT partners and from the US-at-large community to learn about the observatory status, discuss and plan cutting-edge science, establish collaborations, and help shape the future of TMT. One important theme for this year's Forum will be the synergy between TMT and other facilities in the post-2020 astronomical landscape. There will be plenary sessions, an instrumentation workshop, topical science sessions, and meetings of the TMT International Science Development Teams (ISDTs).

  3. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  4. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal or image processing techniques, such as adaptive filtering or detection or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing, by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  5. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image processing based method is more objective, fast, and accurate, and it represents the main development trend in yarn appearance evaluation.

  6. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  7. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve performance in contrast and signal-to-noise ratio (SNR) similar to that of the SR image generated from a complete set of low-resolution images (CSR), using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. 
Line profiles of
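The shift-and-add idea behind combining sub-pixel-shifted low-resolution views can be sketched as follows (a minimal illustration, not the authors' PET reconstruction pipeline; the grid layout and function name are hypothetical):

```python
import numpy as np

def shift_and_add(lowres, grid=2):
    """Interleave a grid x grid set of sub-pixel-shifted low-resolution
    images into one high-resolution estimate (basic shift-and-add SR).

    lowres: dict mapping (row_phase, col_phase) in {0..grid-1} to 2-D arrays.
    """
    h, w = next(iter(lowres.values())).shape
    hi = np.zeros((h * grid, w * grid))
    for (dy, dx), img in lowres.items():
        # each POV fills one phase of the high-resolution sampling lattice
        hi[dy::grid, dx::grid] = img
    return hi

# toy example: a 2x2 set of 2x2 images reassembles into a 4x4 image
views = {(dy, dx): np.full((2, 2), 10 * dy + dx)
         for dy in range(2) for dx in range(2)}
sr = shift_and_add(views, grid=2)
```

Using only a subset of the views, as in ISR-1 and ISR-2, simply leaves some lattice phases to be interpolated rather than measured.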

  8. Focus on astronomical predictable events

    DEFF Research Database (Denmark)

    Jacobsen, Aase Roland

    2006-01-01

    At the Steno Museum Planetarium we have on many occasions used a countdown clock to draw focus to astronomical events. A countdown clock can lend immediacy to predictable events, for example the Venus transit, the Opportunity landing on Mars, and solar eclipses. The movement of the clock attracts the public and creates a point of interest in a small exhibit area. A countdown clock can be simple, but it is possible to expand the concept into an eye-catching part of a museum.

  9. Strasbourg Astronomical Data Center (CDS)

    Directory of Open Access Journals (Sweden)

    F Genova

    2013-01-01

    Full Text Available The Centre de Donnees astronomiques de Strasbourg (CDS), created in 1972, has been a pioneer in the dissemination of digital scientific data. Ensuring sustainability for several decades has been a major issue because science and technology evolve continuously and the data flow increases endlessly. The paper briefly describes CDS activities, major services, and its R&D strategy to take advantage of new technologies. The next frontiers for CDS are the new Web 2.0/3.0 paradigm and, at a more general level, global interoperability of astronomical on-line resources in the Virtual Observatory framework.

  10. astroplan: Observation Planning for Astronomers

    Science.gov (United States)

    Morris, Brett

    2016-03-01

    Astroplan is an observation planning package for astronomers. It is an astropy-affiliated package which began as a Google Summer of Code project. Astroplan facilitates convenient calculation of common observational quantities, like target altitudes and azimuths, airmasses, and rise/set times. Astroplan also computes when targets are observable given various extensible observing constraints, for example: within a range of airmasses or altitudes, or at a given separation from the Moon. Astroplan is taught in the undergraduate programming for astronomy class, and enables observational Pre-MAP projects at the University of Washington. In the near future, we plan to implement scheduling capabilities in astroplan on top of the constraints framework.
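As an illustration of one such observational quantity, the airmass of a target can be approximated from its altitude alone (a plane-parallel sketch in plain Python; astroplan's own calculations are ephemeris-based and more accurate):

```python
import math

def airmass(altitude_deg):
    """Plane-parallel airmass approximation: X = sec(z), where
    z = 90 deg - altitude is the zenith angle. Adequate well above the
    horizon; near it, refraction and Earth curvature matter."""
    z = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(z)

# a target at the zenith has airmass 1; at 30 deg altitude, airmass 2
x_zenith = airmass(90.0)
x_low = airmass(30.0)
```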

  11. Digital image processing for radiography in nuclear power plants

    International Nuclear Information System (INIS)

    Heidt, H.; Rose, P.; Raabe, P.; Daum, W.

    1985-01-01

    With the help of digital processing of radiographic images of reactor components it is possible to increase the security and objectiveness of the evaluation. Several examples of image processing procedures (contrast enhancement, density profiles, shading correction, digital filtering, superposition of images etc.) show the advantages for the visualization and evaluation of radiographs. Digital image processing can reduce some of the restrictions of radiography in nuclear power plants. In addition, a higher degree of automation can be cost-saving and increase the quality of radiographic evaluation. The aim of the work performed was to improve the readability of radiographs for the human observer. The main problems are lack of contrast and the presence of disturbing structures like weld seams. Digital image processing of film radiographs starts with the digitization of the image. Conventional systems use TV cameras or scanners and provide a dynamic range of 1.5 to 3 density units, which is digitized to 256 grey levels. For the enhancement process it is necessary that the grey-level range covers the density range of the important regions of the presented film. On the other hand, the grey-level coverage should not be wider than necessary, to minimize the width of the digitization steps. Poor digitization makes flaws and cracks invisible and spoils all further image processing.
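The grey-level windowing described above, mapping the density region of interest onto the full digitizer range, might look like this (an illustrative sketch; the window bounds and function name are hypothetical):

```python
import numpy as np

def window_stretch(img, lo, hi):
    """Linearly map the grey-level window [lo, hi] to the full 0..255
    range, clipping values outside the window. This mimics matching the
    digitizer's grey-level range to the film-density region of interest."""
    img = np.asarray(img, dtype=float)
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

raw = np.array([[90, 100, 110], [120, 130, 140]])
stretched = window_stretch(raw, 100, 140)
```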

  12. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  13. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
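A single level of such a subband decomposition can be sketched with a 2-D Haar block transform (a Python stand-in for the MATLAB routines; SUBTRANS itself supports other block transforms and cascaded decompositions):

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar block transform: split an image with even
    dimensions into LL, LH, HL, HH spatial-frequency subbands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # low-pass approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

# a flat image has all its energy in the LL subband
flat = np.full((4, 4), 8.0)
ll, lh, hl, hh = haar_subbands(flat)
```

Cascading the transform on the LL band yields the further decomposition into more subbands mentioned above, which is also the basis for lossy compression schemes.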

  14. A very high energy imaging for radioactive wastes processing

    International Nuclear Information System (INIS)

    Moulin, V.; Pettier, J.L.

    2004-01-01

    X imaging occurs at many steps of radioactive waste processing: selection for conditioning, physical characterization with a view to radiological characterization, quality control of the product before storage, transport or disposal. The size and volume of the objects considered here necessitate working with very high energy systems. Through some examples, this paper shows under which conditions this X imaging is carried out, as well as the contribution of the obtained images. (O.M.)

  15. Digital Image Processing Overview For Helmet Mounted Displays

    Science.gov (United States)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  16. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when trying to analyse this type of image. Moreover, new methods are discussed, with source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Color sensitivity of the multi-exposure HDR imaging process

    Science.gov (United States)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
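The irradiance-recovery step can be sketched as a weighted merge of exposures (assuming, for simplicity, a linear camera response; real pipelines first estimate the response curve per channel, and the hat weighting shown is one common choice):

```python
import numpy as np

def hdr_radiance(ldr_images, exposure_times):
    """Merge linear LDR exposures into a radiance map by a weighted
    average of (pixel_value / exposure_time), down-weighting pixels
    near the sensor's limits where clipping distorts the estimate."""
    num = np.zeros_like(ldr_images[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(ldr_images, exposure_times):
        img = img.astype(float)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # hat weighting, peak at mid-grey
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)

# a pixel seen as 64 at 1 s and 128 at 2 s implies radiance ~64 units/s
imgs = [np.array([[64.0]]), np.array([[128.0]])]
rad = hdr_radiance(imgs, [1.0, 2.0])
```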

  18. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: The Chinese potato staple-food strategy clearly points out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting suitable raw and processed potatoes. The purpose of this paper is to summarize advanced detection methods for raw and processed potatoes. Method: Based on a survey of the research literature on image-recognition-based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., the development and direction of this field are summarized. Result: To obtain information on the whole potato surface, hardware was built that synchronizes an image sensor with a conveyor belt to capture multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracy above 83%. Weight is an important indicator for potato grading, and image-based classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with damage shape and damage time as the main influencing factors. Image recognition of potato germination usually uses potato surface images and edge germination points. Both qualitative and quantitative detection of greened potatoes have been researched; currently, scab and black-heart image recognition must be performed in a stable detection environment or with a specific device. Image recognition of processed potatoes mainly focuses on potato chips, slices, fries, etc. 
Conclusion: Image recognition as a rapid food-detection tool has been widely researched for raw and processed potato quality analyses; its techniques and equipment have the potential for commercialization in the short term, to meet the strategic demand of developing the potato as

  19. Some aspects of image processing using foams

    International Nuclear Information System (INIS)

    Tufaile, A.; Freire, M.V.; Tufaile, A.P.B.

    2014-01-01

    We have explored some concepts of chaotic dynamics and wave-light transport in foams. Through experiments, we have obtained the main features of the light intensity distribution through foams. We propose a model for this phenomenon based on the combination of two processes: a diffusive process and another derived from chaotic dynamics. We present a short outline of the chaotic dynamics involved in light scattering in foams. We have also studied the existence of caustics from the scattering of light by foams, with typical patterns observed in light diffraction in transparent films. The nonlinear geometry of the foam structure was exploited to create optical elements, such as hyperbolic prisms and filters. - Highlights: • We have obtained the light scattering in foams using experiments. • We model the light transport in foams using chaotic dynamics and a diffusive process. • An optical filter based on foam is proposed

  20. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  1. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing method was, for the first time, used independently of the point spread function of the image diagnostic system. In this way, the technical obstacles caused in traditional coded-pinhole image processing by the uncertainty of the point spread function of the image diagnostic system were overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was then carried out, giving fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.

  2. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Curve (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual task performance of non-radiologists, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.
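The ROC analysis mentioned above reduces, for a set of reader scores, to estimating the area under the curve; one standard computation uses the rank-sum identity (an illustrative sketch, not the authors' analysis code):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfectly separated reader scores give AUC 1.0
auc = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```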

  3. Processed images in human perception: A case study in ultrasound breast imaging

    International Nuclear Information System (INIS)

    Yap, Moi Hoon; Edirisinghe, Eran; Bez, Helmut

    2010-01-01

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Curve (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual task performance of non-radiologists, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.

  4. Infrared Astronomical Satellite (IRAS) Catalogs and Atlases. Explanatory Supplement

    Science.gov (United States)

    Beichman, C. A. (Editor); Neugebauer, G. (Editor); Habing, H. J. (Editor); Clegg, P. E. (Editor); Chester, T. J. (Editor)

    1985-01-01

    The Infrared Astronomical Satellite (IRAS) mission is described. An overview of the mission, a description of the satellite and its telescope system, and a discussion of the mission design, requirements, and inflight modifications are given. Data reduction, flight tests, flux reconstruction and calibration, data processing, and the formats of the IRAS catalogs and atlases are also considered.

  5. Image processing. A system for the automatic sorting of chromosomes

    International Nuclear Information System (INIS)

    Najai, Amor

    1977-01-01

    The present paper deals with two aspects of the system: - an automaton (specialized hardware) dedicated to image processing. Images are digitized, divided into sub-units and computations are carried out on their main parameters. - A software package for the automatic recognition and sorting of chromosomes, implemented on a Multi-20 minicomputer connected to the automaton. (author) [fr

  6. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat
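Image ratioing, one of the capabilities listed above, is simple to express digitally (a sketch; the band names and the epsilon guard are illustrative):

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two co-registered image bands, a standard
    enhancement that suppresses illumination and topographic shading
    while highlighting spectral differences between surface materials."""
    return band_a.astype(float) / (band_b.astype(float) + eps)

# toy example: two one-row "bands"
red = np.array([[100.0, 50.0]])
nir = np.array([[50.0, 50.0]])
ratio = band_ratio(nir, red)
```

Three such ratios assigned to display channels give the false-color composite ratios of multispectral imagery mentioned at the end of the abstract.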

  7. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    A CCD camera was used for image acquisition of the different green colorations of the maize leaves at maturity. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classifies different levels of maturity. Keywords: Maize, Maturity, CCD ...

  8. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras, each capturing images of 1024×1024 pixels at 12 bpp and a frame rate of 15 fps, thus totalling 1080 Mbit/s. In comparison, the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  9. Automatic image processing as a means of safeguarding nuclear material

    International Nuclear Information System (INIS)

    Kahnmeyer, W.; Willuhn, K.; Uebel, W.

    1985-01-01

    Problems involved in computerized analysis of pictures taken by automatic film or video cameras in the context of international safeguards implementation are described. They include technical ones as well as the need to establish objective criteria for assessing image information. In the near future automatic image processing systems will be useful in verifying the identity and integrity of IAEA seals. (author)

  10. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    Science.gov (United States)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  11. Explanatory supplement to the astronomical almanac

    CERN Document Server

    Urban, Sean E

    2013-01-01

    The Explanatory Supplement to the Astronomical Almanac offers explanatory material, supplemental information and detailed descriptions of the computational models and algorithms used to produce The Astronomical Almanac, which is an annual publication prepared jointly by the US Naval Observatory and Her Majesty's Nautical Almanac Office in the UK. Like The Astronomical Almanac, The Explanatory Supplement provides detailed coverage of modern positional astronomy. Chapters are devoted to the celestial and terrestrial reference frames, orbital ephemerides, precession, nutation, Earth rotation, and coordinate transformations. These topics have undergone substantial revisions since the last edition was published. Astronomical positions are intertwined with timescales and relativity in The Astronomical Almanac, so related chapters are provided in The Explanatory Supplement. The Astronomical Almanac also includes information on lunar and solar eclipses, physical ephemerides of solar system bodies, and calendars, so T...

  12. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods, enabling the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques as more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in these areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  13. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of

  14. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing that developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  15. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the image of the observed object and performing other operations in accordance with the user's needs. The digital microscope has eliminated the need for direct observation by the human eye as with the traditional microscope.
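Of the utilities listed, histogram equalization is representative; a minimal greyscale version (an illustrative sketch, not the system's actual implementation) is:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization for an 8-bit greyscale image:
    remap grey levels through the normalized cumulative histogram so
    the output spreads over the full 0..255 range more evenly."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    lut = np.round(cdf * 255).astype(np.uint8)          # grey-level lookup table
    return lut[img]

# a dark, low-contrast image is stretched toward the full range
dark = np.array([[10, 10], [20, 30]], dtype=np.uint8)
eq = equalize(dark)
```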

  16. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a low-quality JPEG-compressed image, is also described; it de-noises the image and enhances its contours.
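The screening operation itself is shown here in the ordinary raster domain for clarity (the paper's contribution is performing the equivalent comparison on DCT coefficients; the 4x4 Bayer mask below is one standard halftone screen):

```python
import numpy as np

# 4x4 Bayer threshold matrix, scaled to grey-level thresholds in 0..255
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * (255 / 16.0)

def screen(img):
    """Halftone an 8-bit greyscale image by ordered-dither screening:
    each pixel is compared against the tiled threshold mask, producing
    a binary (1 = dot on) output."""
    h, w = img.shape
    mask = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (img > mask).astype(np.uint8)

# mid-grey turns exactly half the dots on over each 4x4 cell
mid = np.full((4, 4), 128, dtype=np.uint8)
dots = screen(mid)
```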

  17. Digital image processing applied Rock Art tracing

    Directory of Open Access Journals (Sweden)

    Montero Ruiz, Ignacio

    1998-06-01

    Full Text Available Adequate graphic recording has been one of the main objectives of rock art research. Photography has steadily grown in importance as a documentary technique. Digital imaging and its processing now allow new ways to observe the details of the figures and support a recording procedure that is as accurate as, or more accurate than, direct tracing. This technique also avoids deterioration of the rock paintings. The mathematical basis of the method is also presented.


  18. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang

    2014-01-01

    Full Text Available This paper provides a method to represent a virtual Arabidopsis plant at each growth stage, simulating its shape and extracting growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background using chromatic coordinates. From the segmentation result, the outer boundary series is obtained using a boundary tracking algorithm. Elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf, achieved by using a wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stages of the Arabidopsis plant with less data and provides a visual platform for plant growth research.
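
    The contour-coefficient idea can be illustrated with a simplified complex Fourier descriptor. The paper uses the elliptic Fourier formulation (Kuhl-Giardina); this Python sketch uses the closely related complex-contour variant for brevity:

```python
import cmath

def contour_fourier_coeffs(points, n_harmonics):
    """Complex Fourier descriptors of a closed contour.

    `points` is a list of (x, y) tuples sampled uniformly along the boundary.
    A simplified stand-in for the elliptic Fourier analysis used in the paper.
    """
    z = [complex(x, y) for x, y in points]
    n = len(z)
    return [sum(z[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n)) / n
            for k in range(-n_harmonics, n_harmonics + 1)]

def reconstruct(coeffs, n_harmonics, n_points):
    """Rebuild an approximate contour from the stored coefficients."""
    out = []
    for m in range(n_points):
        z = sum(c * cmath.exp(2j * cmath.pi * k * m / n_points)
                for c, k in zip(coeffs, range(-n_harmonics, n_harmonics + 1)))
        out.append((z.real, z.imag))
    return out
```

    A handful of coefficients reconstructs a smooth approximation of the leaf outline, which is why the descriptors need far less storage than the raw contour points.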

  19. The League of Astronomers: Outreach

    Science.gov (United States)

    Paat, Anthony; Brandel, A.; Schmitz, D.; Sharma, R.; Thomas, N. H.; Trujillo, J.; Laws, C. S.; Astronomers, League of

    2014-01-01

    The University of Washington League of Astronomers (LOA) is an organization comprised of University of Washington (UW) undergraduate students. Our main goal is to share our interest in astronomy with the UW community and with the general public. The LOA hosts star parties on the UW campus and collaborates with the Seattle Astronomical Society (SAS) on larger Seattle-area star parties. At the star parties, we strive to teach our local community about what they can view in our night sky. LOA members share knowledge of how to locate constellations and use a star wheel. The relationship the LOA has with members of SAS increases both the number of events and the number of people we are able to reach. Since the cloudy skies of the Northwest prevent winter star parties, we focus our winter outreach on the UW Mobile Planetarium, an inflatable dome system utilizing Microsoft’s WorldWide Telescope (WWT) software. The mobile planetarium brings astronomy into the classrooms of schools unable to travel to the UW on-campus planetarium. Members of the LOA volunteer their time towards this project and we make up the majority of the Mobile Planetarium volunteers. Our outreach efforts allow us to connect with the community and enhance our own knowledge of astronomy.

  20. LGBT Workplace Issues for Astronomers

    Science.gov (United States)

    Kay, Laura E.; Danner, R.; Sellgren, K.; Dixon, V.; GLBTQastro

    2011-01-01

    Federal Equal Employment Opportunity laws and regulations do not provide protection from discrimination on the basis of sexual orientation or gender identity or gender expression. Sexual minority astronomers (including lesbian, gay, bisexual and transgender people; LGBT) can face additional challenges at school and work. Studies show that LGBT students on many campuses report experiences of harassment. Cities, counties, and states may or may not have statutes to protect against such discrimination. There is wide variation in how states and insurance plans handle legal and medical issues for transgender people. Federal law does not acknowledge same-sex partners, including those legally married in the U.S. or in other countries. Immigration rules in the U.S. (and many other, but not all) countries do not recognize same-sex partners for visas, employment, etc. State `defense of marriage act' laws have been used to remove existing domestic partner benefits at some institutions, or benefits can disappear with a change in governor. LGBT astronomers who change schools, institutions, or countries during their career may experience significant differences in their legal, medical, and marital status.

  1. Astronomers in the Chemist's War

    Science.gov (United States)

    Trimble, Virginia L.

    2012-01-01

    World War II, with radar, rockets, and "atomic" bombs, was the physicists' war. And many of us know, or think we know, what our more senior colleagues did during it, with Hubble and Hoffleit at Aberdeen; M. Schwarzschild on active duty in Italy; Bondi, Gold, and Hoyle hunkered down in Dunsfold, Surrey, talking about radar, and perhaps steady state; Greenstein and Henyey designing all-sky cameras; and many astronomers teaching navigation. World War I was the Chemists' War, featuring poison gases, the need to produce liquid fuels from coal on one side of the English Channel and to replace previously-imported dyestuffs on the other. The talk will focus on what astronomers did, and had done to them, between 1914 and 1919, from Freundlich (taken prisoner on an eclipse expedition days after the outbreak of hostilities) to Edwin Hubble, returning from France without ever having quite reached the front lines. Other events bore richer fruit (Hale and the National Research Council), but very few of the stories are happy ones. Most of us have neither first- nor second-hand memories of the Chemists' War, but I had the pleasure of dining with a former Freundlich student a couple of weeks ago.

  2. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performed on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
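
    The evaluation protocol described above (scale, rescale to the original size, compare) can be sketched for one of the candidate methods, bilinear interpolation. This is a minimal pure-Python illustration, not the survey's actual test harness:

```python
def bilinear_resize(img, new_h, new_w):
    """Resize a grayscale image (list of rows) with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        # Map output coordinates back to input coordinates (corners aligned).
        y = j * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def mse(a, b):
    """Mean squared error between two images of equal size."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n
```

    Scaling an image up and then back down with the same algorithm and computing the error against the original is one of the comparison criteria the survey applies to each method.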

  3. Application of digital image processing to industrial radiography

    International Nuclear Information System (INIS)

    Bodson; Varcin; Crescenzo; Theulot

    1985-01-01

    Radiography is widely used for quality control in the fabrication of large reactor components. Image processing methods are applied to industrial radiographs to help in decision-making as well as to reduce the costs and delays of examination. Films, exposed under representative operating conditions, are used to test the results obtained with algorithms for the restoration of images and for the detection and characterisation of indications, in order to assess the feasibility of automatic radiograph processing [fr

  4. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries.
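
    The first step, the RGB-to-YCbCr transform, is a standard colour-space conversion. Below is a minimal Python sketch using the full-range BT.601 coefficients commonly used in JPEG; the paper does not state its exact coefficients, so these standard ones are an assumption:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion (as used in JPEG).

    The luminance channel Y absorbs most of the lighting variation, so
    skin/wound segmentation can work largely on the Cb/Cr chroma channels.
    """
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

    Neutral colours (black, white, grays) map to Cb = Cr = 128, which is what makes the chroma plane insensitive to illumination changes.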

  5. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to resolve unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, traditional edge detection methods yield legible detection of heart boundaries and valve movement.

  6. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to resolve unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, traditional edge detection methods yield legible detection of heart boundaries and valve movement.

  7. Amateur Astronomers: Secret Agents of EPO

    Science.gov (United States)

    Berendsen, M.; White, V.; Devore, E.; Reynolds, M.

    2008-06-01

    Amateur astronomers prime the public to be more interested, receptive, and excited about space science, missions, and programs. Through recent research and targeted programs, amateur astronomy outreach is being increasingly recognized by professional astronomers, educators, and other amateurs as a valued and important service. The Night Sky Network program, administered by the ASP, is the first nationwide research-based program specifically targeted to support outreach by amateur astronomers. This Network of trained and informed amateur astronomers can provide a stimulating introduction to your EPO programs as Network members share the night sky with families, students, and youth groups.

  8. Astronomical Symbolism in Australian Aboriginal Rock Art

    Science.gov (United States)

    Norris, Ray P.; Hamacher, Duane W.

    2011-05-01

    Traditional Aboriginal Australian cultures include a significant astronomical component, perpetuated through oral tradition and ceremony. This knowledge has practical navigational and calendrical functions, and sometimes extends to a deep understanding of the motion of objects in the sky. Here we explore whether this astronomical tradition is reflected in the rock art of Aboriginal Australians. We find several plausible examples of depictions of astronomical figures and symbols, and also evidence that astronomical observations were used to set out stone arrangements. However, we recognise that the case is not yet strong enough to make an unequivocal statement, and describe our plans for further research.

  9. Storing Astronomical Information on the Romanian Territory

    Science.gov (United States)

    Stavinschi, M.; Mioc, V.

    2004-12-01

    Romanian astronomy has a more than 2000-year old tradition, which is, however, little known abroad. The first known archive of astronomical information is the Dacian sanctuary at Sarmizegetusa Regia, erected in the first century AD, having similarities with that of Stonehenge. After a gap of more than 1000 years, more sources of astronomical information become available, mainly records of astronomical events. Monasteries were the safest storage places of these genuine archives. We present a classification of the ways of storing astronomical information, along with characteristic examples.

  10. Image processing by use of the digital cross-correlator

    International Nuclear Information System (INIS)

    Katou, Yoshinori

    1982-01-01

    We built a prototype instrument that performs image processing using digital correlators. A digital correlator performs 64-bit parallel correlation at 20 MHz, and its output is a 7-bit word. An A-D converter is used to quantize the video signal to a precision of six bits, and the resulting 6-bit word is fed to six correlators wired in parallel. Image processing is thus carried out with 12-bit precision, and the digital output is converted to an analog signal by a D-A converter. This instrument is named the digital cross-correlator. The image processing system computes convolutions with the digital correlator, which realizes a variety of digital filters. In the experiments, video signals from a TV camera were used; the digital image processing time was approximately 5 μs, and contrast enhancement and smoothing were demonstrated. The digital cross-correlator supports 16 kinds of image processing operations and was produced inexpensively. (author)
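
    The key idea, computing convolution on correlator hardware, relies on the identity that convolution equals correlation with a reversed kernel. A minimal Python sketch of that relationship (the hardware word widths and clock rates described above are not modeled):

```python
def correlate(signal, taps):
    """Sliding-window correlation: the primitive a hardware correlator provides."""
    n = len(taps)
    return [sum(signal[i + k] * taps[k] for k in range(n))
            for i in range(len(signal) - n + 1)]

def convolve(signal, kernel):
    """Convolution realized on a correlator by reversing the kernel taps."""
    return correlate(signal, kernel[::-1])
```

    Loading averaging taps yields a smoothing filter, while difference taps such as [1, 0, -1] yield an edge-enhancing filter, which is how one correlator can realize many digital filters.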

  11. Dielectric barrier discharge image processing by Photoshop

    Science.gov (United States)

    Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

    2001-09-01

    In this paper, the filamentary pattern of a dielectric barrier discharge has been processed using Photoshop, and the coordinates of each filament can be obtained. Two different Photoshop-based methods have been used to analyze the spatial order of pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within experimental error, the results from the two methods agree.

  12. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven to prepare crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness value was found: the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
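
    The binarization criterion, choosing the threshold that yields the maximum number of objects, can be sketched with a connected-component count. A minimal Python illustration; 4-connectivity is assumed here, as the abstract does not specify which connectivity the authors used:

```python
def count_objects(binary):
    """Count 4-connected components in a binary image (rows of 0/1)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]
                seen[y][x] = True
                while stack:  # flood fill the component
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

def best_threshold(img, levels=256):
    """Pick the threshold that maximizes the number of foreground objects."""
    best_t, best_n = 0, -1
    for t in range(levels):
        n = count_objects([[1 if p > t else 0 for p in row] for row in img])
        if n > best_n:
            best_t, best_n = t, n
    return best_t
```

    Intuitively, too low a threshold merges the fibres into one blob and too high a threshold erases them; the object count peaks in between, giving an objective choice.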

  13. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered in multimodality metabolic and physiological images, we developed a processing pipeline framework. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second GBM image dataset, acquired on scanners different from the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
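
    Step (3), forming a correlation matrix of the image parameters across superpixels, can be sketched as follows. This is a minimal Python illustration of a Pearson correlation matrix over the columns of a data matrix (rows = superpixels, columns = image parameters); the clustering of this matrix described in the paper is not shown:

```python
def correlation_matrix(X):
    """Pearson correlation matrix of the columns of X.

    X is a list of rows (superpixels); each row holds one value per
    image parameter. Returns an n_params x n_params matrix.
    """
    n_rows = len(X)
    n_cols = len(X[0])
    means = [sum(row[j] for row in X) / n_rows for j in range(n_cols)]
    # Column standard deviations (up to a common factor, which cancels).
    sds = [sum((row[j] - means[j]) ** 2 for row in X) ** 0.5
           for j in range(n_cols)]
    C = [[0.0] * n_cols for _ in range(n_cols)]
    for a in range(n_cols):
        for b in range(n_cols):
            num = sum((row[a] - means[a]) * (row[b] - means[b]) for row in X)
            C[a][b] = num / (sds[a] * sds[b]) if sds[a] and sds[b] else 0.0
    return C
```

    Blocks of highly correlated parameters in this matrix are what the pipeline clusters into the image "signatures" of step (3).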

  14. Defects quantization in industrial radiographs by image processing

    International Nuclear Information System (INIS)

    Briand, F.Y.; Brillault, B.; Philipp, S.

    1988-01-01

    This paper addresses an industrial application of image processing to non-destructive testing by radiography. The various problems involved in designing such a digital tool are described. The tool is intended to help radiography experts quantify defects and follow up their evolution using numerical techniques. The processing sequences that achieve defect segmentation and quantification are detailed. They are based on thorough knowledge of radiograph formation techniques. The process uses various methods of image analysis, including textural analysis and mathematical morphology. The interface between the final product and its users will be expressed in an explicit language, using the terms of radiographic expertise without exposing any processing details. The problem is thoroughly described: image formation, digitization, processing fitted to flaw morphology and, finally, the structure of the product in progress. 12 refs [fr

  15. Video image processing for nuclear safeguards

    International Nuclear Information System (INIS)

    Rodriguez, C.A.; Howell, J.A.; Menlove, H.O.; Brislawn, C.M.; Bradley, J.N.; Chare, P.; Gorten, J.

    1995-01-01

    The field of nuclear safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure safekeeping of these materials and to verify the inventory and use of nuclear materials as reported by states that have signed the nuclear Nonproliferation Treaty throughout the world. Nuclear safeguards are measures prescribed by domestic and international regulatory bodies such as DOE, NRC, IAEA, and EURATOM and implemented by the nuclear facility or the regulatory body. These measures include destructive and non-destructive analysis of product materials/process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards

  16. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object-oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. The software package has been in use for more than one and a half years by users with different applications. It proved to be an excellent tool for helping people adapt to the system, and for standardizing and exchanging software while preserving the flexibility needed for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail

  17. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S

    2013-01-01

    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc. from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transformation, image reconstruction, image rendering, image enhancement and restoration, and more. The book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. It acts as a "starter kit" for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make the content accessible, particularly for techniques such as the Radon transformation and image rendering; and includes discussion of the basic principles behind the various medical imaging techniques.

  18. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe, and give code samples for, all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.

  19. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques

    International Nuclear Information System (INIS)

    Karimian, A.; Yazdani, S.; Askari, M. A.

    2011-01-01

    Radiographic inspection is one of the most widely employed techniques in medical imaging. Because of the poor contrast and high unsharpness of film radiographs, converting radiographs to a digital format and applying digital image processing is the best method of enhancing image quality and assisting the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. To digitise the chest images and process them, two classes of algorithms were used: (i) spatial-domain and (ii) frequency-domain techniques. The MATLAB environment was selected for processing in the digital format. Our results showed that with these two techniques, defects of small dimensions become detectable. These techniques may therefore help medical specialists to diagnose defects at an early stage and help prevent repeat X-ray examinations of paediatric patients. (authors)
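
    A typical spatial-domain enhancement of the kind described is unsharp masking, which boosts edge contrast by adding back the difference between the image and a blurred copy. The abstract does not state which spatial-domain filters were used, so this 1-D Python sketch is a generic example:

```python
def unsharp_mask_1d(signal, amount=1.0):
    """Unsharp masking on a 1-D profile: out = orig + amount*(orig - blurred).

    A 3-tap box blur stands in for the smoothing kernel; the two edge
    samples are left unchanged. Illustrative only; the paper works on
    2-D radiographs.
    """
    out = list(signal)
    for i in range(1, len(signal) - 1):
        blurred = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
        out[i] = signal[i] + amount * (signal[i] - blurred)
    return out
```

    Across a step edge, the output overshoots below the dark level and above the bright level, which is exactly the local contrast boost that makes small low-contrast defects easier to see.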

  20. BOOK REVIEW: The Wandering Astronomer

    Science.gov (United States)

    Swinbank, Elizabeth

    2000-09-01

    Fans of Patrick Moore will like this book. I enjoyed it more than I expected, having anticipated a collection of personal anecdotes of the type favoured by certain tedious after-dinner speakers. Some of the 41 short items it contains do tend towards that category, but there are also some nuggets which might enliven your physics teaching. For example, did you know that, in a murder trial in 1787, the defendant's belief that the Sun was inhabited was cited as evidence of his insanity? This was despite his views being shared by many astronomers of the day including William Herschel. Or that Clyde Tombaugh had a cat called Pluto after the planet he discovered, which was itself named by an eleven-year-old girl? Another gem concerns a brief flurry, in the early 1990s, over a suspected planet orbiting a pulsar; variations in the arrival time of its radio pulses indicated the presence of an orbiting body. These shifts were later found to arise from an error in a computer program that corrected for the Earth's motion. The programmer had assumed a circular orbit for the Earth whereas it is actually elliptical. The book is clearly intended for amateur astronomers and followers of Patrick Moore's TV programmes. There is plenty of astronomy, with an emphasis on the solar system, but very little astrophysics. The author's metricophobia means that quantities are given in imperial units throughout, with metric equivalents added in brackets (by an editor, I suspect) which can get irritating, particularly as powers-of-ten notation is avoided. It is quite a novelty to see the temperature for hydrogen fusion quoted as 18 000 000 °F (10 000 000 °C). By way of contrast, astronomical terms are used freely - ecliptic, first-magnitude star, and so on. Such terms are defined in a glossary at the end, but attention is not drawn to this and I only stumbled across it by chance. Patrick Moore obviously knows his public, and this book will serve them well. For physics teachers and students

  1. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    Science.gov (United States)

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.
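
    The signal-to-noise enhancement that these crystallographic methods exploit comes from averaging the many repeated unit-cell sub-images. A minimal Python sketch of naive tile averaging; real crystallographic averaging also aligns tiles by cross-correlation and corrects distortions, both omitted here:

```python
def average_unit_cells(img, cell_h, cell_w):
    """Average all full unit-cell tiles of a periodic lattice image.

    Assumes the lattice is exactly aligned with the pixel grid and the
    period (cell_h, cell_w) is known; leftover partial tiles are ignored.
    """
    h, w = len(img), len(img[0])
    ny, nx = h // cell_h, w // cell_w
    acc = [[0.0] * cell_w for _ in range(cell_h)]
    for ty in range(ny):
        for tx in range(nx):
            for y in range(cell_h):
                for x in range(cell_w):
                    acc[y][x] += img[ty * cell_h + y][tx * cell_w + x]
    n_tiles = ny * nx
    return [[v / n_tiles for v in row] for row in acc]
```

    Averaging N independent tiles reduces the noise standard deviation by roughly a factor of sqrt(N), which is why images containing many unit cells lend themselves to this treatment.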

  2. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on an analysis of the morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
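
    The Otsu step used above can be sketched in a few lines of NumPy (the pipeline applies it locally; the histogram search below is the global form, shown only to make the idea concrete):

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold: choose the level that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(gray, bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist.cumsum().astype(float)          # cumulative class-0 counts
    m = (hist * centers).cumsum()            # cumulative class-0 intensity
    total, total_sum = w[-1], m[-1]
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = w[i - 1], total - w[i - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = m[i - 1] / w0, (total_sum - m[i - 1]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i - 1]
    return best_t

# Demo: two well-separated intensity populations (background vs. nuclei).
rng = np.random.default_rng(1)
background = rng.normal(50, 5, 5000)
nuclei = rng.normal(200, 10, 1000)
pixels = np.concatenate([background, nuclei])
t = otsu_threshold(pixels)
```

    The threshold lands in the gap between the two populations, so nearly all of the 1000 bright "nucleus" pixels end up above it.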

  3. Aesthetics and Composition in Deep Sky Imaging

    Science.gov (United States)

    Gendler, Robert

    It's safe to say that many of us began astrophotography feeling overwhelmed by the unnerving task of creating even the simplest astro image. Typically those first successful images were met with a healthy dose of humility as we began to understand the reality of assembling an aesthetically pleasing astronomical image. As we acquired more experience and gradually mastered the fundamentals of image processing our goals and objectives likely evolved and matured.

  4. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, image segmentation provides a detailed structural description for quantitative visualization of treating light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. Then we utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way. We also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning based automatic tissue segmentation in personalized medicine, especially in monitoring and treatment.

  5. Bio-inspired approach to multistage image processing

    Science.gov (United States)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage, a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
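
    As an illustration only (none of the functions below come from the paper), stage-wise convergence with a single compact output per stage, matched coarse-first against stored patterns, can be sketched as:

```python
import numpy as np

def downsample2(img):
    """One convergence stage: 2x2 block averaging, a crude stand-in for
    the pooling step described above."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multistage_codes(img, levels=2):
    """One compact 'activity pattern' per stage: here simply the coarse
    image flattened, so early stages are cheap to compare."""
    codes, cur = [], img.astype(float)
    for _ in range(levels):
        cur = downsample2(cur)
        codes.append(cur.ravel())
    return codes

def closest_match(query_codes, library):
    """Compare at the coarsest stage first for a very quick response;
    finer stages could refine ties (omitted for brevity)."""
    coarse = query_codes[-1]
    dists = [np.linalg.norm(coarse - ref[-1]) for ref in library]
    return int(np.argmin(dists))

# Demo: recognize which stored pattern a noisy query came from.
rng = np.random.default_rng(2)
patterns = [rng.random((16, 16)) for _ in range(4)]
library = [multistage_codes(p) for p in patterns]
query = patterns[2] + rng.normal(0, 0.05, (16, 16))
idx = closest_match(multistage_codes(query), library)
```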

  6. An astronomical observatory for Peru

    Science.gov (United States)

    del Mar, Juan Quintanilla; Sicardy, Bruno; Giraldo, Víctor Ayma; Callo, Víctor Raúl Aguilar

    2011-06-01

    Peru and France are to conclude an agreement to provide Peru with an astronomical observatory equipped with a 60-cm diameter telescope. The principal aims of this project are to establish and develop research and teaching in astronomy. Since 2004, a team of researchers from Paris Observatory has been working with the University of Cusco (UNSAAC) on the educational, technical and financial aspects of implementing this venture. During an international astronomy conference in Cusco in July 2009, the foundation stone of the future Peruvian Observatory was laid at the top of Pachatusan Mountain. UNSAAC, represented by its Rector, together with the town of Oropesa and the Cusco regional authority, undertook to make the sum of 300,000€ available to the project. An agreement between Paris Observatory and UNSAAC now enables Peruvian students to study astronomy through online teaching.

  7. Asteroids astronomical and geological bodies

    CERN Document Server

    Burbine, Thomas H

    2016-01-01

    Asteroid science is a fundamental topic in planetary science and is key to furthering our understanding of planetary formation and the evolution of the Solar System. Ground-based observations and missions have provided a wealth of new data in recent years, and forthcoming missions promise further exciting results. This accessible book presents a comprehensive introduction to asteroid science, summarising the astronomical and geological characteristics of asteroids. The interdisciplinary nature of asteroid science is reflected in the broad range of topics covered, including asteroid and meteorite classification, chemical and physical properties of asteroids, observational techniques, cratering, and the discovery of asteroids and how they are named. Other chapters discuss past, present and future space missions and the threat that these bodies pose for Earth. Based on an upper-level course on asteroids and meteorites taught by the author, this book is ideal for students, researchers and professional scientists ...

  8. Astronomical Data in Undergraduate courses

    Science.gov (United States)

    Clarkson, William I.; Swift, Carrie; Hughes, Kelli; Burke, Christopher J. F.; Burgess, Colin C.; Elrod, Aunna V.; Howard, Brittany; Stahl, Lucas; Matzke, David; Bord, Donald J.

    2016-06-01

    We present status and plans for our ongoing efforts to develop data analysis and problem-solving skills through Undergraduate Astronomy instruction. While our initiatives were developed with UM-Dearborn’s student body primarily in mind, they should be applicable to a wide range of institutions and student demographics. We focus here on two strands of our effort. Firstly, students in our Introductory Astronomy (ASTR 130) general-education course now perform several “Data Investigations”, in which they interrogate the Hubble Legacy Archive to illustrate important course concepts. This was motivated in part by the realization that typical public data archives now include tools for interrogating the observations that are sufficiently accessible for introductory astronomy students to use them to perform real science, albeit mostly at a descriptive level. We are continuing to refine these investigations, and, most importantly, to critically assess their effectiveness in terms of the student learning outcomes we wish to achieve. This work is supported by grant HST-EO-13758, provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Secondly, at the advanced-undergraduate level, students taking courses in our Astronomy minor are encouraged to gain early experience in techniques of astronomical observation and analysis that are used by professionals. We present two example projects from the Fall 2015 iteration of our upper-division course ASTR330 (The Cosmic Distance Ladder), one involving Solar System measurements, the second producing calibrated aperture photometry. For both projects, students conducted, analysed, and interpreted observations using our 0.4m campus telescope, and used many of the same analysis tools as professional astronomers. This work is supported in part by a Research Initiation and Seed grant from the

  9. An overview of medical image processing methods | Maras | African ...

    African Journals Online (AJOL)

    Various standards have been established for these instruments and end products, which are being used more frequently every day. Personal computers (PCs) have reached a significant level in image processing, carrying out analysis and visualization processes on doctors' desktops that once required expensive hardware.

  10. Image processing system performance prediction and product quality evaluation

    Science.gov (United States)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  11. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments, where interpreted signal processing packages are less efficient.

  12. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of image processing performance measures by presenting the use of such a measure as an actual value in a feedback structure. The idea behind this is that the control loop built in this way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented on the example of optical character recognition in an industrial application. Metrics for the quantification of performance at different image processing levels are discussed. The issues that those metrics should address from both the image processing and control points of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.
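
    The feedback idea can be sketched with a toy proportional controller: a measured performance value (here simply the foreground fraction after binarization, a stand-in for the paper's OCR-level metrics) is driven to a set point by actuating a processing parameter. Gain and image are synthetic:

```python
import numpy as np

def foreground_fraction(img, threshold):
    """The performance measure fed back to the controller: fraction of
    pixels classified as foreground."""
    return float((img > threshold).mean())

def control_threshold(img, set_point, gain=200.0, steps=50):
    """Proportional feedback loop: the measured performance value is
    compared with the set point and the error actuates the binarization
    threshold (gain chosen by hand for this synthetic image)."""
    threshold = float(img.mean())
    for _ in range(steps):
        error = foreground_fraction(img, threshold) - set_point
        threshold += gain * error        # too much foreground -> raise it
    return threshold

# Demo: drive the foreground fraction of a noisy image to 25%.
rng = np.random.default_rng(3)
img = rng.normal(128, 40, (64, 64))
t = control_threshold(img, set_point=0.25)
achieved = foreground_fraction(img, t)
```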

  13. SPARX, a new environment for Cryo-EM image processing.

    Science.gov (United States)

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  14. Cellular Neural Network for Real Time Image Processing

    International Nuclear Information System (INIS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-01-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this end. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET)
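
    For illustration, the steady-state response of a CNN cell array with the classic binary edge-detection template (zero feedback template, Laplacian-like control template, bias -1) can be sketched as follows; each cell depends only on its 3x3 neighbourhood, which is what makes the hardware fully parallel:

```python
import numpy as np

def cnn_edge(u):
    """Steady-state output of a CNN cell array with the classic binary
    edge-detection template (feedback A = 0, control template B below,
    bias z = -1). CNN convention: +1 = black, -1 = white; with A = 0 a
    single parallel pass per cell suffices."""
    B = np.array([[-1., -1., -1.],
                  [-1.,  8., -1.],
                  [-1., -1., -1.]])
    z = -1.0
    padded = np.pad(u.astype(float), 1, constant_values=-1.0)  # white frame
    h, w = u.shape
    x = np.empty((h, w))
    for i in range(h):          # each cell sees only its 3x3 neighbourhood
        for j in range(w):
            x[i, j] = (B * padded[i:i + 3, j:j + 3]).sum() + z
    # Standard CNN output nonlinearity (piecewise-linear saturation).
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Demo: a filled black square; only its boundary pixels remain black.
u = -np.ones((8, 8))
u[2:6, 2:6] = 1.0
y = cnn_edge(u)
```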

  15. Volumetric image processing: A new technique for three-dimensional imaging

    International Nuclear Information System (INIS)

    Fishman, E.K.; Drebin, B.; Magid, D.; St Ville, J.A.; Zerhouni, E.A.; Siegelman, S.S.; Ney, D.R.

    1986-01-01

    Volumetric three-dimensional (3D) image processing was performed on CT scans of 25 normal hips, and image quality and potential diagnostic applications were assessed. In contrast to surface-detection 3D techniques, volumetric processing preserves every pixel of transaxial CT data, replacing the gray scale with transparent "gels" and shading. Anatomically accurate 3D images can be rotated and manipulated in real time, including simulated tissue-layer "peeling" and mock surgery or disarticulation. This pilot study suggests that volumetric rendering is a major advance in the signal processing of medical image data, producing a high-quality, uniquely maneuverable image that is useful for fracture interpretation, soft-tissue analysis, surgical planning, and surgical rehearsal
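
    The contrast with surface rendering can be illustrated by a minimal ray-compositing sketch: every voxel contributes through an opacity transfer function, and "peeling" a tissue layer is just a change of that function, not of the data. All values below are synthetic:

```python
import numpy as np

def composite_ray(values, opacities):
    """Front-to-back alpha compositing along one ray: every sample
    contributes, so no pixel of the transaxial data is discarded."""
    color, transparency = 0.0, 1.0
    for v, a in zip(values, opacities):
        color += transparency * a * v
        transparency *= 1.0 - a
    return color

def render(volume, opacity_of):
    """Orthographic rendering along axis 0 of a 3-D scalar volume.
    `opacity_of` is a transfer function mapping voxel value to opacity,
    standing in for the 'transparent gel' assignment in the abstract."""
    _, h, w = volume.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            ray = volume[:, i, j]
            out[i, j] = composite_ray(ray, opacity_of(ray))
    return out

# Demo: soft "tissue" in front of dense "bone"; changing the transfer
# function peels the tissue away without touching the data.
vol = np.zeros((4, 2, 2))
vol[1] = 0.3                                   # tissue layer
vol[2] = 1.0                                   # bone layer
show_all = lambda v: np.where(v >= 1.0, 0.9, np.where(v > 0, 0.2, 0.0))
peel_soft = lambda v: np.where(v >= 1.0, 0.9, 0.0)
img = render(vol, show_all)
peeled = render(vol, peel_soft)
```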

  16. Young Astronomers' Observe with ESO Telescopes

    Science.gov (United States)

    1995-11-01

    characteristics are listed in great detail. Two are small and dense and the outermost is gaseous. It turns out that the distance to our Earth is 26 light-years and that it would not be too easy to observe the Ikaros system from here. It is unlikely that life can develop in this planetary system during the relatively short lifetime of the central star. United Kingdom: Mr. Michael Ching, Mr. Richard Field (Teacher) (Oundle School, Peterborough) This project is directed towards variable stars of the pulsating type. It discusses the theory of these pulsations and the peculiar location of these types of stars in the HR diagram, as well as the technique of determining distances by means of measurements of the period. Once the period has been found observationally, the period-luminosity diagram makes it possible to find the luminosity and hence the distance to the star. The project also involved real measurements of an RR Lyrae-type variable with a period of about 1/3 day. For this, a phototransistor and a recording device were used. The expected light variations were clearly seen. Addendum 2: A brief summary of the results obtained at ESO. This brief description is based on the provisional data reduction and subsequent interpretation by the six teams, as presented during a final session on November 20, 1995. Further work will allow the results to be quantified in greater detail. Each team was guided by a young ESO astronomer as Team Leader and was also given support during the observations by ESO astronomers Lex Kaper and Marcus Kissler, as well as by night assistants Vicente Reyes and Jesus Rodriguez (Garching) and Hernan Nunez, Jorge Miranda and Victor Merino (La Silla). For the data reduction, the teams used the MIDAS image processing system; the introduction was provided by Rein Warmels, one of ESO's experts on these matters.
Since the teams that were observing during the last night (3A and 3B) had very little time to reduce and interpret their data, it was not possible to carry

  17. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
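
    Chain coding of the kind the microprocessor computes can be sketched in a few lines; the eight Freeman directions are assumed in image coordinates (x to the right, y downward), and consecutive contour points are assumed to be 8-connected, as an edge tracker would yield:

```python
# Freeman 8-direction chain code: one direction index per step between
# successive boundary pixels, giving a very compact edge representation.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode an ordered pixel contour as a list of Freeman directions."""
    return [DIRS[(x2 - x1, y2 - y1)] for (x1, y1), (x2, y2)
            in zip(points, points[1:])]

# Demo: the boundary of a small square traced clockwise in image coords.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
          (1, 2), (0, 2), (0, 1), (0, 0)]
code = chain_code(square)
```

    The 9-point closed contour compresses to 8 direction indices; image statistics such as perimeter follow directly from the code.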

  18. Data processing for registered multimodal images and its clinical application

    International Nuclear Information System (INIS)

    Toyama, Hinako; Kobayashi, Akio; Uemura, Kouji

    1998-01-01

    We have developed two kinds of data processing methods for co-registered PET and MR images. The 3D brain surface, representing the cortical rim in the transaxial images, was projected onto a 2D plane by utilizing the Mollweide projection, which is an area-conserving method of displaying the globe as a world map. A quantitative ROI analysis on the brain surface and a 3D superimposed surface display were performed by means of the 2D projection image. A clustered brain image was created by referring to the clustered 3D correlation map of resting CBF, the acetazolamide response and the hyperventilatory response, where each pixel in the brain was labeled with the color representing its cluster number. With this method, the stage of hemodynamic deficiency was evaluated in a patient with occlusion of the internal carotid artery. The differences between the brain images obtained before and after revascularization surgery were also evaluated. (author)
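
    The forward Mollweide mapping itself is compact: an auxiliary angle is found iteratively, then scaled sine and cosine give the map coordinates, preserving area. A sketch (not the authors' implementation):

```python
import math

def mollweide(lon, lat, R=1.0):
    """Forward Mollweide projection (equal-area), the kind of map used
    above to unfold a 3-D surface onto a 2-D plane. Angles in radians."""
    # Auxiliary angle t solves 2t + sin(2t) = pi*sin(lat) (Newton's method).
    t = lat
    for _ in range(50):
        f = 2 * t + math.sin(2 * t) - math.pi * math.sin(lat)
        df = 2 + 2 * math.cos(2 * t)
        if abs(df) < 1e-12:          # derivative vanishes at the poles
            break
        t -= f / df
    x = R * (2 * math.sqrt(2) / math.pi) * lon * math.cos(t)
    y = R * math.sqrt(2) * math.sin(t)
    return x, y

# Demo: the equator maps to a straight segment, the poles to y = +-sqrt(2).
x0, y0 = mollweide(0.0, 0.0)
xe, _ = mollweide(math.pi, 0.0)      # eastern edge of the map
_, yp = mollweide(0.0, math.pi / 2)  # north pole
```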

  19. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    Science.gov (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
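
    A much simplified sketch of the width estimation: the measured working distance fixes the physical size of a pixel on the target, and a binary crack mask then converts pixel counts to millimetres. All camera parameters below are illustrative, not the paper's calibration:

```python
import numpy as np

def pixel_size_mm(working_distance_mm, focal_length_mm, pitch_mm):
    """Physical size of one pixel on the target from the pinhole-camera
    relation; the working distance is what the ultrasonic displacement
    sensor on the UAV provides."""
    return working_distance_mm * pitch_mm / focal_length_mm

def crack_width_mm(mask, px_mm):
    """Width profile of a roughly horizontal crack in a binary image:
    crack pixels per column times the physical pixel size (a strong
    simplification of the paper's hybrid binarization step)."""
    return mask.sum(axis=0) * px_mm

# Demo: a 3-pixel-thick synthetic crack imaged from 1 m away with a
# 4 mm lens and 2-micron pixel pitch -> 0.5 mm per pixel.
mask = np.zeros((20, 30), dtype=int)
mask[9:12, :] = 1
px = pixel_size_mm(1000.0, 4.0, 0.002)
widths = crack_width_mm(mask, px)
```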

  20. Dehydration process of fish analyzed by neutron beam imaging

    International Nuclear Information System (INIS)

    Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T.M.

    2009-01-01

    Since regulation of the water content of dried fish is an important factor in its quality, the water-loss process during the drying of squid and Japanese horse mackerel was analyzed through neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-loss process in more detail, spatial images were produced. From the images, it was clearly indicated that the decrease in water content was slowest around the shoulder. It was suggested that preventing deterioration around the shoulder of the dried fish is an important factor in keeping the quality of the dried fish in storage.

  1. Dynamic CT perfusion image data compression for efficient parallel processing.

    Science.gov (United States)

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A

    2016-03-01

    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
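
    One simple way to obtain lossless temporal compression with per-element random access is delta coding with a small exception table: most frame-to-frame differences in a time series are small, and a voxel's curve can be decoded without touching any other voxel. This is a sketch of the general idea, not the paper's actual codec:

```python
import numpy as np

def compress(series):
    """Lossless temporal delta coding of a (T, N) int16 series: keep
    frame 0 plus int8 frame-to-frame deltas, with an exception table for
    the rare deltas that do not fit in 8 bits."""
    base = series[0].copy()
    deltas = np.diff(series.astype(np.int32), axis=0)
    small = np.clip(deltas, -127, 127).astype(np.int8)
    exc = {(int(t), int(i)): int(deltas[t, i])
           for t, i in zip(*np.nonzero(np.abs(deltas) > 127))}
    return base, small, exc

def read_voxel(base, small, exc, i):
    """Random access: decode one voxel's full time curve without touching
    any other voxel's data."""
    T = small.shape[0] + 1
    out = np.empty(T, dtype=np.int32)
    out[0] = base[i]
    for t in range(T - 1):
        out[t + 1] = out[t] + exc.get((t, i), int(small[t, i]))
    return out

# Demo: smooth synthetic time-attenuation curves plus one abrupt jump
# (which lands in the exception table), reconstructed exactly.
rng = np.random.default_rng(4)
T, N = 24, 6
orig = (np.cumsum(rng.integers(-40, 41, size=(T, N)), axis=0) + 1000).astype(np.int16)
orig[12, 3] += 500
base, small, exc = compress(orig)
rec = np.stack([read_voxel(base, small, exc, i) for i in range(N)], axis=1)
```

    The delta array halves the storage of the int16 frames after the first, while reconstruction remains bit-exact.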

  2. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low cost and large scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. Results are independent of the device which captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  3. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2012-04-01

    Full Text Available The paper provides an overview of some possible uses of the software described in Part I. It contains real examples of image quality improvement, distortion simulations, objective and subjective quality assessment and other ways of image processing that can be obtained by the individual applications.

  4. Establishing an international reference image database for research and development in medical image processing

    NARCIS (Netherlands)

    Horsch, A.D.; Prinz, M.; Schneider, S.; Sipilä, O; Spinnler, K.; Vallée, J-P; Verdonck-de Leeuw, I; Vogl, R.; Wittenberg, T.; Zahlmann, G.

    2004-01-01

    INTRODUCTION: The lack of comparability of evaluation results is one of the major obstacles of research and development in Medical Image Processing (MIP). The main reason for that is the usage of different image datasets with different quality, size and Gold standard. OBJECTIVES: Therefore, one of

  5. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  6. Anesthesia Experiences During Magnetic Imaging Process on Pediatric Patients

    OpenAIRE

    Öztürk, Ömür; Üstebay, Sefer; Bilge, Ali

    2017-01-01

    We retrospectively studied, using hospital records, the quality of sedation, the complication rates of anesthesia applied with sodium thiopental and propofol, and the reasons for magnetic imaging requests in pediatric patients. Material and Method: In this study, 109 patients, aged from 3 months to 5 years, who underwent a magnetic imaging process under anesthesia were examined retrospectively. Results: Thiopental sodium was administered to 53 patients and propofol...

  7. Astronomers Take the Measure of Dark Matter in the universe

    Science.gov (United States)

    2001-09-01

    measurements of the cosmic microwave background radiation, the large-scale distribution of galaxies, and the properties of distant supernovas. The Institute of Astronomy team minimized systematic errors in their work by placing independent constraints on the masses of the clusters using data from NASA's Hubble Space Telescope and the Canada-France-Hawaii Telescope atop Mauna Kea, HI. The new Chandra results also show how the average X-ray luminosity and temperature of the hot gas varies with the mass of a cluster. These findings should allow astronomers to use the data from large cluster catalogues, for which only X-ray luminosities are generally available, to get even more accurate measurements of the mean mass density of the universe, and to understand further the processes by which clusters form and grow. The Chandra observations were carried out using the Advanced CCD Imaging Spectrometer, which was built for NASA by the Massachusetts Institute of Technology, Cambridge, and Pennsylvania State University, University Park. NASA's Marshall Space Flight Center in Huntsville, AL, manages the Chandra program, and TRW, Inc., Redondo Beach, CA, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, MA. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. Images associated with this release are available on the World Wide Web at: http://chandra.harvard.edu AND http://chandra.nasa.gov

  8. Pre-analytic process control: projecting a quality image.

    Science.gov (United States)

    Serafin, Mark D

    2006-09-26

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and, with it, customer perception of department quality. Regulatory requirements give laboratories that so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality.

  9. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission, devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in a polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer major improvements, such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA in defining the system image products and prototyping the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: Level-0 and Level-1A are system products which correspond to raw compressed and uncompressed data respectively (limited to internal calibration purposes); Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands) and an enhanced physical geometric model appended to the product but not applied; Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame. The
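
    The Level-1B radiometric step can be illustrated by a dark-signal subtraction followed by division with the normalized per-pixel response (PRNU flat field). All coefficients below are synthetic; the operational processor additionally handles crosstalk, defective pixels and restoration:

```python
import numpy as np

def radiometric_correction(raw, dark, flat):
    """Detector equalization sketch: subtract the dark signal, then
    divide by the normalized per-pixel response (PRNU flat field)."""
    prnu = flat / flat.mean()            # normalized response map
    return (raw - dark) / prnu

# Demo: recover a uniform scene observed through non-uniform detectors.
rng = np.random.default_rng(5)
scene = np.full((4, 4), 200.0)
dark = rng.uniform(5, 15, (4, 4))
flat = rng.uniform(0.9, 1.1, (4, 4))
raw = scene * (flat / flat.mean()) + dark
corrected = radiometric_correction(raw, dark, flat)
```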

  10. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    Full Text Available In partnership with the European Commission and in the framework of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer substantial improvements, such as a unique combination of global coverage with a wide field of view (290 km), high revisit frequency (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA in defining the system image products and prototyping the relevant image processing techniques. This paper first offers an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing. The Level-0 and Level-1A are system products corresponding to raw compressed and uncompressed data, respectively (limited to internal calibration purposes). The Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands) and an enhanced physical geometric model appended to the product but not applied. The Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km², based on the UTM/WGS84 reference frame.
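The Level-1B radiometric correction and Level-1C reflectance conversion described above can be sketched for a single pixel as follows. This is an illustrative outline only: the dark signal, PRNU gain, and solar irradiance values are made-up numbers, not actual Sentinel-2 calibration coefficients.

```python
import math

def radiometric_correct(dn, dark, prnu_gain):
    """Level-1B-style correction sketch for one pixel: subtract the dark
    signal, then divide by the pixel's response non-uniformity (PRNU) gain."""
    return (dn - dark) / prnu_gain

def toa_reflectance(radiance, esun, d2, cos_sza):
    """Level-1C-style top-of-atmosphere reflectance for one pixel, from
    at-sensor radiance, band solar irradiance ESUN, the squared Earth-Sun
    distance d2 (in AU^2), and the cosine of the solar zenith angle."""
    return math.pi * radiance * d2 / (esun * cos_sza)

# Hypothetical values for a single detector sample:
radiance = radiometric_correct(dn=120.0, dark=20.0, prnu_gain=1.0)  # 100.0
rho = toa_reflectance(radiance, esun=1000.0, d2=1.0, cos_sza=0.8)   # ~0.393
```

The same two formulas would be applied per band and per pixel over the whole 100x100 km² tile in the operational chain.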

  11. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Yuki Mawatari,1 Mikiko Fukushima2 1Igo Ophthalmic Clinic, Kagoshima, 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan Purpose: This study aims to evaluate the efficacy of processed images in predicting postoperative appearance following levator resection. Methods: The analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearance, and 55 patients (84.8%) responded positively regarding the usefulness of the processed images for predicting postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®
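The kind of "digital enlargement" of the eye region described above was done by hand in Photoshop; the operation itself can be sketched programmatically. The following is an illustrative stand-in only (the function name, band coordinates, and nearest-neighbour resampling are assumptions, not the authors' workflow):

```python
def simulate_lid_raise(img, top, bottom, raise_px):
    """Illustrative sketch: vertically stretch the horizontal band
    img[top:bottom] upward by raise_px rows using nearest-neighbour
    resampling, mimicking a digitally enlarged palpebral aperture in a
    grayscale photo stored as a list of pixel rows."""
    band = img[top:bottom]
    new_h = len(band) + raise_px
    # Nearest-neighbour index map from the stretched band back to the source.
    stretched = [band[i * len(band) // new_h] for i in range(new_h)]
    out = [row[:] for row in img]
    out[top - raise_px:bottom] = stretched  # band now starts raise_px higher
    return out

# Tiny synthetic 10x6 "photo" with row-dependent grey values:
img = [[float(10 * r + c) for c in range(6)] for r in range(10)]
pred = simulate_lid_raise(img, top=4, bottom=8, raise_px=2)  # still 10 rows
```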

  12. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    Energy Technology Data Exchange (ETDEWEB)

    Devès, G.; Daudin, L. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V. [Univ. Bordeaux, F-33170 Gradignan (France); Michelet, C.; Seznec, H.; Barberet, P. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France)

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the combined use of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, together with optical and fluorescence microscopy techniques employed prior to irradiation. Such a broad set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present at a concentration above the detection limit; density and backscattered maps; and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time consuming and repetitive and, as such, can be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile image processing program that is well suited to basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: list-file processing, spectroscopic imaging, local information extraction, quantitative density maps, and database management using OMERO.
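The plugin itself is Java/ImageJ code; the per-ROI reduction it batches (mean signal of each elemental map inside a region of interest such as a whole cell) can be sketched in a few lines. A minimal Python analogue of one such batch step, with made-up map values and ROI coordinates:

```python
def roi_mean(elemental_map, roi):
    """Mean signal of a 2-D map (list of rows) inside a rectangular ROI
    given as (row0, row1, col0, col1), half-open intervals."""
    r0, r1, c0, c1 = roi
    vals = [elemental_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

# Hypothetical 4x4 elemental maps (e.g. from micro-PIXE fits):
fe_map = [[float(4 * r + c) for c in range(4)] for r in range(4)]
maps = {"P": [[1.0] * 4 for _ in range(4)], "Fe": fe_map}

cell_roi = (1, 3, 1, 3)  # a 2x2 "intracellular compartment"
# Batch step: one mean per element for this ROI.
means = {el: roi_mean(m, cell_roi) for el, m in maps.items()}
```

In the real plugin the same loop would run over every ROI and every acquisition in a list file, which is exactly the repetitive work being automated.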

  13. Astronomers Gain Clues About Fundamental Physics

    Science.gov (United States)

    2005-12-01

    An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. [Image: The Robert C. Byrd Green Bank Telescope. Credit: NRAO/AUI/NSF] "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions," said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever," said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as
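The fine-structure constant mentioned in this record can be written out explicitly: in SI units, alpha = e² / (4·pi·ε₀·ħ·c). A quick numerical check using CODATA values of the constants:

```python
import math

# Fundamental constants (SI; e and c are exact by definition since 2019,
# the others are CODATA recommended values):
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s

# Dimensionless fine-structure constant:
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036, the familiar "1/137"
```

It is precisely because alpha is dimensionless that a measured drift in its value would signal new physics rather than a change of units.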

  14. On the Astronomical Knowledge and Traditions of Aboriginal Australians

    Science.gov (United States)

    Hamacher, Duane W.

    2011-12-01

    Historian of science David Pingree defines science in a broad context as the process of systematically explaining perceived or imaginary phenomena. Although Westerners tend to think of science as being restricted to Western culture, I argue in this thesis that astronomical scientific knowledge is found in Aboriginal traditions. Although research into the astronomical traditions of Aboriginal Australians stretches back more than 150 years, it remains relatively scant in the literature. We do know that the sun, moon, and night sky have been an important and inseparable component of the landscape for hundreds of Australian Aboriginal groups for thousands (perhaps tens of thousands) of years. The literature reveals that astronomical knowledge was used for timekeeping, denoting seasonal change and the availability of food sources, navigation, and tidal prediction. It was also important for rituals and ceremonies, birth totems, marriage systems, cultural mnemonics, and folklore. Despite this, the field remains relatively under-researched considering the diversity of Aboriginal cultures and the length of time people have inhabited Australia (well over 40,000 years). Additionally, very little research investigating the nature and role of transient celestial phenomena has been conducted, leaving our understanding of Indigenous astronomical knowledge grossly incomplete. This thesis is an attempt to overcome this deficiency, with a specific focus on transient celestial phenomena. My research, situated in the field of cultural astronomy, draws from the sub-disciplines of archaeoastronomy, ethnoastronomy, historical astronomy, and geomythology. This approach incorporates the methodologies and theories of disciplines in the natural sciences, social sciences, and humanities. This thesis, by publication, makes use of archaeological, ethnographic, and historical records, astronomical software packages, and geographic programs to better understand the ages of astronomical traditions and the

  15. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

    The present study was undertaken to compare images geometrically corrected with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) against images standardized with an individually custom-fabricated alignment instrument. Two non-standardized periapical films, using an XCP film holder only, were taken of the lower molar region of 19 volunteers. Two standardized periapical films, using a customized XCP film holder with impression material on the bite-block, were taken for each person. Geometric correction was performed with Adobe Photoshop and the NIH Image program. Specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of the grey values of the subtracted images were used to measure image similarity. The average standard deviation of the grey values of the subtracted images in the standardized group was slightly lower than that of the corrected group; however, the difference was found to be statistically insignificant (p>0.05). It is considered that NIH Image and Adobe Photoshop can be used for the correction of non-standardized films taken with an XCP film holder in the lower molar region.
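The similarity measure used in this study, the standard deviation of the grey values of the subtracted image, is straightforward to compute. A minimal sketch with synthetic images (the image sizes and the one-pixel misregistration below are illustrative, not the study's data):

```python
import random
import math

def subtraction_similarity(img_a, img_b):
    """The study's measure: standard deviation of the grey values of the
    subtracted image. Lower values mean better geometric agreement;
    images are lists of equal-length pixel rows."""
    diff = [a - b for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb)]
    mean = sum(diff) / len(diff)
    return math.sqrt(sum((d - mean) ** 2 for d in diff) / len(diff))

random.seed(0)
base = [[random.randint(0, 255) for _ in range(32)] for _ in range(32)]
shifted = [row[1:] + row[:1] for row in base]  # 1-pixel horizontal shift

same = subtraction_similarity(base, base)      # 0.0: perfect registration
off = subtraction_similarity(base, shifted)    # > 0: misregistration shows up
```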

  16. A SURVEY OF ASTRONOMICAL RESEARCH: A BASELINE FOR ASTRONOMICAL DEVELOPMENT

    International Nuclear Information System (INIS)

    Ribeiro, V. A. R. M.; Russo, P.; Cárdenas-Avendaño, A.

    2013-01-01

    Measuring scientific development is a difficult task. Different metrics have been put forward to evaluate scientific development; in this paper we explore a metric that uses the number of peer-reviewed (and, when available, non-peer-reviewed) research articles as an indicator of development in the field of astronomy. We analyzed the available publication record, using the Smithsonian Astrophysical Observatory/NASA Astrophysics Data System, by country affiliation in the time span between 1950 and 2011 for countries with a gross national income of less than 14,365 USD in 2010. This represents 149 countries. We propose that this metric identifies countries in "astronomical development" with a culture of research publishing. We also propose that for a country to develop in astronomy, it should invest in outside expert visits, send its staff abroad to study, and establish a culture of scientific publishing. Furthermore, we propose that this paper may be used as a baseline to measure the success of major international projects, such as the International Year of Astronomy 2009.
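The core of the metric, counting articles per country affiliation within a time span, reduces to a filtered tally. A miniature sketch (the records below are invented examples, not ADS data):

```python
from collections import Counter

# Hypothetical bibliographic records; real input would come from an ADS query.
records = [
    {"country": "Kenya",   "year": 1995, "refereed": True},
    {"country": "Kenya",   "year": 2005, "refereed": True},
    {"country": "Bolivia", "year": 1960, "refereed": False},
    {"country": "Bolivia", "year": 2010, "refereed": True},
]

def articles_per_country(records, start=1950, end=2011, refereed_only=True):
    """Count articles per country affiliation in [start, end], optionally
    restricted to peer-reviewed entries, as in the paper's metric."""
    counts = Counter()
    for r in records:
        if start <= r["year"] <= end and (r["refereed"] or not refereed_only):
            counts[r["country"]] += 1
    return counts

counts = articles_per_country(records)
```

Dropping `refereed_only` would reproduce the paper's fallback of including non-peer-reviewed articles where peer-reviewed counts are unavailable.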

  17. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent and human-readable manner has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (APIs) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data, all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
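The idea of an XML instrument description that software can parse into commands and telemetry is easy to illustrate. The snippet below is an AIML-flavoured sketch: the element and attribute names are invented for illustration and are not the actual AIML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical instrument description in the spirit of AIML/IML:
aiml = """
<Instrument name="FarIRSpectrometer">
  <Command id="setIntegrationTime">
    <Argument name="seconds" type="float" min="0.1" max="60"/>
  </Command>
  <Telemetry id="detectorTemp" units="K"/>
</Instrument>
"""

root = ET.fromstring(aiml)
# A client can discover the command vocabulary without hard-coding it:
commands = {c.get("id"): [a.get("name") for a in c.findall("Argument")]
            for c in root.findall("Command")}
telemetry = [t.get("id") for t in root.findall("Telemetry")]
```

Because the description is data rather than code, the same client can control a new instrument simply by loading a different XML file, which is the portability argument the project makes.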

  18. Triple Bioluminescence Imaging for In Vivo Monitoring of Cellular Processes

    Directory of Open Access Journals (Sweden)

    Casey A Maguire

    2013-01-01

    Full Text Available Bioluminescence imaging (BLI) has been shown to be crucial for monitoring in vivo biological processes. So far, only dual bioluminescence imaging using firefly (Fluc) and Renilla or Gaussia (Gluc) luciferase has been achieved, due to the lack of other efficiently expressed luciferases using different substrates. Here, we characterized a codon-optimized luciferase from Vargula hilgendorfii (Vluc) as a reporter for mammalian gene expression. We showed that Vluc can be multiplexed with Gluc and Fluc for sequential imaging of three distinct cellular phenomena in the same biological system using vargulin, coelenterazine, and D-luciferin substrates, respectively. We applied this triple imaging system to monitor the effect of soluble tumor necrosis factor-related apoptosis-inducing ligand (sTRAIL), delivered using an adeno-associated viral vector (AAV), on brain tumors in mice. Vluc imaging showed efficient sTRAIL gene delivery to the brain, while Fluc imaging revealed a robust antiglioma therapy. Further, nuclear factor-κB (NF-κB) activation in response to sTRAIL binding to glioma cell death receptors was monitored by Gluc imaging. This work is the first demonstration of trimodal in vivo bioluminescence imaging and will have broad applicability in many different fields, including immunology, oncology, virology, and neuroscience.

  19. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix; Egiazarian, Karen; Kautz, Jan; Pulli, Kari; Steinberger, Markus; Tsai, Yun-Ta; Rouf, Mushfiqur; Pająk, Dawid; Reddy, Dikpal; Gallo, Orazio; Liu, Jing; Heidrich, Wolfgang

    2014-01-01

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces cumulative error, as each step in the pipeline considers only the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model and enforces natural-image priors while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible, and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach handles high-resolution images very efficiently, making even mobile implementations feasible.
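To make the "cascaded module" criticism concrete, here is one such isolated stage: a deliberately naive half-resolution demosaic of an RGGB Bayer mosaic. Each 2x2 tile becomes one RGB pixel, discarding information that a joint formulation like FlexISP would exploit. This sketch illustrates the conventional baseline, not the paper's method.

```python
def naive_demosaic_rggb(raw):
    """Naive cascaded demosaic: each 2x2 RGGB tile of the raw mosaic (a list
    of rows of floats) becomes one RGB pixel, averaging the two green
    samples. Runs in isolation, with no knowledge of later pipeline steps."""
    out = []
    for r in range(0, len(raw), 2):
        row = []
        for c in range(0, len(raw[0]), 2):
            red = raw[r][c]
            green = (raw[r][c + 1] + raw[r + 1][c]) / 2.0
            blue = raw[r + 1][c + 1]
            row.append((red, green, blue))
        out.append(row)
    return out

# One 2x2 Bayer tile -> one RGB pixel:
raw = [[10.0, 20.0],
       [30.0, 40.0]]
rgb = naive_demosaic_rggb(raw)  # [[(10.0, 25.0, 40.0)]]
```

Any noise or blur in `raw` propagates unchecked into `rgb`; the paper's point is that solving demosaicking, denoising, and deconvolution jointly against the original sensor data avoids this accumulation.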

  20. Automated processing of zebrafish imaging data: a survey.

    Science.gov (United States)

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
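One of the surveyed tasks, heartbeat detection, typically reduces to analyzing the mean intensity of a heart region over the frames of a movie. A minimal sketch on a synthetic trace (the 2.5 Hz signal and 30 fps frame rate are illustrative; real input would be a per-frame ROI mean):

```python
import math

fps = 30.0
n = 300  # 10 seconds of video
# Synthetic ROI-mean trace pulsing at 2.5 beats per second:
trace = [1.0 + 0.2 * math.sin(2 * math.pi * 2.5 * i / fps) for i in range(n)]

# Count upward zero crossings of the mean-subtracted trace: one per beat.
mean = sum(trace) / n
x = [v - mean for v in trace]
crossings = sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)
bpm = 60.0 * crossings / (n / fps)  # close to the true 150 beats per minute
```

Real recordings would need band-pass filtering or spectral peak picking to cope with noise and motion, but the intensity-over-time principle is the same.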