WorldWideScience

Sample records for astronomical image processing

  1. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards, simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture tailored specifically for astronomy and provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed so that other astronomers with similar needs may benefit from the present experience. 9 references.

  2. Application of digital image processing techniques to astronomical imagery 1977

    Science.gov (United States)

    Lorre, J. J.; Lynn, D. J.

    1978-01-01

    Nine specific techniques, or combinations of techniques, developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  3. Lessons from the masters current concepts in astronomical image processing

    CERN Document Server

    2013-01-01

    There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same and their contributions both technically and artistically are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today.   The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...

  4. Application of digital image processing techniques to astronomical imagery 1978

    Science.gov (United States)

    Lorre, J. J.

    1978-01-01

    Techniques for using image processing in astronomy are identified and developed for the following: (1) geometric and radiometric decalibration of vidicon-acquired spectra; (2) automatic identification and segregation of stars from galaxies; and (3) display of multiband radio maps in compact and meaningful formats. Examples are presented of these techniques applied to a variety of objects.

  5. Application of digital image processing techniques to astronomical imagery, 1979

    Science.gov (United States)

    Lorre, J. J.

    1979-01-01

    Several areas of application of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation, with a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue-saturation-intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
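
    As an illustration of the conventional Wiener approach mentioned above (not the paper's own code), the following sketch deconvolves a synthetically blurred star field, assuming a known Gaussian seeing PSF and a constant noise-to-signal ratio nsr; all names and values are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Wiener deconvolution with an assumed constant noise-to-signal ratio."""
    # Embed the PSF in a frame the size of the image, centred at the origin,
    # so its FFT lines up sample-for-sample with the image FFT.
    psf_full = np.zeros_like(image, dtype=float)
    ky, kx = psf.shape
    psf_full[:ky, :kx] = psf
    psf_full = np.roll(psf_full, (-(ky // 2), -(kx // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_full)
    G = np.fft.fft2(image)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)   # classic Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# Toy demonstration: blur a sparse star field with a Gaussian "seeing" PSF,
# add noise, and restore it.
rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = rng.uniform(100, 1000, 20)

yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

observed = fftconvolve(truth, psf, mode="same") + rng.normal(0, 1.0, truth.shape)
restored = wiener_deconvolve(observed, psf, nsr=1e-3)
```

    A maximum entropy restoration would replace this closed-form filter with an iterative, positivity-constrained optimisation; the Wiener form is shown here only because it is compact.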

  6. Recent developments at JPL in the application of digital image processing techniques to astronomical images

    Science.gov (United States)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.

    1976-01-01

    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
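
    A minimal numpy sketch of two of the operations described above, high-pass filtering to bring out faint nebulosity and a ratio image to reveal relative colour differences, assuming two already registered plate scans of equal size; the array names and the smoothing scale are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_enhance(image, sigma=20.0):
    """Reveal faint nebulosity by subtracting a heavily smoothed copy of the
    image (an unsharp-mask style high-pass filter)."""
    return image - gaussian_filter(image, sigma)

def ratio_image(image_a, image_b, floor=1.0):
    """Relative colour differences between two registered plates taken with
    different spectral sensitivities."""
    return image_a / np.clip(image_b, floor, None)

# Hypothetical registered plate scans (equal-sized arrays).
rng = np.random.default_rng(1)
plate_blue = rng.normal(100.0, 5.0, (256, 256))
plate_red = rng.normal(100.0, 5.0, (256, 256))

nebulosity = highpass_enhance(plate_blue)
colour = ratio_image(plate_blue, plate_red)
```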

  7. Youpi: A Web-based Astronomical Image Processing Pipeline

    Science.gov (United States)

    Monnerville, M.; Sémah, G.

    2010-12-01

    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy-to-use web application providing high-level functionalities to perform data reduction on scientific FITS images. It is built on top of open source processing tools that are released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time, and to facilitate teamwork by allowing fine-grained sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with the Prototype and script.aculo.us JavaScript libraries.

  8. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    Science.gov (United States)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government, and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices, and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.

  9. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelet transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  10. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    Science.gov (United States)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will be able to help make atmospheric forecasts (cloud, humidity, rain) using Meteosat data for robotic telescopes. MIPS uses a Python library for Eumetsat data, aims to be completely open source, and is licensed under the GNU General Public License (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.

  11. Observatory Sponsoring Astronomical Image Contest

    Science.gov (United States)

    2005-05-01

    and to provide a showcase for a broad range of astronomical research and celestial objects," Adams added. In addition, NRAO is developing enhanced data visualization techniques and data-processing recipes to assist radio astronomers in making quality images and in combining radio data with data collected at other wavelengths, such as visible-light or infrared, to make composite images. "We encourage all our telescope users to take advantage of these techniques to showcase their research," said Juan Uson, a member of the NRAO scientific staff and the observatory's EPO scientist. "All these efforts should demonstrate the vital and exciting roles that radio telescopes, radio observers, and the NRAO play in modern astronomy," Lo said. "While we want to encourage images that capture the imagination, we also want to emphasize that extra effort invested in enhanced imagery also will certainly pay off scientifically, by revealing subtleties and details that may have great significance for our understanding of astronomical objects," he added. Details of the NRAO Image Contest, which will become an annual event, are on the observatory's Web site. The observatory will announce winners on October 15. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

  12. Planetary imaging with amateur astronomical instruments

    Science.gov (United States)

    Papathanasopoulos, K.; Giannaris, G.

    2017-09-01

    Planetary imaging can vary with the type and size of the instruments and the processing used. With basic amateur telescopes and software, images can be captured of our planetary system, mainly Jupiter, Saturn, and Mars, but also of solar eclipses, solar flares, and much more. We discuss how planetary photos can be useful for professional astronomers, and how amateur astronomers can play a role in that field.

  13. Image enhancement for astronomical scenes

    Science.gov (United States)

    Lucas, Jacob; Calef, Brandoch; Knox, Keith

    2013-09-01

    Telescope images of astronomical objects and man-made satellites are frequently characterized by high dynamic range and low SNR. We consider the problem of how to enhance these images, with the aim of making them visually useful rather than radiometrically accurate. Standard contrast and histogram adjustment tends to strongly amplify noise in dark regions of the image. Sophisticated techniques have been developed to address this problem in the context of natural scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty space. We compare two classes of algorithms: contrast-limited adaptive histogram equalization, which achieves spatial localization via a tiling of the image, and gradient-domain techniques, which perform localized contrast adjustment by non-linearly remapping the gradient of the image in a content-dependent manner. We extend these to include a priori knowledge of SNR and the processing (e.g. deconvolution) that was applied in the preparation of the image. The methods will be illustrated with images of satellites from a ground-based telescope.
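
    For the first class of algorithms, contrast-limited adaptive histogram equalization, a brief sketch using scikit-image is given below; it is not the authors' implementation, and the tile count and clip limit are arbitrary illustrative choices. The clip limit is what keeps noise in mostly empty sky tiles from being amplified without bound.

```python
import numpy as np
from skimage import exposure

def enhance_astronomical_frame(image, clip_limit=0.01, tiles=8):
    """Contrast-limited adaptive histogram equalization (CLAHE) of a
    high-dynamic-range frame; spatial localization comes from the tiling."""
    # CLAHE in scikit-image expects values in [0, 1].
    scaled = exposure.rescale_intensity(image.astype(float), out_range=(0.0, 1.0))
    return exposure.equalize_adapthist(scaled,
                                       kernel_size=image.shape[0] // tiles,
                                       clip_limit=clip_limit)

# Synthetic stand-in for a telescope frame: faint background plus one very
# bright feature, giving the high dynamic range discussed above.
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 1.0, (512, 512))
frame[200:220, 200:220] += 5000.0
enhanced = enhance_astronomical_frame(frame)
```

    A gradient-domain alternative would instead remap the image gradient nonlinearly and reintegrate it; that branch of the comparison is not shown here.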

  14. Astronomical pipeline processing using fuzzy logic

    Science.gov (United States)

    Shamir, Lior

    In the past few years, pipelines providing astronomical data have become increasingly important. The wide use of robotic telescopes has led to significant discoveries, and sky survey projects such as SDSS and the future LSST are now considered among the premier projects in the field of astronomy. The huge amount of data produced by these pipelines raises the need for automatic processing. Astronomical pipelines introduce several well-defined problems such as astronomical image compression, cosmic-ray hit rejection, transient detection, meteor triangulation, and association of point sources with their corresponding known stellar objects. We developed and applied soft computing algorithms that provide new or improved solutions to these growing problems in the field of pipeline processing of astronomical data. One new approach that we use is fuzzy logic-based algorithms, which enable automatic analysis of astronomical pipelines and allow mining the data for not-yet-known astronomical discoveries such as optical transients and variable stars. The developed algorithms have been tested with excellent results on the NightSkyLive sky survey, which provides a pipeline of 150 astronomical pictures per hour and covers almost the entire global night sky.
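
    The fuzzy logic flavour of such pipeline decisions can be sketched as follows. This is not the dissertation's algorithm; it is a generic example showing how triangular membership functions and a minimum t-norm might score a detection as a cosmic-ray hit, with all thresholds invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function that peaks at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-12),
                             (c - x) / (c - b + 1e-12)), 0.0, 1.0))

def cosmic_ray_likelihood(peak_to_neighbour_ratio, width_pixels):
    """Fuzzy score that a detection is a cosmic-ray hit rather than a star.

    Illustrative rule: cosmic-ray hits are sharp (width well below the seeing
    PSF) AND much brighter than their immediate neighbourhood.
    """
    sharp = tri(width_pixels, 0.0, 0.5, 1.5)                  # "width is small"
    spiky = tri(peak_to_neighbour_ratio, 3.0, 10.0, 50.0)     # "peak dominates"
    return min(sharp, spiky)                                  # fuzzy AND (min t-norm)

print(cosmic_ray_likelihood(peak_to_neighbour_ratio=12.0, width_pixels=0.8))
```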

  15. Future Directions for Astronomical Image Display

    Science.gov (United States)

    Mandel, Eric

    2000-01-01

    In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.

  16. Coronagraph for astronomical imaging and spectrophotometry

    Science.gov (United States)

    Vilas, Faith; Smith, Bradford A.

    1987-01-01

    A coronagraph designed to minimize scattered light in astronomical observations caused by the structure of the primary mirror, secondary mirror, and secondary support structure of a Cassegrainian telescope is described. Direct (1:1) and reducing (2.7:1) imaging of astronomical fields are possible. High-quality images are produced. The coronagraph can be used with either a two-dimensional charge-coupled device or photographic film camera. The addition of transmission dispersing optics converts the coronagraph into a low-resolution spectrograph. The instrument is modular and portable for transport to different observatories.

  17. Profile fitting in crowded astronomical images

    Science.gov (United States)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Also, because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, CCD detectors are usually used. The information to be extracted from the frames is each object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise. There are three different sources of noise: celestial background sources, the object signal itself, and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or as their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
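
    The flavour of the centroiding problem can be illustrated with the short numerical sketch below: a first-moment estimate refined by a weighted least-squares fit of a 2-D Gaussian point-source model. This is a numerical stand-in, not the analytic method developed in the thesis, and the stamp size, noise model, and starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def moment_centroid(stamp):
    """First-moment (intensity-weighted) centroid of a small cut-out."""
    yy, xx = np.mgrid[0:stamp.shape[0], 0:stamp.shape[1]]
    w = np.clip(stamp, 0, None)
    return (w * xx).sum() / w.sum(), (w * yy).sum() / w.sum()

def gaussian_ls_centroid(stamp, sigma_noise=1.0):
    """Refine the centroid by weighted least squares against a 2-D Gaussian
    point-source model."""
    yy, xx = np.mgrid[0:stamp.shape[0], 0:stamp.shape[1]]
    x0, y0 = moment_centroid(stamp)

    def residuals(p):
        amp, xc, yc, s, bkg = p
        model = bkg + amp * np.exp(-((xx - xc) ** 2 + (yy - yc) ** 2) / (2 * s ** 2))
        return ((stamp - model) / sigma_noise).ravel()

    p0 = [stamp.max() - np.median(stamp), x0, y0, 1.5, np.median(stamp)]
    fit = least_squares(residuals, p0)
    return fit.x[1], fit.x[2]          # fitted (x, y) centroid

# Synthetic non-resolved object: Gaussian PSF on a background, Poisson noise.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:15, 0:15]
stamp = 50 + 400 * np.exp(-((xx - 7.3) ** 2 + (yy - 6.8) ** 2) / (2 * 1.6 ** 2))
stamp = rng.poisson(stamp).astype(float)
print(gaussian_ls_centroid(stamp))
```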

  18. Blind Source Separation of Multispectral Astronomical Images

    Science.gov (United States)

    Bijaoui, Albert; Nuzillard, Danielle

    Multispectral images allow pixels to be classified, but often with the drawback that each pixel value is the result of a combination of different sources. We examined the ability of Blind Source Separation (BSS) methods to restore the independent sources. We tested different tools on HST images of the Seyfert galaxy 3C120: the Karhunen-Loève expansion based on the diagonalization of the cross-correlation matrix, algorithms which maximize contrast functions, and programs which take into account the cross-correlation between shifted sources. With the last tools we obtained similar decompositions corresponding mainly to real phenomena. BSS can be considered an interesting exploratory tool for astronomical data mining.
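
    A compact sketch of two of the families of tools mentioned above, using scikit-learn: PCA as a stand-in for the Karhunen-Loève expansion (diagonalising the band-to-band covariance) and FastICA as an example of a contrast-function-maximising separator. The toy cube, the mixing matrix, and the number of sources are assumptions for illustration; the shift-aware algorithms the authors also tested are not covered here.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def separate_sources(cube, n_sources=2):
    """Blind source separation of a multispectral cube shaped (bands, ny, nx)."""
    nbands, ny, nx = cube.shape
    X = cube.reshape(nbands, ny * nx).T            # rows: pixels, columns: bands

    kl_maps = PCA(n_components=n_sources).fit_transform(X)
    ica_maps = FastICA(n_components=n_sources, random_state=0).fit_transform(X)

    return (kl_maps.T.reshape(n_sources, ny, nx),
            ica_maps.T.reshape(n_sources, ny, nx))

# Toy multispectral "observation": two spatial sources mixed into four bands.
rng = np.random.default_rng(0)
ny = nx = 64
sources = np.stack([rng.normal(size=(ny, nx)), rng.uniform(size=(ny, nx))])
mixing = rng.uniform(0.2, 1.0, size=(4, 2))
cube = np.tensordot(mixing, sources, axes=1) + 0.01 * rng.normal(size=(4, ny, nx))

kl_components, ica_components = separate_sources(cube, n_sources=2)
```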

  19. An adaptation of astronomical image processing enables characterization and functional 3D mapping of individual sites of excitation-contraction coupling in rat cardiac muscle.

    Science.gov (United States)

    Tian, Qinghai; Kaestner, Lars; Schröder, Laura; Guo, Jia; Lipp, Peter

    2017-11-14

    In beating cardiomyocytes, synchronized localized Ca2+ transients from thousands of active excitation-contraction coupling sites (ECC couplons) comprising plasma and sarcoplasmic reticulum membrane calcium channels are important determinants of the heart's performance. Nevertheless, our knowledge about the properties of ECC couplons is limited by the lack of appropriate experimental and analysis strategies. We designed CaCLEAN to untangle the fundamental characteristics of ECC couplons by combining the astronomer's CLEAN algorithm with known properties of calcium diffusion. CaCLEAN empowers the investigation of fundamental properties of ECC couplons in beating cardiomyocytes without pharmacological interventions. Upon examining individual ECC couplons at the nanoscopic level, we reveal their roles in the negative amplitude-frequency relationship and in β-adrenergic stimulation, including decreasing and increasing firing reliability, respectively. CaCLEAN combined with 3D confocal imaging of beating cardiomyocytes provides a functional 3D map of active ECC couplons (on average, 17,000 per myocyte). CaCLEAN will further enlighten the ECC-couplon-remodelling processes that underlie cardiac diseases.
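
    The astronomer's CLEAN algorithm that CaCLEAN builds on can be sketched in its simplest (Högbom) form as below; this is a generic illustration, not the CaCLEAN code, and the gain, threshold, and toy beam are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def hogbom_clean(dirty, psf, gain=0.1, threshold=0.01, max_iter=500):
    """Minimal Hogbom CLEAN: repeatedly find the brightest residual pixel,
    subtract a scaled, shifted copy of the PSF there, and record the removed
    flux as a point-like model component."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    peak0 = residual.max()
    ky, kx = psf.shape[0] // 2, psf.shape[1] // 2

    for _ in range(max_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak < threshold * peak0:
            break
        flux = gain * peak
        model[y, x] += flux
        # Subtract the PSF centred on (y, x), clipped at the image edges.
        y0, y1 = max(0, y - ky), min(residual.shape[0], y + ky + 1)
        x0, x1 = max(0, x - kx), min(residual.shape[1], x + kx + 1)
        residual[y0:y1, x0:x1] -= flux * psf[ky - (y - y0):ky + (y1 - y),
                                             kx - (x - x0):kx + (x1 - x)]
    return model, residual

# Toy "dirty image": two point sources convolved with a Gaussian beam.
yy, xx = np.mgrid[-10:11, -10:11]
beam = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.5 ** 2))
beam /= beam.max()
sky = np.zeros((96, 96)); sky[40, 40] = 1.0; sky[60, 55] = 0.6
dirty = fftconvolve(sky, beam, mode="same")
components, residual = hogbom_clean(dirty, beam)
```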

  20. Longwave Imaging for Astronomical Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact portable longwave camera for astronomical applications. In Phase 1, we successfully developed the eye of the camera, i.e. the focal...

  1. PRAIA - Platform for Reduction of Astronomical Images Automatically

    Science.gov (United States)

    Assafin, M.; Vieira Martins, R.; Camargo, J. I. B.; Andrei, A. H.; Da Silva Neto, D. N.; Braga-Ribas, F.

    2011-06-01

    PRAIA performs high precision differential photometry and astrometry on digitized images (CCD frames, Schmidt plate surveys, etc.). The package's main characteristics are automation, accuracy, and processing speed. Written in FORTRAN 77, it can run in scripts and interact with any visualization and analysis software. PRAIA copes with the ever-growing amount of observational data available from private and public sources, including data mining and next-generation fast-telescope all-sky surveys, like SDSS, Pan-STARRS, and others. PRAIA was officially assigned as the astrometric supporting tool for participants in the GAIA-FUNSSO activities and will be freely available to the astronomical community.

  2. Wide-field ultraviolet imager for astronomical transient studies

    Science.gov (United States)

    Mathew, Joice; Ambily, S.; Prakash, Ajin; Sarpotdar, Mayuresh; Nirmal, K.; Sreejith, A. G.; Safonova, Margarita; Murthy, Jayant; Brosch, Noah

    2018-03-01

    Though the ultraviolet (UV) domain plays a vital role in the studies of astronomical transient events, the UV time-domain sky remains largely unexplored. We have designed a wide-field UV imager that can be flown on a range of available platforms, such as high-altitude balloons, CubeSats, and larger space missions. The major scientific goals are the variability of astronomical sources, detection of transients such as supernovae, novae, and tidal disruption events, and characterization of active galactic nuclei variability. The instrument has an 80 mm aperture with a circular field of view of 10.8 degrees, an angular resolution of ~22 arcsec, and a 240-390 nm spectral observation window. The detector for the instrument is a Microchannel Plate (MCP)-based image intensifier with both photon counting and integration capabilities. An FPGA-based detector readout mechanism and real-time data processing have been implemented. The imager is designed in such a way that its lightweight and compact nature is well suited to CubeSat dimensions. Here we present various design and developmental aspects of this UV wide-field transient explorer.

  3. Astronomers Make First Images With Space Radio Telescope

    Science.gov (United States)

    1997-07-01

    part of the VLBA instrument, was modified over the past four years to allow it to incorporate data from the satellite. Correlation of the observational data was completed successfully on June 12, after the exact timing of the satellite recording was established. Further computer processing produced an image of PKS 1519-273 -- the first image ever produced using a radio telescope in space. For Jim Ulvestad, the NRAO astronomer who made the first image, the success ended a long quest for this new capability. Ulvestad was involved in an experiment more than a decade ago in which a NASA communications satellite, TDRSS, was used to test the idea of doing radio astronomical imaging by combining data from space and ground radio telescopes. That experiment showed that an orbiting antenna could, in fact, work in conjunction with ground-based radio observatories, and paved the way for HALCA and a planned Russian radio astronomy satellite called RadioAstron. "This first image is an important technical milestone, and demonstrates the feasibility of a much more advanced mission, ARISE, currently being considered by NASA," Ulvestad said. The first image showed no structure in the object, even at the extremely fine level of detail achievable with HALCA; it is what astronomers call a "point source." This object also appears as a point source in all-ground-based observations. In addition, the 1986 TDRSS experiment observed the object, and, while this experiment did not produce an image, it indicated that PKS 1519-273 should be a point source. "This simple point image may not appear very impressive, but its beauty to us is that it shows our entire, complex system is functioning correctly. The system includes not only the orbiting and ground-based antennas, but also the orbit determination, tracking stations, the correlator, and the image-processing software," said Jonathan Romney, the NRAO astronomer who led the development of the VLBA correlator, and its enhancement to process data

  4. More flexibility in representing geometric distortion in astronomical images

    Science.gov (United States)

    Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir

    2012-09-01

    A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely-used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion, but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.
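
    The sketch below does not reproduce the IPAC PV-to-SIP converter itself; it only shows, assuming astropy is available and with a hypothetical file name, how one might inspect a FITS header for the two distortion conventions and let astropy.wcs evaluate whichever is present.

```python
from astropy.io import fits
from astropy.wcs import WCS

def describe_distortion(fits_path):
    """Report which distortion convention a FITS header carries and apply it."""
    header = fits.getheader(fits_path)

    has_sip = "A_ORDER" in header or "B_ORDER" in header
    has_pv = any(key.startswith("PV1_") or key.startswith("PV2_") for key in header)
    print(f"SIP coefficients present: {has_sip}; PV coefficients present: {has_pv}")

    wcs = WCS(header)
    # all_pix2world applies the core WCS plus any SIP distortion terms.
    ra, dec = wcs.all_pix2world([[1024.0, 1024.0]], 1)[0]
    print(f"Pixel (1024, 1024) -> RA = {ra:.6f} deg, Dec = {dec:.6f} deg")

# Hypothetical file name; substitute any frame calibrated by SCAMP or Astrometry.net.
# describe_distortion("ptf_calibrated_frame.fits")
```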

  5. Unveiling galaxies the role of images in astronomical discovery

    CERN Document Server

    Roy, Jean-René

    2017-01-01

    Galaxies are known as the building blocks of the universe, but arriving at this understanding has been a thousand-year odyssey. This journey is told through the lens of the evolving use of images as investigative tools. Initial chapters explore how early insights developed in line with new methods of scientific imaging, particularly photography. The volume then explores the impact of optical, radio and x-ray imaging techniques. The final part of the story discusses the importance of atlases of galaxies; how astronomers organised images in ways that educated, promoted ideas and pushed for new knowledge. Images that created confusion as well as advanced knowledge are included to demonstrate the challenges faced by astronomers and the long road to understanding galaxies. By examining developments in imaging, this text places the study of galaxies in its broader historical context, contributing to both astronomy and the history of science.

  6. MEMS Deformable Mirrors for Adaptive Optics in Astronomical Imaging

    Science.gov (United States)

    Cornelissen, S.; Bierden, P. A.; Bifano, T.

    We report on the development of micro-electromechanical systems (MEMS) deformable mirrors designed for ground- and space-based astronomical instruments intended for imaging extra-solar planets. Three different deformable mirror designs, a 1024-element continuous membrane (32x32), a 4096-element continuous membrane (64x64), and a 331-element hexagonal segmented tip-tilt-piston device, are being produced for the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) program, the Gemini Planet Imaging Instrument, and the visible nulling coronagraph developed at JPL for NASA's TPF mission, respectively. The design of these polysilicon, surface-micromachined MEMS deformable mirrors builds on technology that was pioneered at Boston University and has been used extensively to correct for ocular aberrations in retinal imaging systems and for compensation of atmospheric turbulence in free-space laser communication. These light-weight, low-power deformable mirrors will have an active aperture of up to 25.2 mm, consisting of a thin silicon membrane mirror supported by an array of 1024 to 4096 electrostatic actuators exhibiting no hysteresis and sub-nanometer repeatability. The continuous membrane deformable mirrors, coated with a highly reflective metal film, will be capable of up to 4 μm of stroke, with a high-quality surface finish maintained over the full range of travel. New design features and fabrication processes are combined with a proven device architecture to achieve the desired performance and high reliability. Presented in this paper are device characteristics and performance results of these devices.

  7. Spectroscopy for amateur astronomers recording, processing, analysis and interpretation

    CERN Document Server

    Trypsteen, Marc F. M.

    2017-01-01

    This accessible guide presents the astrophysical concepts behind astronomical spectroscopy, covering both the theory and the practical elements of recording, processing, analysing and interpreting your spectra. It covers astronomical objects, such as stars, planets, nebulae, novae, supernovae, and events such as eclipses and comet passages. Suitable for anyone with only a little background knowledge and access to amateur-level equipment, the guide's many illustrations, sketches and figures will help you understand and practise this scientifically important and growing field of amateur astronomy, up to the level of Pro-Am collaborations. Accessible to non-academics, it benefits many groups from novices and learners in astronomy clubs, to advanced students and teachers of astrophysics. This volume is the perfect companion to the Spectral Atlas for Amateur Astronomers, which provides detailed commented spectral profiles of more than 100 astronomical objects.

  8. Breakthrough! 100 astronomical images that changed the world

    CERN Document Server

    Gendler, Robert

    2015-01-01

    This unique volume by two renowned astrophotographers unveils the science and history behind 100 of the most significant astronomical images of all time. The authors have carefully selected their list of images from across time and technology to bring to the reader the most relevant photographic images spanning all eras of modern astronomical history.    Based on scientific evidence today we have a basic notion of how Earth and the universe came to be. The road to this knowledge was paved with 175 years of astronomical images acquired by the coupling of two revolutionary technologies – the camera and telescope. With ingenuity and determination humankind would quickly embrace these technologies to tell the story of the cosmos and unravel its mysteries.   This book presents in pictures and words a photographic chronology of our aspiration to understand the universe. From the first fledgling attempts to photograph the Moon, planets, and stars to the marvels of orbiting observatories that record the cosmos a...

  9. Restoration of multitemporal short-exposure astronomical images

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Šimberová, Stanislava

    2005-01-01

    Vol. 3540 (2005), p. 1037-1046. ISSN 0302-9743. [SCIA 2005 /14./, Lappeenranta, 19.06.2005-22.06.2005]. R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/04/0155; GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: astronomical image restoration; spatial probabilistic models. Subject RIV: BD - Theory of Information

  10. Lossless Astronomical Image Compression and the Effects of Random Noise

    Science.gov (United States)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
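
    The tiled Rice compression described above is exposed through astropy as well as through the fpack/funpack utilities; the sketch below is a minimal astropy-based equivalent, with the quantization level and file names chosen arbitrarily for illustration.

```python
import numpy as np
from astropy.io import fits

def rice_compress(input_path, output_path, quantize_level=16):
    """Tile-compress an image into a FITS binary table with the Rice algorithm,
    quantizing floating-point pixels to scaled integers first (lossy only at
    the noise level), in the spirit of what fpack does."""
    data = fits.getdata(input_path).astype(np.float32)
    hdu = fits.CompImageHDU(data,
                            compression_type="RICE_1",
                            quantize_level=quantize_level)
    hdu.writeto(output_path, overwrite=True)

# End-to-end demonstration on a small synthetic image.
rng = np.random.default_rng(0)
fits.PrimaryHDU(rng.normal(1000.0, 10.0, (256, 256)).astype(np.float32)) \
    .writeto("demo.fits", overwrite=True)
rice_compress("demo.fits", "demo.fits.fz")
print(fits.getdata("demo.fits.fz").shape)
```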

  11. High-Dimensional Data Reduction, Image Inpainting and their Astronomical Applications

    Science.gov (United States)

    Pesenson, M.; Pesenson, I.; Carey, S.; McCollum, B.; Roby, W.

    2009-09-01

    Technological advances are revolutionizing multispectral astrophysics as well as the detection and study of transient sources. This new era of multitemporal and multispectral data sets demands new ways of data representation, processing, and management, thus making data dimension reduction instrumental in efficient data organization, retrieval, analysis, and information visualization. Other astrophysical applications of data dimension reduction which require new paradigms of data analysis include knowledge discovery, cluster analysis, feature extraction and object classification, de-correlating data elements, discovering meaningful patterns and finding essential representation of correlated variables that form a manifold (e.g. the manifold of galaxies), tagging astronomical images, multiscale analysis synchronized across all available wavelengths, denoising, etc. The second part of this paper is dedicated to a new, active area of image processing: image inpainting, which consists of automated methods for filling in missing or damaged regions in images. Inpainting has multiple astronomical applications including restoring images corrupted by instrument artifacts, removing undesirable objects like bright stars and their halos, sky estimation, and pre-processing for the Fourier or wavelet transforms. Applications of high-dimensional data reduction and mitigation of instrument artifacts are demonstrated on images taken by the Spitzer Space Telescope.
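
    As a small illustration of the inpainting idea (not the authors' method), the sketch below fills a masked, saturated CCD column by biharmonic inpainting with scikit-image; the frame, mask, and pixel values are synthetic.

```python
import numpy as np
from skimage.restoration import inpaint_biharmonic

def remove_artifact(image, mask):
    """Fill masked pixels (instrument artifacts, bright-star halos, gaps) by
    biharmonic inpainting; `mask` is True where pixels are invalid."""
    return inpaint_biharmonic(image, mask)

# Synthetic frame with a saturated column to repair.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 5.0, (128, 128))
bad = np.zeros_like(frame, dtype=bool)
bad[:, 60:63] = True              # e.g. a bleeding/saturated CCD column
frame[bad] = 1e5
repaired = remove_artifact(frame, bad)
```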

  12. Image processing

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan; Blanken, Henk; Vries de, A.P.; Blok, H.E.; Feng, L; Feng, L.

    2007-01-01

    The field of image processing addresses handling and analysis of images for many purposes using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by

  13. Creating and enhancing digital astro images a guide for practical astronomers

    CERN Document Server

    Privett, Grant

    2007-01-01

    This book clearly examines how to create the best astronomical images possible with a digital camera. It reveals the astonishing images that can be obtained with simple equipment, the right software, and knowledge of how to use it.

  14. Sub-image data processing in Astro-WISE

    NARCIS (Netherlands)

    Mwebaze, Johnson; Boxhoorn, Danny; McFarland, John; Valentijn, Edwin A.

    Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is the processing of the entire image or set of images even when the sole source of

  15. Developments in X-ray and astronomical CCD imagers

    International Nuclear Information System (INIS)

    Gregory, J.A.; Burke, B.E.; Kosicki, B.B.; Reich, R.K.

    1999-01-01

    There have been many recent developments in the attributes and capabilities of silicon-based CCD detectors for use in space and ground-based astronomy. The imagers used as X-ray detectors require very low noise and excellent quantum efficiency over the energy range of 200-10000 eV. This is achieved using a combination of front and back-illuminated imagers fabricated on a 5000 Ω-cm resistivity material. A requirement for ground-based imagers is very good sensitivity between 350 and 1000 nm, as well as low noise and a high degree of spatial uniformity. We will describe the fabrication and performance of these imagers. Special features integrated into the CCD pixel architecture have increased the capability of the imagers. A fast electronic shutter has been developed for a wavefront sensor in an adaptive optics system. An orthogonal transfer CCD has been designed to compensate for the image motion relative to the CCD focal plane. Also, an antiblooming drain process has been developed so bright sources do not extend spatially into adjacent pixels in back- and front-illuminated imagers. Aspects of the design, fabrication, and performance of imagers with these features will be described

  16. Image processing

    OpenAIRE

    Rino, Franco

    2014-01-01

    An image segmentation method has a training phase and a segmentation phase. In the training phase, a frame of pixellated data from a camera is processed using information on camera characteristics to render it camera-independent. The camera-independent data are processed using a chosen value of illuminant spectral characteristics to derive reflectivity data of the items in the image. Pixels of high reflectivity are established. Then, using data from the high-reflectivity pixels, the actual i...

  17. Track extraction of moving targets in astronomical images based on the algorithm of NCST-PCNN

    Science.gov (United States)

    Du, Lin; Sun, Huayan; Zhang, Tinghua; Xu, Taohu

    2015-10-01

    Space targets in astronomical images, such as spacecraft and space debris, are always at low brightness levels and occupy only a small number of pixels, which makes them difficult to distinguish from fixed stars. Because space-target information is difficult to extract, and because dynamic object monitoring plays an important role in the military, aerospace, and other fields, track extraction of moving targets in short-exposure astronomical images holds great significance. First, the stars of interest are captured by a region-growing method in the sequence of short-exposure images, and the barycenter of each star of interest is extracted by a gray-weighted method. Second, an adaptive threshold method is used to remove erroneous matching points and register the sequence of astronomical images. Third, the registered images are fused by the NCST-PCNN image fusion algorithm to retain the energy of the stars in the images. Fourth, the difference between the fused star image and the final star image is obtained by subtracting their brightness values, and possible moving targets are captured by an energy accumulation method. Finally, the track of the moving target in the astronomical images is extracted by judging the accuracy of candidate moving targets through track association and excluding false moving targets. The algorithm proposed in the paper can effectively extract a moving target added artificially to sequences of three or four images, which verifies the effectiveness of the algorithm.
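
    A much simplified stand-in for the differencing and energy-accumulation steps is sketched below: the NCST-PCNN fusion of the registered frames is replaced here by a plain median stack, and the detection threshold is an arbitrary choice, so this only conveys the flow of the method.

```python
import numpy as np

def moving_target_candidates(frames, k_sigma=5.0):
    """Flag pixels that rise well above a static-star reference in any frame
    of an already registered sequence shaped (n_frames, ny, nx)."""
    reference = np.median(frames, axis=0)          # stand-in for the fused star image
    residuals = frames - reference
    noise = np.std(residuals, axis=0) + 1e-9
    detections = residuals > k_sigma * noise
    # Energy accumulation: a moving target traces a line of single-frame hits.
    return detections, detections.sum(axis=0)

# Synthetic registered sequence: static stars plus one drifting target.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 3.0, (8, 64, 64))
frames[:, 20, 20] += 400.0                         # fixed star in every frame
for t in range(8):
    frames[t, 30, 10 + 3 * t] += 120.0             # target moving 3 px per frame
hits, accumulation = moving_target_candidates(frames)
```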

  18. Image Processing

    Science.gov (United States)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.

  19. Image processing

    International Nuclear Information System (INIS)

    Kindler, M.; Radtke, F.; Demel, G.

    1986-01-01

    The book is arranged in seven sections, describing various applications of volumetric analysis using image processing systems and various methods of diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation, and echocardiography. A dynamic ventricular phantom is explained that has been developed for checking and calibration so that patients can be examined safely; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, as well as a standardized nuclear medical diagnostic system and a computer system especially suited to image processing.

  20. Block iterative restoration of astronomical images with the massively parallel processor

    International Nuclear Information System (INIS)

    Heap, S.R.; Lindler, D.J.

    1987-01-01

    A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 linear systems of size 121 x 121. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images.
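
    A one-dimensional toy version of the block-iterative idea is sketched below; it is not the Massively Parallel Processor implementation, the PSF and block size are arbitrary, and the dense blur matrix is built explicitly only to keep the illustration transparent.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def blur_matrix(psf, n):
    """Dense convolution (blur) matrix for a short symmetric PSF."""
    half = psf.size // 2
    col = np.zeros(n); col[:half + 1] = psf[half::-1]
    row = np.zeros(n); row[:half + 1] = psf[half:]
    return toeplitz(col, row)

def block_jacobi_restore(g, psf, block=25, n_iter=50):
    """Block-iterative (block-Jacobi) restoration of a blurred 1-D signal.

    Instead of solving one N x N system directly, each sweep solves many small
    block x block systems, holding the contribution of pixels outside the block
    fixed at the previous iterate.
    """
    n = g.size
    A = blur_matrix(psf, n)
    f = g.copy()
    for _ in range(n_iter):
        f_new = f.copy()
        for start in range(0, n, block):
            idx = slice(start, min(start + block, n))
            coupling = A[idx, :] @ f - A[idx, idx] @ f[idx]   # influence of other blocks
            f_new[idx] = solve(A[idx, idx], g[idx] - coupling)
        f = f_new
    return f

# Toy problem: blur a sparse 1-D "star field" with a well-conditioned PSF, then restore it.
rng = np.random.default_rng(0)
truth = np.zeros(500); truth[rng.integers(0, 500, 15)] = rng.uniform(1.0, 10.0, 15)
psf = np.array([0.1, 0.8, 0.1])
g = blur_matrix(psf, truth.size) @ truth + rng.normal(0.0, 0.01, truth.size)
restored = block_jacobi_restore(g, psf)
```

    In two dimensions, blocks of 11 x 11 pixels give the 121-unknown systems mentioned above.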

  1. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    Science.gov (United States)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean that the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images in their web browser, adjust the intensity min/max cutoff or its scaling function and zoom level, apply color maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  2. Automatic optimized discovery, creation and processing of astronomical catalogs

    NARCIS (Netherlands)

    Buddelmeijer, Hugo; Boxhoorn, Danny; Valentijn, Edwin A.

    We present the design of a novel way of handling astronomical catalogs in Astro-WISE in order to achieve the scalability required for the data produced by large scale surveys. A high level of automation and abstraction is achieved in order to facilitate interoperation with visualization software for

  3. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  4. UKRVO Astronomical WEB Services

    Directory of Open Access Journals (Sweden)

    Mazhaev, O.E.

    2017-01-01

    Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storing and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases help data mining and offer users easy access to observation metadata, images within the celestial sphere, and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for data selection from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.

  5. Facing "the Curse of Dimensionality": Image Fusion and Nonlinear Dimensionality Reduction for Advanced Data Mining and Visualization of Astronomical Images

    Science.gov (United States)

    Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.

    2009-05-01

    The complexity of multitemporal/multispectral astronomical data sets together with the approaching petascale of such datasets and large astronomical surveys require automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increase of the dimensionality of data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality," thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase computational efficiency of machine learning algorithms, improve statistical inference and together with image fusion enable effective scientific visualization (as opposed to mere illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for the data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative
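
    The Laplacian-eigenfunction embedding described above is closely related to what scikit-learn ships as SpectralEmbedding (Laplacian eigenmaps); the sketch below is a generic illustration on a hypothetical source-by-feature table, not the authors' variational-spline construction.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

def embed_catalog(features, n_components=2, n_neighbors=10):
    """Nonlinear dimension reduction: build a neighbourhood graph over the
    sources and use the leading eigenvectors of its graph Laplacian as 2-D
    (or 3-D) coordinates for visualization."""
    embedder = SpectralEmbedding(n_components=n_components,
                                 affinity="nearest_neighbors",
                                 n_neighbors=n_neighbors,
                                 random_state=0)
    return embedder.fit_transform(features)

# Hypothetical feature table: one row per source, columns are fluxes/colours.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 1.0, (200, 8)),    # one population
                      rng.normal(4.0, 1.0, (200, 8))])   # a second population
coords_2d = embed_catalog(features)
```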

  6. Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images

    Science.gov (United States)

    Levesque, M.

    Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions and update object orbital parameters. The real satellite positions are provided by the detection of the satellite streaks in the astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology includes several processing steps including: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits where the algorithm can perform detection without false alarm, which is essential to avoid corruption of the orbital parameter database.
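
    A stripped-down sketch of the background-removal and matched-filter steps is given below; the median-filter background, the fixed-length line templates, and the 15-degree angle grid are stand-ins for the paper's iterative matched filter, and the star-removal and false-alarm-rejection stages are omitted.

```python
import numpy as np
from scipy.ndimage import median_filter, rotate
from scipy.signal import fftconvolve

def streak_response(image, length=31, angles=range(0, 180, 15)):
    """Matched-filter search for a faint streak of unknown orientation."""
    flat = image - median_filter(image, size=33)        # crude background removal
    base = np.zeros((length, length))
    base[length // 2, :] = 1.0                          # horizontal line template

    best = np.full(image.shape, -np.inf)
    for angle in angles:
        template = rotate(base, angle, reshape=False, order=1)
        template /= np.sqrt((template ** 2).sum())      # unit-energy template
        response = fftconvolve(flat, template[::-1, ::-1], mode="same")
        best = np.maximum(best, response)
    return best

# Synthetic frame: Gaussian noise plus a faint diagonal streak.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (256, 256))
for i in range(60):
    frame[100 + i, 80 + i] += 1.5                       # per-pixel SNR ~ 1.5
score = streak_response(frame)
print(np.unravel_index(np.argmax(score), score.shape))
```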

  7. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and personal computers, followed by a classification of personal computer image processing systems. It covers digital image processing, the development of personal computers and image processing, image processing systems, and basic image processing methods such as color image processing and video processing, along with software and interfaces, computer graphics, and video imaging. Application cases of image processing are presented, including satellite image processing, high-speed color transformation, and portrait work systems.

  8. Astronomical imaging with the X-ray observatory Hitomi

    Science.gov (United States)

    Nakajima, Hiroshi; Hitomi Collaboration

    2017-11-01

    We report on the imaging capability of the Japan-led X-ray observatory Hitomi, formerly known as ASTRO-H. It carries four scientific instruments: the Soft X-ray Imager (SXI; a CCD camera), the Hard X-ray Imager (HXI), the Soft X-ray Spectrometer (SXS), and the Soft Gamma-ray Detector, allowing us to perform wide-band, high-sensitivity imaging spectroscopy. We highlight the specification and the performance obtained, primarily with regard to X-ray and soft gamma-ray imaging. The primary imaging instrument, SXI, utilizes four large-area X-ray CCDs positioned at the focal plane of the Soft X-ray Telescope (SXT-I). Its imaging area, 62 mm square, provides the largest field of view (FoV), 38' square, among focal-plane X-ray detectors, which enables us to observe extended objects such as clusters of galaxies and galactic supernova remnants with a single pointing in the soft X-ray band from 0.4 to 12 keV. HXI employs hybrid sensors consisting of four layers of double-sided silicon strip detectors and a single layer of cadmium telluride double-sided strip detector, covering the energy band from 5 to 80 keV. After the successful launch of Hitomi on February 17th, 2016 and the subsequent start-up of all the instruments, the imaging performance of both imagers was verified to be as expected from the ground calibration tests. The position of the active galactic nucleus of the central galaxy NGC 1275 in the Perseus cluster is precisely seen by SXI, while the line-of-sight velocity dispersion of the hot intracluster medium is measured at different positions inside the cluster by SXS. From the observation of the Crab Nebula, we obtain the on-pulse hard X-ray image with HXI as well as the time-averaged image in which the torus of the pulsar wind nebula can be seen.

  9. Space Variant PSF – Deconvolution of Wide-Field Astronomical Images

    Directory of Open Access Journals (Sweden)

    M. Řeřábek

    2008-01-01

    The properties of UWFC (Ultra Wide-Field Camera) astronomical systems, along with specific visual data in astronomical images, contribute to a comprehensive evaluation of the acquired image data. These systems contain many different kinds of optical aberrations which have a negative effect on image quality and imaging system transfer characteristics, and reduce the precision of astronomical measurement. Two main questions have to be worked out: first, how astrometric measurements depend on optical aberrations; and second, how optical aberrations affect the transfer characteristics of the whole optical system. If we define the PSF (Point Spread Function) [2] of an optical system, we can use suitable methods for restoring the original image. Optical aberration models for LSI/LSV (Linear Space Invariant/Variant) [2] systems are presented in this paper. These models are based on Seidel and Zernike approximating polynomials [1]. Optical aberration models serve as a suitable tool for estimating and fitting the wavefront aberration of a real optical system. Real data from the BOOTES (Burst Observer and Optical Transient Exploring System) experiment are used for our simulations. Problems related to UWFC imaging systems, especially a restoration method in the presence of a space-variant PSF, are described in this paper. A model of the space-variant imaging system, and partially of the space-variant optical system, has been implemented in MATLAB. The “brute force” method has been used for restoration of the testing images. The results of different deconvolution algorithms are demonstrated in this paper. This approach could help to improve the precision of astronomical measurements.

  10. Near-infrared spectral imaging Michelson interferometer for astronomical applications

    Science.gov (United States)

    Wells, C. W.; Potter, A. E.; Morgan, T. H.

    1980-01-01

    The design and operation of an imaging Michelson interferometer-spectrometer used for near-infrared (0.8 micron to 2.5 microns) spectral imaging are reported. The system employs a rapid-scan interferometer modified for stable low-resolution (250 cm⁻¹) performance and a 42-element PbS linear detector array. A microcomputer system is described which provides data acquisition, coadding, and Fourier transformation for near real-time presentation of the spectra of all 42 scene elements. The electronic and mechanical designs are discussed and telescope performance data are presented.

  11. Hexabundles: imaging fiber arrays for low-light astronomical applications

    DEFF Research Database (Denmark)

    Bland-Hawthorn, Joss; Bryant, Julia; Robertson, Gordon

    2011-01-01

    We demonstrate for the first time an imaging fibre bundle (“hexabundle”) that is suitable for low-light applications in astronomy. The most successful survey instruments at optical-infrared wavelengths today have obtained data on up to a million celestial sources using hundreds of multimode fibre...

  12. Hexabundles: imaging fibre arrays for low-light astronomical applications

    DEFF Research Database (Denmark)

    Bland-Hawthorn, Joss; Bryant, Julia; Robertson, Gordon

    2010-01-01

    We demonstrate for the first time an imaging fibre bundle (“hexabundle”) that is suitable for low-light applications in astronomy. The most successful survey instruments at optical-infrared wavelengths today have obtained data on up to a million celestial sources using hundreds of multimode fibre...

  13. Imaging the Southern Sky An Amateur Astronomer's Guide

    CERN Document Server

    Chadwick, Stephen

    2012-01-01

    "If you're looking for a handy reference guide to help you image and explore the many splendors of the southern sky, Imaging the Southern Sky is the book for you. The work features not only stunning color images, all taken by Stephen Chadwick, of the best galaxies, nebulae, and clusters available to astrophotographers, but also lesser-known objects, some of which have gone largely unexplored! Beginners and experienced observers alike should appreciate the book's remarkable imagery and simple text, which provides concise and accurate information on each object and its epoch 2000.0 position, and also expert testimony on its visual nature. Each object essay also includes a section on technical information that should help astrophotographers in their planning, including telescope aperture, focal length and ratio, camera used, exposure times, and field size. As a charming bonus, the authors have taken the liberty to name many of the lesser-known objects to reflect their New Zealand heritage. Constellation by con...

  14. Gender Differences in Turkish Primary Students' Images of Astronomical Scientists: A Preliminary Study with 21st Century Style

    Science.gov (United States)

    Korkmaz, Hunkar

    2009-01-01

    This study investigated the images of astronomical scientists held by Turkish primary students by gender. The Draw an Astronomical Scientist Test was administered to 472 students from an urban area. A Chi-Square Test of Independence was used to test for statistically significant differences between gender groups. Significant differences were found…

  15. Viewing and imaging the solar system a guide for amateur astronomers

    CERN Document Server

    Clark, Jane

    2015-01-01

    Viewing and Imaging the Solar System: A Guide for Amateur Astronomers is for those who want to develop their ability to observe and image Solar System objects, including the planets and moons, the Sun, and comets and asteroids. They might be beginners, or they may have already owned and used an astronomical telescope for a year or more. Newcomers are almost always wowed by sights such as the rings of Saturn and the moons of Jupiter, but have little idea how to find these objects for themselves (with the obvious exceptions of the Sun and Moon). They also need guidance about what equipment to use, besides a telescope. This book is written by an expert on the Solar System, who has had a lot of experience with outreach programs, which teach others how to make the most of relatively simple and low-cost equipment. That does not mean that this book is not for serious amateurs. On the contrary, it is designed to show amateur astronomers, in a relatively light-hearted—and math-free way—how to become serious.

  16. ImageX: new and improved image explorer for astronomical images and beyond

    Science.gov (United States)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan
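
    The preprocessing that an ImageX-style service performs, rendering a FITS image to an 8-bit JPEG and cutting it into fixed-size tiles, can be sketched as below; file names, tile size, and the percentile stretch are assumptions for illustration, not the ODI-PPA code.

        # Render a FITS image to a full-resolution JPEG and a simple tileset.
        import os
        import numpy as np
        from astropy.io import fits
        from PIL import Image

        data = fits.getdata("odi_image.fits").astype(float)
        lo, hi = np.nanpercentile(data, [1.0, 99.5])           # simple contrast stretch
        scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
        img8 = (255 * scaled).astype(np.uint8)
        Image.fromarray(img8).save("odi_image_full.jpg", quality=90)

        os.makedirs("tiles", exist_ok=True)
        tile = 256
        for ty in range(0, img8.shape[0], tile):
            for tx in range(0, img8.shape[1], tile):
                patch = img8[ty:ty + tile, tx:tx + tile]
                Image.fromarray(patch).save(f"tiles/{ty}_{tx}.jpg", quality=85)

    A tile server such as the nginx/NodeJS pair described above then only has to serve these small static files to the OpenSeaDragon frontend.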

  17. Novel optical designs for consumer astronomical telescopes and their application to professional imaging

    Science.gov (United States)

    Wise, Peter; Hodgson, Alan

    2006-06-01

    Since the launch of the Hubble Space Telescope there has been widespread popular interest in astronomy. A further series of events, most notably the recent Deep Impact mission and the Mars oppositions, has served to fuel further interest. As a result more and more amateurs are coming into astronomy as a practical hobby. At the same time more sophisticated optical equipment is becoming available as the price-to-performance ratio becomes more favourable. As a result larger and better optical telescopes are now in use by amateurs. We also have the explosive growth in digital imaging technologies. In addition to displacing photographic film as the preferred image capture modality, this has made the capture of high-quality astronomical imagery accessible to a wider segment of the astronomy community. However, this customer requirement has also had an impact on telescope design. There is now a greater imperative for wide, flat image fields in these telescopes to take advantage of the ongoing advances in CCD imaging technology. As a result of these market drivers, designers of consumer astronomical telescopes are now producing state-of-the-art designs that deliver wide, flat fields with well-controlled spatial and chromatic aberrations. Whilst some of these designs are not scalable to the larger apertures required for professional ground and airborne telescope use, there are some that are eminently suited to make this transition.

  18. The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    Science.gov (United States)

    Berriman, G. Bruce; Good, J. C.

    2017-05-01

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image; and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
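
    The cutout and downsampling operations mentioned above can be illustrated with generic astropy calls; this is a sketch of the operation itself, not Montage's own mSubimage/mShrink modules, and the file name and target position are assumptions.

        # Cut out and downsample a region of a large mosaic (generic illustration).
        import numpy as np
        from astropy.io import fits
        from astropy.wcs import WCS
        from astropy.nddata import Cutout2D
        from astropy.coordinates import SkyCoord
        import astropy.units as u

        hdu = fits.open("mosaic.fits")[0]
        wcs = WCS(hdu.header)
        center = SkyCoord(83.82 * u.deg, -5.39 * u.deg)     # example position (assumed)
        cut = Cutout2D(hdu.data, position=center, size=(512, 512), wcs=wcs)
        fits.PrimaryHDU(cut.data, header=cut.wcs.to_header()).writeto("cutout.fits",
                                                                      overwrite=True)

        # Block-average 4x4 downsampling of the cutout for quick-look display
        h, w = (cut.data.shape[0] // 4) * 4, (cut.data.shape[1] // 4) * 4
        small = cut.data[:h, :w].reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))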

  19. Realization of High Dynamic Range Imaging in the GLORIA Network and Its Effect on Astronomical Measurement

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2016-01-01

    Full Text Available The citizen science project GLORIA (GLObal Robotic-telescopes Intelligent Array) is the first free- and open-access network of robotic telescopes in the world. It provides a web-based environment where users can do research in astronomy by observing with robotic telescopes and/or by analyzing data that other users have acquired with GLORIA or from other free-access databases. The network of 17 telescopes allows users to control selected telescopes in real time or to schedule more demanding observations. This paper deals with the new opportunities that the GLORIA project provides to teachers and students at various levels of education. At the moment, educational materials have been prepared related to events like a solar eclipse (measuring local atmospheric changes), Aurora Borealis (calculating the height of the Northern Lights), or a transit of Venus (measuring the Earth-Sun distance). Students should be able to learn the principles of CCD imaging, spectral analysis, basic calibration such as dark-frame subtraction, or advanced methods of noise suppression. Every user of the network can design his or her own experiment. We propose an advanced experiment aimed at obtaining astronomical image data with high dynamic range. We also introduce methods of objective image quality evaluation in order to discover how HDR methods affect astronomical measurements.
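
    One simple way to build a high-dynamic-range frame from a bracketed set of exposures is sketched below; this is an illustration of the general idea, not the GLORIA pipeline, and the saturation and floor thresholds are assumptions.

        # Combine calibrated frames of different exposure times into one HDR frame:
        # scale each frame to a count rate and average only the usable pixels.
        import numpy as np

        def combine_hdr(frames, exp_times, saturation=60000.0, floor=5.0):
            """frames: list of 2-D arrays in ADU; exp_times: seconds (assumed inputs)."""
            num = np.zeros_like(frames[0], dtype=float)
            den = np.zeros_like(frames[0], dtype=float)
            for frame, t in zip(frames, exp_times):
                good = (frame < saturation) & (frame > floor)   # neither saturated nor empty
                num += np.where(good, frame / t, 0.0)           # rate in ADU/s
                den += good
            return num / np.maximum(den, 1)                     # mean rate per pixel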

  20. Amplitude image processing by diffractive optics.

    Science.gov (United States)

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since they allow one to operate on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
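
    The distinction between the two processing orders can be illustrated numerically; the sketch below (random placeholder field, not the authors' optical simulation) applies the same Laplacian before detection (to the complex amplitude) and after detection (to the intensity).

        import numpy as np
        from scipy.ndimage import laplace

        rng = np.random.default_rng(0)
        amplitude = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))

        # Amplitude processing: filter the complex field, then detect (|.|^2)
        filtered_field = laplace(amplitude.real) + 1j * laplace(amplitude.imag)
        detected_after_amplitude_filtering = np.abs(filtered_field) ** 2

        # Standard digital processing: detect first, then filter the intensity
        filtered_intensity = laplace(np.abs(amplitude) ** 2)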

  1. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves, published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram [Collins et al. 2017]. Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry
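
    The calibration arithmetic listed above (master bias, dark, and flat frames) can be sketched in a few lines of numpy; this is a generic illustration of the arithmetic, not AstroImageJ's Java implementation, and the file names and frame counts are assumptions (exposure-time scaling of the dark is omitted for brevity).

        import numpy as np
        from astropy.io import fits

        def master(frames):
            return np.median(np.stack(frames), axis=0)

        bias = master([fits.getdata(f"bias_{i}.fits") for i in range(10)])
        dark = master([fits.getdata(f"dark_{i}.fits") - bias for i in range(10)])
        flat = master([fits.getdata(f"flat_{i}.fits") - bias for i in range(10)])
        flat /= np.median(flat)                      # normalize the flat field

        science = fits.getdata("target.fits").astype(float)
        calibrated = (science - bias - dark) / flat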

  2. Image processing for the ESA Faint Object Camera

    Science.gov (United States)

    Norris, P.

    1980-10-01

    The paper describes image processing for the ESA Faint Object Camera (FOC), which complements the NASA Space Telescope scheduled for the 1983 Shuttle launch. The data processing for removing instrument-signature effects from the FOC images is discussed, along with subtle errors in the data. Data processing will be accomplished by a minicomputer driving a high-quality color display with large backing disk storage; interactive techniques for selective enhancement of image features will be combined with standard scientific transformation, filtering, and analysis methods. Astronomical techniques, including star finding, will be used, and spectral-type searches will be obtained from astronomical data analysis institutes.

  3. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be able, in an automated system, to 'read' X-ray pictures. (orig.) [de

  4. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    Science.gov (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GPs modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
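
    For contrast with the linear-scaling method presented in the paper, the sketch below shows the naive O(N^3) Gaussian-process log-likelihood with a simple exponential kernel (the paper's kernels generalize this to mixtures of complex exponentials); the kernel choice and hyperparameter values are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def gp_loglike(t, y, yerr, amp=1.0, tau=5.0):
            """Naive GP log-likelihood; cost grows as the cube of len(t)."""
            r = np.abs(t[:, None] - t[None, :])
            K = amp**2 * np.exp(-r / tau) + np.diag(yerr**2)   # exponential kernel + noise
            cho = cho_factor(K, lower=True)
            alpha = cho_solve(cho, y)
            logdet = 2.0 * np.sum(np.log(np.diag(cho[0])))
            return -0.5 * (y @ alpha + logdet + len(y) * np.log(2.0 * np.pi))

    The authors' open-source implementations replace the Cholesky step above with a solver whose cost scales linearly with the number of data points.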

  5. Developing Generic Image Search Strategies for Large Astronomical Data Sets and Archives using Convolutional Neural Networks and Transfer Learning

    Science.gov (United States)

    Peek, Joshua E. G.; Hargis, Jonathan R.; Jones, Craig R.

    2018-01-01

    Astronomical instruments produce petabytes of images every year, vastly more than can be inspected by a member of the astronomical community in search of a specific population of structures. Fortunately, the sky is mostly black and source extraction algorithms have been developed to provide searchable catalogs of unconfused sources like stars and galaxies. These tools often fail for studies of more diffuse structures like the interstellar medium and unresolved stellar structures in nearby galaxies, leaving astronomers interested in observations of photodissociation regions, stellar clusters, diffuse interstellar clouds without the crucial ability to search. In this work we present a new path forward for finding structures in large data sets similar to an input structure using convolutional neural networks, transfer learning, and machine learning clustering techniques. We show applications to archival data in the Mikulski Archive for Space Telescopes (MAST).
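
    The general recipe described above (a pretrained CNN as a feature extractor followed by clustering) can be sketched as below; the specific network, preprocessing, and cluster count are assumptions and not the authors' exact pipeline, and the torchvision call shown uses the older pretrained-weights API.

        import torch
        import torch.nn as nn
        import torchvision.models as models
        from sklearn.cluster import KMeans

        backbone = models.resnet18(pretrained=True)                 # older torchvision API
        backbone = nn.Sequential(*list(backbone.children())[:-1])   # drop the classifier head
        backbone.eval()

        # cutouts: float tensor (n_images, 3, 224, 224), already normalized; placeholder here
        cutouts = torch.rand(100, 3, 224, 224)
        with torch.no_grad():
            feats = backbone(cutouts).flatten(1).numpy()            # (n_images, 512) features

        labels = KMeans(n_clusters=8, n_init=10).fit_predict(feats)
        # Images sharing a cluster with the user's example cutout are returned as matches.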

  6. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  7. Application of digital image processing techniques to astronomical imagery 1980

    Science.gov (United States)

    Lorre, J. J.

    1981-01-01

    Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.

  8. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  9. Next Generation Astronomical Data Processing using Big Data Technologies from the Apache Software Foundation

    Science.gov (United States)

    Mattmann, Chris

    2014-04-01

    In this era of exascale instruments for astronomy we must naturally develop next generation capabilities for the unprecedented data volume and velocity that will arrive due to the veracity of these ground-based sensors and observatories. Integrating scientific algorithms stewarded by scientific groups unobtrusively and rapidly; intelligently selecting data movement technologies; making use of cloud computing for storage and processing; and automatically extracting text, metadata and science from any type of file are all needed capabilities in this exciting time. Our group at NASA JPL has promoted the use of open source data management technologies available from the Apache Software Foundation (ASF) in pursuit of constructing next generation data management and processing systems for astronomical instruments including the Expanded Very Large Array (EVLA) in Socorro, NM and the Atacama Large Millimetre/Submillimetre Array (ALMA); as well as for the KAT-7 project led by SKA South Africa as a precursor to the full MeerKAT telescope. In addition we are currently funded by the National Science Foundation in the US to work with MIT Haystack Observatory and the University of Cambridge in the UK to construct a Radio Array of Portable Interferometric Devices (RAPID) that will undoubtedly draw from the rich technology advances underway. NASA JPL is investing in a strategic initiative for Big Data that is pulling in these capabilities and technologies for astronomical instruments and also for Earth science remote sensing. In this talk I will describe the above collaborative efforts underway and point to solutions in open source from the Apache Software Foundation that can be deployed and used today and that are already bringing our teams and projects benefits. I will describe how others can take advantage of our experience and point towards future application and contribution of these tools.

  10. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in the technology for the processing of medical images, to the development of better and cheaper computers, and to advances in the systems for the communication of medical images, the acquisition, storage and handling of digital images has acquired great importance in all branches of medicine. This article seeks to introduce some fundamental ideas of digital image processing, covering aspects such as representation, storage, improvement, visualization and understanding.

  11. Image processing mini manual

    Science.gov (United States)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  12. Image Processing Software

    Science.gov (United States)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  13. Perspectives of intellectual processing of large volumes of astronomical data using neural networks

    Science.gov (United States)

    Gorbunov, A. A.; Isaev, E. A.; Samodurov, V. A.

    2018-01-01

    In the process of astronomical observations vast amounts of data are collected. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). These data are important for both short- and long-term monitoring of various classes of radio sources (including radio transients of different nature), for monitoring the Earth's ionosphere, the interplanetary and the interstellar plasma, and for the search and monitoring of different classes of radio sources. In the framework of these studies, 83096 individual pulse events were discovered (in the study interval July 2012 - October 2013), which may correspond to pulsars, scintillating sources, and fast radio transients. The detected impulse events are intended to be used to filter subsequent observations. The study suggests an approach based on a multilayer artificial neural network, which processes the raw input data; after processing by the hidden layer, the output layer produces a class for each impulsive phenomenon.
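
    A multilayer network of the kind proposed can be sketched with scikit-learn; the feature dimensions, class labels, and layer sizes below are assumptions for illustration, since the record does not specify an implementation.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        # X: one row of features per detected pulse event; y: class labels
        X = np.random.randn(5000, 64)                   # placeholder feature vectors
        y = np.random.randint(0, 3, size=5000)          # e.g. pulsar / scintillation / transient

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))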

  14. Colour image processing

    OpenAIRE

    Batlle i Grabulosa, Joan; Pacheco Valls, Lluís

    2008-01-01

    In the context of the round table the following topics related to colour image processing will be discussed: a historical point of view (studies of Aguilonius, Gerritsen, Newton and Maxwell); the CIE standard (Commission Internationale de l'Eclairage); colour models (RGB, HSI, etc.); colour segmentation based on the HSI model; industrial applications; summary and discussion. At the end, video images showing the robustness of colour compared with B/W images will be presented

  15. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained-and heavily illustrated-this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  16. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  17. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to directly digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academics. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  18. The image processing handbook

    CERN Document Server

    Russ, John C

    2006-01-01

    Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? ·       A new chapter on the human visual process that explains which visual cues elicit a response from the vie

  19. Image processing occupancy sensor

    Science.gov (United States)

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  20. Reconstructing color images of astronomical objects using black and white spectroscopic emulsions

    Science.gov (United States)

    Dufour, R. I.; Martins, D. H.

    1976-01-01

    A color photograph of the peculiar elliptical galaxy NGC 5128 (Centaurus A) has been reconstructed from three Kodak 103a emulsion type photographs by projecting positives of the three B&W plates through appropriate filters onto a conventional color film. The resulting photograph shows color balance and latitude characteristics superior to color photographs of similar astronomical objects made with commercially available conventional color film. Similar results have been obtained for color reconstructed photographs of the Large and Small Magellanic Clouds. These and other results suggest that these projection-reconstruction techniques can be used to obtain high-quality color photographs of astronomical objects which overcome many of the problems associated with the use of conventional color film for the long exposures required in astronomy.
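
    A digital analogue of this projection-reconstruction technique is sketched below: three black-and-white exposures taken through different filters are stretched and stacked into the R, G and B channels. The file names and the percentile stretch are assumptions; the original work used photographic projection, not software.

        import numpy as np
        from astropy.io import fits
        from PIL import Image

        def stretch(img, lo_pct=1.0, hi_pct=99.5):
            lo, hi = np.nanpercentile(img, [lo_pct, hi_pct])
            return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

        r = stretch(fits.getdata("plate_red.fits"))
        g = stretch(fits.getdata("plate_green.fits"))
        b = stretch(fits.getdata("plate_blue.fits"))

        rgb = (255 * np.dstack([r, g, b])).astype(np.uint8)
        Image.fromarray(rgb).save("ngc5128_color.jpg")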

  1. Blind Astronomers

    Science.gov (United States)

    Hockey, Thomas A.

    2011-01-01

    The phrase "blind astronomer” is used as an allegorical oxymoron. However, there were and are blind astronomers. What of famous blind astronomers? First, it must be stated that these astronomers were not martyrs to their craft. It is a myth that astronomers blind themselves by observing the Sun. As early as France's William of Saint-Cloud (circa 1290) astronomers knew that staring at the Sun was ill-advised and avoided it. Galileo Galilei did not invent the astronomical telescope and then proceed to blind himself with one. Galileo observed the Sun near sunrise and sunset or through projection. More than two decades later he became blind, as many septuagenarians do, unrelated to their profession. Even Isaac Newton temporarily blinded himself, staring at the reflection of the Sun when he was a twentysomething. But permanent Sun-induced blindness? No, it did not happen. For instance, it was a stroke that left Scotland's James Gregory (1638-1675) blind. (You will remember the Gregorian telescope.) However, he died days later. Thus, blindness little interfered with his occupation. English Abbot Richard of Wallingford (circa 1291 - circa 1335) wrote astronomical works and designed astronomical instruments. He was also blind in one eye. Yet as he further suffered from leprosy, his blindness seems the lesser of Richard's maladies. Perhaps the most famous professionally active, blind astronomer (or almost blind astronomer) is Dominique-Francois Arago (1786-1853), director until his death of the powerful nineteenth-century Paris Observatory. I will share other _ some poignant _ examples such as: William Campbell, whose blindness drove him to suicide; Leonhard Euler, astronomy's Beethoven, who did nearly half of his life's work while almost totally blind; and Edwin Frost, who "observed” a total solar eclipse while completely sightless.

  2. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical image processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques including volume rendering and virtual endoscopy are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state of the art of medical image processing methods, practical implications for the radiologist's daily work and future aspects. (orig.) [de

  3. Image processing and reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Chartrand, Rick [Los Alamos National Laboratory

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  4. Image-Processing Program

    Science.gov (United States)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  5. Astronomical imaging with a low temperature InSb charge injection device (CID)

    International Nuclear Information System (INIS)

    Rouan, D.; Lacombe, F.; Tiphene, D.; Stefanovitch, D.; Phan van, D.

    1986-01-01

    InSb charge injection device (CID) technology focal plane arrays employ two coupled MIS capacitors which collect and store photon-generated charge carriers. Attention is presently given to two-dimensional arrays for 77 K and 4 K operating temperatures in astronomical applications; two such prototypes for ground observations have been developed for use with a 2-m telescope. A CID InSb array is noted to be a useful candidate for the proposed IR Space Observatory's focal plane camera. 7 references

  6. Image Processing Research

    Science.gov (United States)

    1975-09-30

    "Reconstruction from DPCM Samples," USC Image Processing Institute Technical Report, USCIPI Report 560, March 1975. 3.3 Interframe Image Coding, Guner... A hybrid (two-dimensional transform)/DPCM system is used to reduce memory storage and computational requirements; a block diagram of the system is shown as figure 1. [Figure residue: Fourier/Fourier/DPCM coder results at 1.0, 0.5, 0.25 and 0.1 bits/pixel/frame.]

  7. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  8. The Groningen image processing system

    International Nuclear Information System (INIS)

    Allen, R.J.; Ekers, R.D.; Terlouw, J.P.

    1985-01-01

    This paper describes an interactive, integrated software and hardware computer system for the reduction and analysis of astronomical images. A short historical introduction is presented before some examples of the astronomical data currently handled by the system are shown. A description is given of the present hardware and software structure. The system is illustrated by describing its appearance to the user, to the applications programmer, and to the system manager. Some quantitative information on the size and cost of the system is given, and its good and bad features are discussed

  9. Astro-imaging projects for amateur astronomers a maker’s guide

    CERN Document Server

    Chung, Jim

    2015-01-01

    This is the must-have guide for all amateur astronomers who double as makers, doers, tinkerers, problem-solvers, and inventors. In a world where an amateur astronomy habit can easily run into the many thousands of dollars, it is still possible for practitioners to get high-quality results and equipment on a budget by utilizing DIY techniques. Surprisingly, it's not that hard to modify existing equipment to get new and improved usability from older or outdated technology, creating an end result that can outshine the pricey higher-end tools. All it takes is some elbow grease, a creative and open mind and the help of Chung's hard-won knowledge on building and modifying telescopes and cameras. With this book, it is possible for readers to improve their craft, making their equipment more user friendly. The tools are at hand, and the advice on how to do it is here. Readers will discover a comprehensive presentation of astronomical projects that any amateur on any budget can replicate – projects that utilize lead...

  10. Astronomical Cybersketching

    CERN Document Server

    Grego, Peter

    2009-01-01

    Outlines the techniques involved in making observational sketches and more detailed 'scientific' drawings of a wide variety of astronomical subjects using modern digital equipment, primarily PDAs and tablet PCs. The book also discusses choosing hardware and software

  11. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    Science.gov (United States)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
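
    The core of the subspace-projection idea, applied here to a single subarray, can be sketched as below; this is a simplified illustration, not the SP-SAP implementation, and the covariance-estimation details and bias correction are omitted.

        import numpy as np

        def project_out_rfi(X, n_rfi=1):
            """X: (n_antennas, n_samples) complex voltages for one narrow channel."""
            R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
            w, U = np.linalg.eigh(R)                         # eigenvalues in ascending order
            Urfi = U[:, -n_rfi:]                             # dominant (interference) subspace
            P = np.eye(R.shape[0]) - Urfi @ Urfi.conj().T    # orthogonal projection
            return P @ X, P                                  # cleaned data and projector

        # In an imaging correlator the same projector is also applied to the
        # covariance, R_clean = P @ R @ P.conj().T, followed by a bias correction.

    Decorrelation across long baselines degrades the estimate of Urfi when the full array is used at once, which is why the paper forms the projector on subarrays of mutually well-correlated elements.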

  12. Small scale image processing

    International Nuclear Information System (INIS)

    Saleem, M.M.; Tahzeeb-ul-Hasan, H.

    1989-01-01

    The image processing system described here contains a 48K ZX Spectrum, an input/output port using the 8255A (programmable peripheral interface) chip, and a video interface. The interface board is compatible with any source of composite video available from most video recorders and TV cameras. Using a low-cost A/D converter, the ZN427E, 24 lines of 32 points each are scanned. These are used to give 8 shades of grey. (A.B.)

  13. Robust Microarray Image Processing

    OpenAIRE

    Novikov, Eugene; Barillot, Emmanuel

    2007-01-01

    In this work we have presented a complete solution for robust, high-throughput, two-color microarray image processing comprising procedures for automatic spot localization, spot quantification and spot quality control. The spot localization algorithm is fully automatic and robust with respect to deviations from perfect spot alignment and contamination. As an input, it requires only the common array design parameters: number of blocks and number of spots in the x and y directions of the array....

  14. Quantum image processing?

    OpenAIRE

    Mastriani, Mario

    2015-01-01

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as Quantum Image Processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which, as demonstrated in this work, is equivalent to a noise measurement and is not considered in the literature on the subject. It is noteworthy that this is due to several factors: 1) a classical algorithm that uses Dirac's notation and then it is coded...

  15. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  16. Design of a multifunction astronomical CCD camera

    Science.gov (United States)

    Yao, Dalei; Wen, Desheng; Xue, Jianru; Chen, Zhi; Wen, Yan; Jiang, Baotan; Xi, Jiangbo

    2015-07-01

    To satisfy the requirements of astronomical observation, a novel timing sequence for a frame-transfer CCD is proposed. Multiple functions such as adjustment of the work pattern, exposure time and frame frequency are achieved. There are four work patterns: normal, standby, zero exposure and test. The exposure-time adjustment can set multiple exposure times according to the astronomical observation. The frame frequency can be adjusted when a dark target is imaged and the maximum exposure time cannot satisfy the requirement. For the video processing design, offset correction and adjustment of multiple gains are proposed. Offset correction is used to eliminate the fixed-pattern noise of the CCD. A three-gain pattern can improve the signal-to-noise ratio of astronomical observations. Finally, images in different situations are collected and the system readout noise is calculated. The calculation results show that the designs in this paper are practicable.
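
    The readout-noise measurement mentioned at the end can be sketched with the standard two-bias-frame method; the gain value and file names below are assumptions, not values from the paper.

        import numpy as np
        from astropy.io import fits

        bias1 = fits.getdata("bias_1.fits").astype(float)
        bias2 = fits.getdata("bias_2.fits").astype(float)
        gain = 1.5                                      # e-/ADU, from calibration (assumed)

        diff = bias1 - bias2
        read_noise_adu = np.std(diff) / np.sqrt(2.0)    # single-frame RMS in ADU
        read_noise_e = gain * read_noise_adu
        print(f"readout noise ~ {read_noise_e:.2f} e- RMS")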

  17. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization (Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization); Psychophysical Vision Properties (Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model); Photometry and Colorimetry (Photometry; Color Matching; Colorimetry Concepts; Color Spaces). DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction (Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems); Image Quantization (Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization). DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization (Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  18. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  19. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
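
    A minimal usage example of the library described in the two records above is sketched here; the image file, smoothing scale, and threshold choice are assumptions.

        from skimage import io, filters, measure

        image = io.imread("field.png", as_gray=True)
        smoothed = filters.gaussian(image, sigma=2.0)         # suppress pixel noise
        mask = smoothed > filters.threshold_otsu(smoothed)    # global Otsu threshold
        labels = measure.label(mask)                          # connected components
        print(labels.max(), "sources detected")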

  20. One-Shot Color Astronomical Imaging In Less Time, For Less Money!

    CERN Document Server

    Kennedy, L A

    2012-01-01

    Anyone who has seen recent pictures of the many wondrous objects in space has surely been amazed by the stunning color images. Trying to capture images like these through your own telescope has always seemed too time-consuming, expensive, and complicated. However, with improvements in affordable, easy-to-use CCD imaging technology, you can now capture amazing images yourself. With today's improved "one-shot" color imagers, high-quality images can be taken in a fraction of the time and at a fraction of the cost, right from your own backyard. This book will show you how to harness the power of today's computerized telescopes and entry-level imagers to capture spectacular images that you can share with family and friends. It covers such topics as - evaluating your existing equipment, choosing the right imager, finding targets to image, telescope alignment, focusing and framing the image, exposure times, aligning and stacking multiple frames, image calibration, and enhancement techniques! - how to expand the numb...

  1. Computer image processing and recognition

    Science.gov (United States)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  2. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of the predefined classes and also covers a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it will be emphasized that a bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
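
    One of the listed tasks, image registration, is shown below as a short example using phase correlation from recent scikit-image versions; the arrays are placeholders and the simulated shift is an assumption.

        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from skimage.registration import phase_cross_correlation

        reference = np.random.rand(256, 256)
        moving = nd_shift(reference, shift=(3.0, -2.0))    # simulate a misaligned frame

        offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
        registered = nd_shift(moving, shift=offset)        # apply the estimated offset
        print("estimated shift (rows, cols):", offset)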

  3. Information processing in medical imaging

    International Nuclear Information System (INIS)

    Bacharach, S.L.

    1986-01-01

    The Information Processing in Medical Imaging Conference is a biennial conference, held alternately in Europe and in the United States of America. The subject of the conference is the use of computers and mathematics in medical imaging, the evaluation of new imaging techniques, image processing, image analysis, diagnostic decision-making and related fields. This volume contains analyses of in vivo digital images from nuclear medicine, ultrasound, magnetic resonance, computerized tomography, digital radiography, etc. The conference brings together expert researchers in the field and the proceedings represent the leading biennial update of developments in the field of medical image processing. (Auth.)

  4. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach.This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  5. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller Universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low.By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. Statistical significance-tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
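
    The "embarrassingly parallel" pattern described above can be sketched with the standard library: a worker function processes one image and a process pool maps it over the file list. In the classroom cluster the pool would be replaced by workers on several machines; the photometry routine and file names below are placeholders.

        from multiprocessing import Pool

        import numpy as np
        from astropy.io import fits

        def photometry_one(path):
            data = fits.getdata(path).astype(float)
            sky = np.median(data)
            return path, float(data.sum() - sky * data.size)   # crude total flux above sky

        if __name__ == "__main__":
            files = [f"frame_{i:04d}.fits" for i in range(100)]  # assumed file names
            with Pool(processes=8) as pool:
                results = pool.map(photometry_one, files)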

  6. Super-Drizzle: Applications of Adaptive Kernel Regression in Astronomical Imaging

    National Research Council Canada - National Science Library

    Takeda, Hiroyuki; Farsiu, Sina; Christou, Julian; Milanfar, Peyman

    2006-01-01

    .... For example, a very popular implementation of this method, as studied by Frutcher and Hook, has been used to fuse, denoise, and increase the spatial resolution of the images captured by the Hubble Space Telescope (HST...

  7. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  8. Astronomical optics

    CERN Document Server

    Schroeder, Daniel J

    1988-01-01

    Written by a recognized expert in the field, this clearly presented, well-illustrated book provides both advanced level students and professionals with an authoritative, thorough presentation of the characteristics, including advantages and limitations, of telescopes and spectrographic instruments used by astronomers of today.Key Features* Written by a recognized expert in the field* Provides both advanced level students and professionals with an authoritative, thorough presentation of the characteristics, including advantages and limitations, of telescopes and spectrographic i

  9. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

    The function of medical image processing is analysed, mentioning the developments, the physical agents, and the main categories, such as correction of distortion in image formation, increase of detectability, quantification of parameters, etc. (C.G.C.) [pt

  10. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    Increasing use of computerized methods for diagnostic imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage, documentation and communication as the main points of interest for the application of digital image processing. As to purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  11. Cooperative processes in image segmentation

    Science.gov (United States)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  12. Laser Guidestar Satellite for Ground-based Adaptive Optics Imaging of Geosynchronous Satellites and Astronomical Targets

    Science.gov (United States)

    Marlow, W. A.; Cahoy, K.; Males, J.; Carlton, A.; Yoon, H.

    2015-12-01

    Real-time observation and monitoring of geostationary (GEO) satellites with ground-based imaging systems would be an attractive alternative to fielding high cost, long lead, space-based imagers, but ground-based observations are inherently limited by atmospheric turbulence. Adaptive optics (AO) systems are used to help ground telescopes achieve diffraction-limited seeing. AO systems have historically relied on the use of bright natural guide stars or laser guide stars projected on a layer of the upper atmosphere by ground laser systems. There are several challenges with this approach such as the sidereal motion of GEO objects relative to natural guide stars and limitations of ground-based laser guide stars; they cannot be used to correct tip-tilt, they are not point sources, and have finite angular sizes when detected at the receiver. There is a difference between the wavefront error measured using the guide star compared with the target due to cone effect, which also makes it difficult to use a distributed aperture system with a larger baseline to improve resolution. Inspired by previous concepts proposed by A.H. Greenaway, we present using a space-based laser guide star projected from a satellite orbiting the Earth. We show that a nanosatellite-based guide star system meets the needs for imaging GEO objects using a low power laser even from 36,000 km altitude. Satellite guide star (SGS) systems would be well above atmospheric turbulence and could provide a small angular size reference source. CubeSats offer inexpensive, frequent access to space at a fraction of the cost of traditional systems, and are now being deployed to geostationary orbits and on interplanetary trajectories. The fundamental CubeSat bus unit of 10 cm cubed can be combined in multiple units and offers a common form factor allowing for easy integration as secondary payloads on traditional launches and rapid testing of new technologies on-orbit. We describe a 6U CubeSat SGS measuring 10 cm x 20 cm x

  13. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach to efficiency tests of deconvolution algorithms in astronomical image processing. Because of the noise present in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, so iterative methods and other techniques such as aperture or PSF-fitting photometry are commonly used. Iterative methods are important notably in the case of crowded fields (e.g., globular clusters). To test the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose, a simulator of artificial images of crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of point-spread functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
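
    The core of such a test harness is easy to sketch: place point sources with known fluxes, convolve with a PSF, and add noise, so that the "ground truth" is available for later comparison. The following Python fragment is only an illustrative sketch of that idea, not GlencoeSim itself; all sizes, fluxes and noise levels are invented.

        # Minimal synthetic crowded-field generator (illustrative, not GlencoeSim).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        size, n_stars = 256, 300
        xy = rng.uniform(0, size, (n_stars, 2))              # true positions (ground truth)
        flux = rng.lognormal(6.0, 1.0, n_stars)              # true fluxes (ground truth)

        image = np.zeros((size, size))
        ix = xy.astype(int)
        np.add.at(image, (ix[:, 1], ix[:, 0]), flux)         # delta-function sources
        image = gaussian_filter(image, sigma=2.0)            # Gaussian PSF
        image += 50.0                                        # sky background
        image = rng.poisson(image).astype(float)             # photon (shot) noise
        image += rng.normal(0.0, 5.0, image.shape)           # read noise

        # 'xy' and 'flux' now serve as the reference against which deconvolution
        # or PSF-fitting photometry results can be statistically compared.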

  14. High Quantum Efficiency Photon-Counting Imaging Detector Development for UV (50-320 nm) Astronomical Observations

    Science.gov (United States)

    Norton, Timothy; Joseph, C.; Woodgate, B. E.; Stock, J.; Hilton, G. M.; Bertness, K.

    2011-01-01

    We are currently developing a high quantum efficiency (> 70% peak), opaque photocathode-based, 2K x 2K pixel, zero-read-noise photon-counting detector system, with the goal of enabling the highest possible sensitivity for space-based observations of ultra-faint astronomical targets in the UV. Current missions in the UV, e.g., HST (COS, STIS) and GALEX, although highly successful, exhibit relatively low quantum efficiency. Photocathodes using cesiated p-doped GaN, developed by GSFC and others, have achieved QEs of up to 70% at 121 nm and 50% at 180 nm (a factor of 3-5 better than the traditional CsI- and CsTe-based systems) and so are the best hope for sensitivity improvements over most of the FUV and NUV spectral range for new medium- and long-term missions. However, these QEs have been obtained on opaque planar and nanowire photocathodes and have not been demonstrated in microchannel-plate-based detectors. The only known way to use these improved photocathodes while maintaining the high QE is to use them in electron-bombarded CCD or CMOS configurations. The detector concept under investigation is based on an opaque (GaN, KBr) photocathode, magnetically focused onto a back-thinned CMOS readout stage. We are currently incorporating a QE-optimized KBr photocathode deposited on a stainless steel substrate, together with an Intevac Inc. ISIE11 EBCMOS sensor, into a demountable, magnetically focused detector system designed and built at Rutgers University, NJ, in order to demonstrate high quantum efficiency photon-counting imaging performance in the FUV region. We report here progress on integration and evaluation of the system for quantum efficiency, imaging performance, photo-electron counting efficiency and dark count.

  15. Astronomical database and VO-tools of Nikolaev Astronomical Observatory

    Science.gov (United States)

    Mazhaev, A. E.; Protsyuk, Yu. I.

    2010-05-01

    , search and visualisation of spectra, spectral energy distribution (SED) building, search for cross-correlations between objects in different catalogues, statistical processing of large data volumes, etc. The second part includes the database of observations accumulated at NAO, with access via a browser. The database has a common interface for searching textual and graphical information concerning photographic and CCD observations. The database contains textual information about 7437 plates as well as 2700 preview images in JPEG format with a resolution of 300 DPI (dots per inch), and textual information about 16660 CCD frames as well as 1100 preview images in JPEG format. Missing preview images will be added to the database as soon as they are ready after plate scanning and CCD frame processing. The user has to define the equatorial coordinates of the search centre, a search radius and a period of observations. He or she may then also specify additional filters, such as any combination of objects (given separately for plates and CCD frames), output parameters for plates, and telescope names for CCD observations. Search results are generated in the form of two tables, for photographic and CCD observations. To obtain access to the source images in FITS format with support of the World Coordinate System (WCS), the user has to fill in and submit the electronic form given after the tables. The third part includes the database of observations with access via a standalone application such as Aladin, which has been developed by the Strasbourg Astronomical Data Centre. To obtain access to the database, the user has to perform a series of simple actions, which are described on a corresponding site page. He or she may then access the database via the server selector of Aladin, which has a menu with a wide range of image and catalogue servers located worldwide, including two menu items for the photographic and CCD observations of the NVO image server. The user has to define the equatorial coordinates of

  16. Image processing for medical use

    International Nuclear Information System (INIS)

    Ohashi, Akinami; Iinuma, Kazuhiro

    1991-01-01

    The increasing role of diagnostic imaging is a well-established trend in medical practice. A variety of imaging techniques, such as magnetic resonance imaging, X-ray computed tomography, and ultrasound, are used, and the data obtained are displayed with increasing frequency by means of image processing. This paper focuses on three-dimensional imaging acquired by the following methods: (1) stereoscopic imaging with either binocular parallax or movement parallax, (2) multiplanar reconstruction in axial, coronal, sagittal, and/or oblique views, (3) surface imaging with wire, surface, or voxel methods, and (4) synthesis imaging reconstructed from axial, coronal, or sagittal views. The application of neural networks, which have recently received attention, is also mentioned. To provide such digital imaging data, an integrated information system for image processing, retrieval, and transmission is required. Picture archiving and communication systems are thus being developed to improve diagnostic ability, workflow efficiency, and economic efficiency. In the future, the development of new diagnostic equipment, integrated and systematized imaging diagnosis, and image processing-assisted diagnostics and therapeutics would be achieved. (N.K.)

  17. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  18. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-power-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but delivers flexible features with potential application in a wide range of fields. It has been proposed for accelerated pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), in real-time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.
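
    The associative memory idea at the heart of this PU (comparing an incoming pattern against a large bank of stored patterns in parallel) can be mimicked in ordinary software for illustration. The sketch below is purely a toy analogue with invented pattern sizes; it has no relation to the actual AM chip or FPGA firmware.

        # Toy software analogue of associative-memory pattern matching:
        # compare one input "hit pattern" against an entire stored bank at once.
        import numpy as np

        rng = np.random.default_rng(1)
        bank = rng.integers(0, 2, size=(100000, 8), dtype=np.uint8)  # stored patterns
        query = bank[1234].copy()
        query[3] ^= 1                                   # corrupt one bit (noise)

        mismatches = np.count_nonzero(bank != query, axis=1)
        matches = np.flatnonzero(mismatches <= 1)       # patterns within one bit of the query
        print(len(matches), "candidate matches")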

  19. PlotXY: A High Quality Plotting System for the Herschel Interactive Processing Environment (HIPE) and the Astronomical Community

    Science.gov (United States)

    Panuzzo, P.; Li, J.; Caux, E.

    2012-09-01

    The Herschel Interactive Processing Environment (HIPE) was developed by the European Space Agency (ESA), in collaboration with NASA and the Herschel Instrument Control Centres, to provide the astronomical community with a complete environment to process and analyze the data gathered by the Herschel Space Observatory. One of the most important components of HIPE is the plotting system (named PlotXY) that we present here. With PlotXY it is possible to easily produce high-quality, publication-ready 2D plots. It provides a long list of features, with fully configurable components and interactive zooming. The entire code of HIPE is written in Java and is released as open source under the GNU Lesser General Public License version 3. A new version of PlotXY is being developed to be independent of the HIPE code base; it is available to the software development community for inclusion in other projects at http://code.google.com/p/jplot2d/.

  20. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users can, for much less money, make optimal use of smaller systems by having them custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics and, consequently, 'image processing' has emerged as a separate, independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudocolour images from computer aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudocolours in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays in search of imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional dark room. (author)

  1. Hausdorff distance and image processing

    International Nuclear Information System (INIS)

    Sendov, Bl

    2004-01-01

    Mathematical methods for image processing make use of function spaces, usually Banach spaces with integral L^p norms. The corresponding mathematical models of the images are functions in these spaces. There is an ongoing discussion about which value of p gives the most natural distance between two functions when they represent images, that is, about the metric in which our eyes measure the distance between images. In this paper we argue that the Hausdorff distance is more natural for measuring the distance (difference) between images than any L^p norm.
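
    For binary images, the (symmetric) Hausdorff distance between the two foreground pixel sets can be computed directly with SciPy; the short example below uses two arbitrary synthetic shapes and is included only to make the notion concrete.

        # Symmetric Hausdorff distance between the foreground sets of two binary images.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        a = np.zeros((64, 64), dtype=bool)
        b = np.zeros((64, 64), dtype=bool)
        a[20:40, 20:40] = True                  # a square
        b[22:42, 25:45] = True                  # the same square, shifted

        pa = np.argwhere(a).astype(float)       # (row, col) coordinates of foreground pixels
        pb = np.argwhere(b).astype(float)
        h = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
        print(h)                                # 0 only when the two shapes coincide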

  2. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities for automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or, more often, on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. As a result, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeferencing in an external reference frame. In the case of non-metric imagery, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used to generate dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of the object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to the final georeferenced products can be fully automated, including sequential execution of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic generation of an orthomosaic, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  3. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  4. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  5. Image processing for optical mapping.

    Science.gov (United States)

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  6. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate

  7. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  8. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay reviews the present state of CT image processing technology, covering its recognition, acquisition and visualization components for computer-assisted diagnosis (CAD) and surgery (CAS), and offers a future outlook. Medical image processing has a long history, from the discovery of X-rays and their application to diagnostic radiography, through their combination with the computer for CT and multidetector-row CT, to 3D/4D images for CAD and CAS. CAD is performed based on the recognition of the normal anatomical structure of the human body, detection of possible abnormal lesions, and visualization of their quantitative measures as images. Actual examples of CAD images are presented for the chest (lung cancer), the abdomen (colorectal cancer) and a future body atlas (models of organs and diseases for imaging), part of a recent national project on computational anatomy. CAS involves surgical planning technology based on 3D images, navigation of the actual procedure, and navigation of endoscopy. As guidance for those beginning in image processing technology, the national and international communities are described, including related academic societies, regularly held congresses, textbooks and workshops, as well as topics in the field such as the computational anatomy of an individual patient for CAD and CAS, data security, and standardization. In the authors' view, future preventive medicine will be based on imaging technology, e.g., ultimately daily-life CAD of individuals, as exemplified today by the body thermometer and home sphygmomanometer, to monitor one's routine physical condition. (T.T.)

  9. Fuzzy image processing in sun sensor

    Science.gov (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  10. Astrobiology: An astronomer's perspective

    Science.gov (United States)

    Bergin, Edwin A.

    2014-12-01

    In this review we explore aspects of the field of astrobiology from an astronomical viewpoint. We therefore focus on the origin of life in the context of planetary formation, with additional emphasis on tracing the most abundant volatile elements, C, H, O, and N, that are used by life on Earth. We first explore the history of life on our planet and outline the current state of our knowledge regarding the delivery of the C, H, O, N elements to the Earth. We then discuss how astronomers track the gaseous and solid molecular carriers of these volatiles throughout the process of star and planet formation. It is now clear that the early stages of star formation foster the creation of water and simple organic molecules with enrichments of heavy isotopes. These molecules are found as ice coatings on the solid materials that represent the microscopic beginnings of terrestrial worlds. Based on the meteoritic and cometary record, the process of planet formation and the local environment lead to additional increases in organic complexity. The astronomical connections to this stage are only now being directly made. Although the exact details are uncertain, it is likely that the birth process of stars and planets leads to terrestrial worlds being born with abundant water and organics on the surface.

  11. Image processing in nondestructive testing

    International Nuclear Information System (INIS)

    Janney, D.H.

    1979-01-01

    The paper examines the applicability of image processing to more reliable detection of defects, making it possible to increase the sampled population at little increase in cost, or to obtain better radiographic resolution while using less experienced personnel. Optical methods have low cost and high speed, but are often inflexible or difficult to implement. Computerized methods can be flexible and use powerful mathematical techniques, but are difficult to implement for very high throughput. Recent developments in microprocessors and in electronic analog image analyzers may resolve the shortcomings of these two classical methods of image analysis. Examples of image processing applications in nondestructive testing include weld inspection, dimensional verification of reactor fuel assemblies, inspection of fuel pellets for laser fusion research, and medical radiography

  12. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  13. Learning the image processing pipeline.

    Science.gov (United States)

    Jiang, Haomiao; Tian, Qiyuan; Farrell, Joyce; Wandell, Brian

    2017-06-08

    Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. The need to design and optimize these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation that automates the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
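
    The central idea (treating a pipeline stage as a local linear map learned from data) can be reduced to a toy example: fit a linear transform from raw sensor patches to target output values by least squares. This is only a sketch of the principle, not the authors' implementation; in the actual method, patches are first grouped by their local statistics and a separate linear filter is learned for each group.

        # Toy version of "pipeline as learned local linear filters":
        # learn one linear map from 5x5 raw patches to output pixel values.
        import numpy as np

        rng = np.random.default_rng(2)
        n, k = 5000, 5
        patches = rng.normal(size=(n, k * k))                    # raw sensor patches
        true_w = rng.normal(size=k * k)                          # unknown "ideal" filter
        target = patches @ true_w + rng.normal(scale=0.01, size=n)

        w, *_ = np.linalg.lstsq(patches, target, rcond=None)     # learn the filter
        print(np.max(np.abs(patches @ w - target)))              # small residual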

  14. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by applying the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general use in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms
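
    The globular-cluster deprojection referred to here is, in essence, an inverse Abel transform: recovering a spherically symmetric density f(r) from a single projected profile P(y) via f(r) = -(1/pi) * integral from r to infinity of P'(y) / sqrt(y^2 - r^2) dy. The rough numerical sketch below assumes spherical symmetry and a uniformly sampled profile; the discretization is illustrative only and is not taken from the thesis.

        # Rough numerical inverse Abel transform (assumes spherical symmetry).
        import numpy as np

        def inverse_abel(profile, dr):
            n = len(profile)
            r = (np.arange(n) + 0.5) * dr             # cell-centred radii
            dP = np.gradient(profile, dr)             # derivative of the projected profile
            f = np.zeros(n)
            for i in range(n - 1):
                y = r[i + 1:]                         # start one cell out to avoid the
                g = dP[i + 1:] / np.sqrt(y**2 - r[i]**2)   # singularity at y = r
                # trapezoidal integration of g over y
                f[i] = -np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y)) / np.pi
            return f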

  15. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, together with software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal to noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
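
    The speed argument (spatial-domain running-sum or recursive filtering versus FFT-based convolution) can be illustrated with standard SciPy routines; the comparison below uses a uniform smoothing kernel as a simple stand-in for the matched filters discussed in the paper, and the image and kernel sizes are arbitrary.

        # Spatial-domain running-sum smoothing vs. FFT-based convolution.
        import time
        import numpy as np
        from scipy.ndimage import uniform_filter
        from scipy.signal import fftconvolve

        img = np.random.default_rng(3).normal(size=(1024, 1024))
        k = 15
        kernel = np.full((k, k), 1.0 / (k * k))

        t0 = time.perf_counter()
        a = uniform_filter(img, size=k, mode="constant")   # running-sum implementation
        t1 = time.perf_counter()
        b = fftconvolve(img, kernel, mode="same")          # FFT implementation
        t2 = time.perf_counter()

        print("spatial %.3fs, fft %.3fs, max diff %.1e"
              % (t1 - t0, t2 - t1, np.max(np.abs(a - b))))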

  16. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis covers five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine the use of different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, to detect and fill cracks, and their applications in different areas such as nanotechnology and biotechnology. In the third subject, we attempt to obtain 3-D images from 1-D or 2-D images using texture mapping with OpenGL, programmed in Visual C++. The fourth subject concerns the use of image warping methods for finding the depth of 2-D images using affine transformation, bilinear transformation, projective mapping, mosaic warping and similarity transformation; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including the methods for creating Bezier curves and surfaces with unknown distribution, using only control points. At the end of our discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid. The approach depends on finding the Bezier curve, creating a family of curves (a surface), and then filling in between to obtain the solid form. Another part of this subject is concerned with building 3-D geometric models from physical objects using image-based techniques; the advantage of image-based techniques is that they require no expensive equipment. We use NURBS, subdivision surfaces and meshes to find the depth of any image with one still view or a 2-D image. The quality of filtering depends on the way the data is incorporated into the model. The data should be treated with
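
    The first of these subjects, Wiener filtering, is available off the shelf in SciPy; a minimal denoising example (with an invented test image and window size) is shown below to make the starting point concrete.

        # Minimal local Wiener denoising example (illustrative parameters only).
        import numpy as np
        from scipy.signal import wiener

        rng = np.random.default_rng(4)
        clean = np.zeros((128, 128))
        clean[32:96, 32:96] = 1.0                            # a bright square
        noisy = clean + rng.normal(scale=0.3, size=clean.shape)

        denoised = wiener(noisy, mysize=5)                   # 5x5 adaptive Wiener filter
        print(np.std(noisy - clean), np.std(denoised - clean))   # residual noise drops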

  17. Astronomical Spectroscopy for Amateurs

    CERN Document Server

    Harrison, Ken M

    2011-01-01

    Astronomical Spectroscopy for Amateurs is a complete guide for amateur astronomers who are looking for a new challenge beyond astrophotography. The book provides a brief overview of the history and development of the spectroscope, then a short introduction to the theory of stellar spectra, including details on the necessary reference spectra required for instrument testing and spectral comparison. The various types of spectroscopes available to the amateur are then described. Later sections cover all aspects of setting up and using various types of commercially available and home-built spectroscopes, starting with basic transmission gratings and going through more complex models, all the way to the sophisticated Littrow design. The final part of the text is about practical spectroscope design and construction. This book uniquely brings together a collection of observing, analyzing, and processing hints and tips that will allow the amateur to build skills in preparing scientifically acceptable spectra data. It...

  18. Choosing and using astronomical filters

    CERN Document Server

    Griffiths, Martin

    2014-01-01

    As a casual read through any of the major amateur astronomical magazines will demonstrate, there are filters available for all aspects of optical astronomy. This book provides a ready resource on the use of the following filters, among others, for observational astronomy or for imaging: Light pollution filters Planetary filters Solar filters Neutral density filters for Moon observation Deep-sky filters, for such objects as galaxies, nebulae and more Deep-sky objects can be imaged in much greater detail than was possible many years ago. Amateur astronomers can take

  19. Fingerprint recognition using image processing

    Science.gov (United States)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. Fingerprint recognition is used in forensic science, where it helps in finding criminals, and also in authentication of a particular person, since the fingerprint is unique to each individual and differs from person to person. The present paper describes fingerprint recognition methods using various edge detection techniques and also how to detect a correct fingerprint using camera images. The described method does not require a special device; a simple camera can be used, and hence the technique can also be applied with a simple camera mobile phone. The various factors affecting the process are poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions. These factors have to be considered, so various image enhancement techniques must be applied to increase image quality and remove noise. The present paper describes the technique of using contour tracking on the fingerprint image, then applying edge detection on the contour, and after that matching the edges inside the contour.
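
    The edge detection step that feeds the contour tracking and matching can be as simple as a Canny detector; the fragment below is a generic illustration on a synthetic image (the smoothing scale is arbitrary), not the paper's specific pipeline.

        # Minimal Canny edge detection step on a synthetic, mildly noisy image.
        import numpy as np
        from skimage import feature

        rng = np.random.default_rng(5)
        img = np.zeros((128, 128))
        img[32:96, 32:96] = 1.0
        img += rng.normal(scale=0.05, size=img.shape)        # mild camera noise

        edges = feature.canny(img, sigma=2.0)                # boolean edge map
        print(int(edges.sum()), "edge pixels")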

  20. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph

    2004-01-01

    The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  1. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  2. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  3. Digital Data Processing of Images

    African Journals Online (AJOL)

    lend themselves to computer storage, this article will only be concerned with the image enhancement of ... digital computer to quantitate organ function after dynamic studies using the gamma camera will also be ..... an on-line computer is necessary for the automatic analysis of data. The facility to view the dynamic process ...

  4. Image processing with ImageJ

    OpenAIRE

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). ImageJ is easy to use and can do many imaging manipulations. A very large and knowledgeable group makes up the user community for ImageJ. Topics covered are imaging abilities; cross platform; image form...

  5. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real time system called Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of a highly sensitive, high resolution TV camera driven by a custom-made scanner, a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object, a real-time corrector to perform the real time correction of dark current, shading distortion and field intensity fluctuation, a real time filter for increasing the image signal to noise ratio, a video recording unit and a pseudocolor monitor to realize recording in commercially available products and monitoring by means of the CRTs in standard TV scanning, respectively. The TV camera and the TV camera deflection controller utilized for producing still images can be applied to this case. The block diagram of the real-time corrector is shown. Its performance is explained. Linear filters and ranked order filters were developed. (K.I.)
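
    The arithmetic behind the dark-current and shading correction performed by the real-time corrector is the familiar dark/flat correction; the generic sketch below shows that arithmetic in software and is not a description of the MDIPS hardware.

        # Generic dark-current and shading (flat-field) correction.
        import numpy as np

        def correct_frame(raw, dark, flat):
            """raw, dark, flat: 2-D arrays of equal shape (synthetic example data)."""
            shading = (flat - dark) / np.mean(flat - dark)   # normalised shading map
            return (raw - dark) / np.clip(shading, 1e-6, None)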

  6. Image processing, analysis, measurement, and quality

    International Nuclear Information System (INIS)

    Hughes, G.W.; Mantey, P.E.; Rogowitz, B.E.

    1988-01-01

    Topics covered in these proceedings include: image acquisition, image processing and analysis, electronic vision, IR imaging, measurement and quality, spatial vision and spatial sampling, and contrast-detail curve measurement and analysis in radiological imaging

  7. Astronomy Legacy Project - Pisgah Astronomical Research Institute

    Science.gov (United States)

    Barker, Thurburn; Castelaz, Michael W.; Rottler, Lee; Cline, J. Donald

    2016-01-01

    Pisgah Astronomical Research Institute (PARI) is a not-for-profit public foundation in North Carolina dedicated to providing hands-on educational and research opportunities for a broad cross-section of users in science, technology, engineering and math (STEM) disciplines. In November 2007 a Workshop on a National Plan for Preserving Astronomical Photographic Data (2009ASPC, 410, 33O; Osborn, W. & Robbins, L.) was held at PARI. The result was the establishment of the Astronomical Photographic Data Archive (APDA) at PARI. In late 2013 PARI began ALP (Astronomy Legacy Project). ALP's purpose is to digitize an extensive set of twentieth century photographic astronomical data housed in APDA. Because of the wide range of plate types, plate dimensions and emulsions found among the 40+ collections, plate digitization will require a versatile set of scanners and digitizing instruments. Internet crowdfunding was used to assist in the purchase of additional digitization equipment, which was described at the AstroPlate2014 Plate Preservation Workshop (www.astroplate.cz) held in Prague, CZ, in March 2014. Equipment purchased included an Epson Expression 11000XL scanner and two Nikon D800E cameras. These digital instruments will complement a STScI GAMMA scanner now located in APDA. GAMMA will be adapted to use an electroluminescence light source and a digital camera with a telecentric lens to achieve high-speed, high-resolution scanning. The 1 μm precision XY stage of GAMMA will allow very precise positioning of the plate stage. Multiple overlapping CCD images of small sections of each plate (tiles) will be combined using a photo-mosaic process similar to one used in Harvard's DASCH project. Implementation of a software pipeline for the creation of an SQL database containing plate images and metadata will be based upon APPLAUSE, as described by Tuvikene at AstroPlate2014 (www.astroplate.cz/programs/).

  8. The New Amateur Astronomer

    Science.gov (United States)

    Mobberley, Martin

    Amateur astronomy has changed beyond recognition in less than two decades. The reason is, of course, technology. Affordable high-quality telescopes, computer-controlled 'go to' mountings, autoguiders, CCD cameras, video, and (as always) computers and the Internet, are just a few of the advances that have revolutionized astronomy for the twenty-first century. Martin Mobberley first looks at the basics before going into an in-depth study of what’s available commercially. He then moves on to the revolutionary possibilities that are open to amateurs, from imaging, through spectroscopy and photometry, to patrolling for near-earth objects - the search for comets and asteroids that may come close to, or even hit, the earth. The New Amateur Astronomer is a road map of the new astronomy, equally suitable for newcomers who want an introduction, or old hands who need to keep abreast of innovations. From the reviews: "This is one of several dozen books in Patrick Moore's "Practical Astronomy" series. Amid this large family, Mobberley finds his niche: the beginning high-tech amateur. The book's first half discusses equipment: computer-driven telescopes, CCD cameras, imaging processing software, etc. This market is changing every bit as rapidly as the computer world, so these details will be current for only a year or two. The rest of the book offers an overview of scientific projects that serious amateurs are carrying out these days. Throughout, basic formulas and technical terms are provided as needed, without formal derivations. An appendix with useful references and Web sites is also included. Readers will need more than this book if they are considering a plunge into high-tech amateur astronomy, but it certainly will whet their appetites. Mobberley's most valuable advice will save the book's owner many times its cover price: buy a quality telescope from a reputable dealer and install it in a simple shelter so it can be used with as little set-up time as possible. A poor

  9. Eclipse: ESO C Library for an Image Processing Software Environment

    Science.gov (United States)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.
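
    One of the services listed, dead (or hot) pixel detection and correction, typically boils down to comparing each pixel with a local median; the Python sketch below illustrates that common idea only and is not the eclipse C API.

        # Generic dead/hot pixel detection and correction via a local median
        # (illustrates the common idea; this is not the eclipse C API).
        import numpy as np
        from scipy.ndimage import median_filter

        def fix_bad_pixels(image, nsigma=5.0):
            med = median_filter(image, size=3)
            resid = image - med
            bad = np.abs(resid) > nsigma * np.std(resid)     # outliers vs. local median
            fixed = image.copy()
            fixed[bad] = med[bad]                            # replace with the local median
            return fixed, bad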

  10. Biographical encyclopedia of astronomers

    CERN Document Server

    Trimble, Virginia; Williams, Thomas; Bracher, Katherine; Jarrell, Richard; Marché, Jordan; Palmeri, JoAnn; Green, Daniel

    2014-01-01

    The Biographical Encyclopedia of Astronomers is a unique and valuable resource for historians and astronomers alike. It includes approximately 1850 biographical sketches of astronomers from antiquity to modern times. It is the collective work of 430 authors edited by an editorial board of 8 historians and astronomers. This reference provides biographical information on astronomers and cosmologists by utilizing contemporary historical scholarship. The fully corrected and updated second edition adds approximately 300 biographical sketches. Based on ongoing research and feedback from the community, the new entries fill gaps and provide expansions. In addition, greater emphasis is given to Russophone astronomers and radio astronomers. Individual entries vary from 100 to 1500 words, including the likes of super-luminaries such as Newton and Einstein, as well as lesser-known astronomers like Galileo's acolyte, Mario Guiducci.

  11. Image post-processing in dental practice.

    Science.gov (United States)

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-10-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be practiced in the dental office with a computer and image processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations to help them understand the role that the technology can play in their practices.

  12. Image Post-Processing in Dental Practice

    Science.gov (United States)

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-01-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be practiced in the dental office with a computer and image processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations to help them understand the role that the technology can play in their practices. PMID:19826609
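
    Of the operations listed, image enhancement is the simplest to illustrate in code; the sketch below applies contrast stretching and histogram equalization to a synthetic low-contrast stand-in for a radiograph using scikit-image (the image and its value range are invented).

        # Contrast stretching and histogram equalization of a synthetic
        # low-contrast image (a stand-in for a radiograph).
        import numpy as np
        from skimage import exposure

        rng = np.random.default_rng(6)
        radiograph = 0.4 + 0.1 * rng.random((256, 256))       # values confined to [0.4, 0.5]

        stretched = exposure.rescale_intensity(radiograph, out_range=(0.0, 1.0))
        equalized = exposure.equalize_hist(radiograph)
        print(np.ptp(radiograph), np.ptp(stretched), np.ptp(equalized))  # dynamic range grows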

  13. Astronomical Institute of Athens

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    The Astronomical Institute of Athens is the oldest research institute of modern Greece (it faces the Parthenon). The Astronomical Institute (AI) of the National Observatory of Athens (NOA) started its observational projects in 1847. The modern computer and research center are housed at the Penteli Astronomical Station with major projects and international collaborations focused on extragalactic ...

  14. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Wu, Yutao; Li, Daoliang; Li, Zhenbo; Yang, Wenzhu

    2014-01-01

    In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of the cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps, image block, image pre-decision, image background extra...

  15. Biomedical signal and image processing.

    Science.gov (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them, and the second one more dedicated to the development of processing tools or algorithms to extract useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer machines or interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring, among others, do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  16. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of the cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images, followed by inversion of the gray scale of the transformed images; then the whole image is divided into several blocks. Thereafter, image pre-decision judges which image blocks contain target foreign fiber images. The image blocks that may contain target images are then segmented via Otsu thresholding after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods. In addition, the method connects target images that exhibit fractures, thereby producing an intact and clear foreign fiber target image.
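
    A compact sketch of the block-then-threshold idea (inversion, blocking, a simple pre-decision, and per-block Otsu segmentation) is given below; the block size and pre-decision rule are invented for illustration and are not the values used in the paper.

        # Block-wise Otsu segmentation sketch for candidate foreign-fiber regions.
        import numpy as np
        from skimage.filters import threshold_otsu

        def segment_blocks(gray, block=64, contrast_min=0.05):
            """gray: 2-D float image scaled to [0, 1]."""
            inverted = 1.0 - gray                        # invert the gray scale
            mask = np.zeros_like(gray, dtype=bool)
            h, w = gray.shape
            for i in range(0, h, block):
                for j in range(0, w, block):
                    tile = inverted[i:i + block, j:j + block]
                    if np.ptp(tile) < contrast_min:      # pre-decision: skip flat background
                        continue
                    t = threshold_otsu(tile)             # per-block Otsu threshold
                    mask[i:i + block, j:j + block] = tile > t
            return mask                                  # connected regions = candidate fibers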

  17. Review of Biomedical Image Processing

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2011-11-01

    Full Text Available Abstract This article is a review of the book 'Biomedical Image Processing' by Thomas M. Deserno, which is published by Springer-Verlag. Salient information that will be useful in deciding whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, is presented in the review. This includes information about the book details, a summary, the suitability of the text for course and research work, the framework of the book, its specific content, and conclusions.

  18. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    A method for automating the judgement work that uses photographs in radiation nondestructive inspection, with a simple commercially available image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were developed on a trial basis, and their accuracy and problem areas were tested on various photographs for which the judgement had already been made. Depending on the state of the objects photographed and the inspection conditions, judgement accuracies from 45% to 100% were obtained. The criteria for judgement were in conformity with the collection of reference photographs produced by the Japan Cast Steel Association. In nondestructive inspection by radiography, the number and size of the defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, the technology of image processing with personal computers has advanced, and therefore, by utilizing this technology, the automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency and realize labor saving. (K.I.)

  19. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of temperature distribution of an object’s surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system where temperature distribution of the body’s surface can contribute to the diagnosis and follow up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI). A number of indices have been developed by researchers to that end. In this study a quantitative approach in thermographic image processing is attempted based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first order statistical parameters describing temperature distribution. Analysis and comparison of these indices result in evaluating the temperature distribution pattern of the back trunk expected in healthy, regarding spinal problems, subjects.
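
    The kind of first-order statistical indices described (for example mean, standard deviation and skewness computed over two symmetric regions of interest, and their difference) can be written down in a few lines; the ROI coordinates and the synthetic temperature map below are placeholders only.

        # First-order statistical indices for two symmetric ROIs of a thermogram.
        import numpy as np
        from scipy import stats

        def roi_indices(thermogram, rows, cols_left, cols_right):
            left = thermogram[rows, cols_left].ravel()
            right = thermogram[rows, cols_right].ravel()
            summarize = lambda x: (np.mean(x), np.std(x), stats.skew(x))
            return summarize(left), summarize(right)

        rng = np.random.default_rng(7)
        t_map = 33.0 + rng.normal(0, 0.4, (240, 320))    # synthetic temperature map (deg C)
        L, R = roi_indices(t_map, slice(80, 160), slice(40, 120), slice(200, 280))
        print(np.subtract(L, R))                         # asymmetry indices (~0 if symmetric)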

  20. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  1. Conference on Applications of Digital Image Processing to Astronomy, Pasadena, Calif., August 20-22, 1980, Proceedings

    Science.gov (United States)

    Elliott, D. A.

    1980-01-01

    The astronomical applications of non-military digital image processing are covered in this conference volume. Systems such as CCDs, interactive data analysis facilities, stellar speckle interferometry, sky flux subsystems, guide star systems and various image processing systems are described. Techniques in photometry, including filtering, automatic photometry, and image restoration, are examined. Digital spectral analyses of galaxies, supernova remnants, stars and other celestial bodies are discussed, together with algorithms developed to calibrate, clean up, enhance, and quantitatively analyze data. The techniques of image processing permit astronomers to make much more efficient use of their data for both subjective and quantitative analyses. Future missions, such as the Space Telescope, which represent a vast data base, are briefly covered.

  2. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  3. Armenian Astronomical Heritage

    Science.gov (United States)

    Mickaelian, A. M.

    2014-10-01

    A review is given on the Armenian Astronomical Heritage from ancient times to nowadays. Armenian ancient astronomy includes the division of the skies into constellations, rock art, ancient Armenian calendar, ancient observatories (such as Metsamor and Karahunge), records of astronomical events (such as Halley's Comet recorded on Tigranes the Great's coin), ancient names of celestial bodies (planets, stars, constellations), etc. The Medieval Armenian astronomy includes two more calendars, Anania Shirakatsi's scientific heritage, the record of 1054 Supernova, sky maps by Luca Vanandetsi and Mkhitar Sebastatsi, etc. Modern Armenian astronomical heritage first of all consists of the famous Byurakan Astrophysical Observatory founded in 1946 by Viktor Ambartsumian, as well as Yerevan Astronomical Observatory, Armenian Astronomical Society, Armenian Virtual Observatory, Yerevan State University Department of Astrophysics, Astrofizika journal, and brilliant young students who systematically win high positions at International Astronomical Olympiads.

  4. Image processing in real time radiography

    International Nuclear Information System (INIS)

    Link, R.; Nuding, W.; Sauevwein, K.; Souw, E.K.

    1985-01-01

    Image processing in real time radiography has become an important feature for improving the detectability of defects. However, often enough, impressed by the tremendous success of image processing in, for example, the evaluation of Landsat pictures, people expect the same or nearly the same effect in NDT applications. The magic word image processing thus results in unrealistic demands on the capability of even highly sophisticated image processing systems. In this paper the possibilities as well as the different tasks of image processing in the field of real time radiography are discussed

  5. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied with Jupyter notebook tutorials illustrating the main functionalities of the library.
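
    To make the ADI post-processing idea above concrete, the sketch below shows a bare-bones full-frame PCA PSF subtraction on an ADI cube using only numpy and scipy. It illustrates the technique that VIP implements but is not VIP's actual API; the function name, the derotation sign convention, and the choice of ncomp are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, angles, ncomp=5):
    """cube: (n_frames, ny, nx) ADI sequence; angles: parallactic angles in degrees."""
    nfr, ny, nx = cube.shape
    flat = cube.reshape(nfr, -1)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # Principal components of the frame matrix via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:ncomp]                         # (ncomp, ny*nx)
    coeffs = centered @ basis.T                # projection of each frame on the basis
    psf_model = coeffs @ basis + mean          # low-rank stellar PSF model
    residuals = (flat - psf_model).reshape(nfr, ny, nx)
    # Derotate residuals to a common sky orientation and median-combine
    # (the rotation sign depends on the instrument's angle convention)
    derot = np.array([rotate(r, -a, reshape=False, order=1)
                      for r, a in zip(residuals, angles)])
    return np.median(derot, axis=0)
```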

  6. Image Post-Processing in Dental Practice

    OpenAIRE

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-01-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be practiced in the dental office by ...

  7. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Image processing boards for personal computers, as well as image processing systems for workstations, are now available and widely applied to medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides computer systems with vision capability, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to various fields such as remote measurement, object recognition and decision making in adverse environments, and analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are also analysed with a view to proposing ideas for future applications. (Author)

  8. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system structured around the Digital Equipment Corporation's PDP-8 minicomputer which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views

  9. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
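
    The general idea of compensating for defective picture elements can be sketched in a few lines: flag pixels that deviate strongly from their neighbourhood and substitute a local median. The sigma-based mask below is an illustrative assumption, not the tech brief's actual algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hotspots(image, nsigma=5.0):
    smooth = median_filter(image, size=3)           # neighbourhood estimate of each pixel
    resid = image - smooth
    hot = np.abs(resid) > nsigma * resid.std()      # crude hotspot / defect mask
    cleaned = image.copy()
    cleaned[hot] = smooth[hot]                      # replace flagged pixels with the local median
    return cleaned, hot
```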

  10. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C

    2007-01-01

    ADJUSTING PIXEL VALUES: Optimizing Contrast; Color Correction; Correcting Nonuniform Illumination; Geometric Transformations; Image Arithmetic. NEIGHBORHOOD OPERATIONS: Convolution; Other Neighborhood Operations; Statistical Operations. IMAGE PROCESSING IN THE FOURIER DOMAIN: The Fourier Transform; Removing Periodic Noise; Convolution and Correlation; Deconvolution; Other Transform Domains; Compression. BINARY IMAGES: Thresholding; Morphological Processing; Other Morphological Operations; Boolean Operations. MEASUREMENTS: Global Measurements; Feature Measurements; Classification. APPENDIX: SOFTWARE. REFERENCES AND LITERATURE. INDEX.

  11. Applications Of Image Processing In Criminalistics

    Science.gov (United States)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  12. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  13. Automatic testing with digital image processing

    International Nuclear Information System (INIS)

    Daum, W.; Rose, P.; Preuss, M.; Builtjes, J.H.

    1987-01-01

    Only a small part of the various applications of image processing in nondestructive materials testing could be presented. Digital image processing is an aid in the evaluation of conventional testing methods as well as in the development of new testing methods. Through image improvement, it increases the expressiveness of visual evaluations and makes time-consuming evaluation processes easier, especially by means of quantitative image analysis. Image processing contributes much to automation through the possibility of interpreting picture information with the help of the computer. (orig./DG) [de

  14. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods were used together in dealing with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; the thresholding segmentation is also based on a heat equation with PDEs; the central line is extracted based on the image skeleton, and branches are removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.

  15. Astronomical Software Directory Service

    Science.gov (United States)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  16. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  17. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  18. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    Science.gov (United States)

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  19. Programmable remapper for image processing

    Science.gov (United States)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  20. Image processing for medical diagnosis using CNN

    Science.gov (United States)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase in improving the accuracy both of the diagnostic procedure and of the surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field real-time processing is very important, but image processing is often quite a time-consuming phase. Therefore techniques able to speed up the elaboration play an important role. From this point of view, in this work a novel approach to image processing has been developed. The new idea is to use Cellular Neural Networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.

  2. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

    Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used, including optical density scan, histogram equalized scan, linear histogram based scan, linear histogram independent scan, linear optical density scan, logarithmic scan, and power square root scan. The histogram distributions of the scanned images were plotted and the ranges of the gray scale were compared among the various scan types. The scanned images were then transformed to the gray window by a palette fitting method and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and the images were processed by the Gray Scale Expansion (GSE) method. The patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distribution showed that minimum and maximum gray scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic method and the square root method, respectively. Out of the 256 gray-scale steps, only 7 to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images was improved. Experiments performed with patient images showed that improved identification of organs was achieved by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improved the image contrast of a linacgram. This indicates that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale. As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
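
    A minimal sketch of the gray-scale expansion idea described above: map the occupied portion of the gray range onto the full display range. The percentile clipping is an added robustness assumption, not part of the original method, and the function name is illustrative.

```python
import numpy as np

def gray_scale_expansion(image, out_max=255, low_pct=1, high_pct=99):
    # Find the gray levels actually used (with a little clipping for robustness)
    lo, hi = np.percentile(image, [low_pct, high_pct])
    # Linearly stretch that occupied range to [0, out_max]
    stretched = (image.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched, 0.0, 1.0) * out_max
```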

  3. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of all the experimental results and analysis carried out on medical images of diabetes-related causes. The experimental investigations have been carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. This book is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  4. Digital signal processing techniques and applications in radar image processing

    CERN Document Server

    Wang, Bu-Chin

    2008-01-01

    A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: Digital Signal Processing (DSP); antenna and radar operation; and algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and

  5. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    Science.gov (United States)

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; after determining an initial-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of the volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
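
    The authors apply a novel iterative deconvolution method which is not reproduced here; as a stand-in, the sketch below shows a generic Richardson-Lucy iteration, which conveys the same deblur-by-iteration idea for a known point spread function. All parameter values are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    # Start from a flat estimate at the mean intensity of the blurred image
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + eps)                  # data / current model
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```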

  6. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  7. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  8. Why Do We Need Image Processing?

    Science.gov (United States)

    MacDonald, Glen

    2013-01-01

    Image processing is often viewed as arbitrarily manipulating an image to achieve an aesthetic standard or to support a preferred reality. However, image processing is more accurately defined as a means of translation between the human visual system and digital imaging devices. The human visual system does not perceive the world in the same manner as digital detectors, with display devices imposing additional noise and bandwidth restrictions. Salient differences between the human and digital detectors will be shown, along with some basic processing steps for achieving translation. Image processing must be approached in a manner consistent with the scientific method so that others may reproduce, and validate, one's results. This includes recording and reporting processing actions, and applying similar treatments to adequate control images.

  9. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  10. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality performance is limited only by the output resolution. Real world images are taken with cameras that introduce some amount of image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match both virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared, measuring the contrast achieved in different final image regions.
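
    The matching step itself reduces to a convolution. Under the assumption that the measured PSF is well approximated by a Gaussian of width sigma_px pixels (the conversion from the slanted-edge MTF measurement to sigma_px is done beforehand and not shown), the rendered image can be degraded as in the sketch below; the function name is illustrative.

```python
from scipy.ndimage import gaussian_filter

def match_rendered_to_camera(rendered, sigma_px):
    # Blur the rendered image with a Gaussian approximating the camera PSF,
    # so its sharpness approximates that of the real photograph.
    return gaussian_filter(rendered, sigma=sigma_px)
```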

  11. Image processing in nondestructive testing

    International Nuclear Information System (INIS)

    Janney, D.H.

    1976-01-01

    In those applications where the principal desire is for higher throughput, the problem often becomes one of automatic feature extraction and mensuration. Classically these problems can be approached by means of either an optical image processor or an analysis in the digital computer. Optical methods have the advantages of low cost and very high speed, but are often inflexible and are sometimes very difficult to implement due to practical problems. Computerized methods can be very flexible, they can use very powerful mathematical techniques, but usually are difficult to implement for very high throughput. Recent technological developments in microprocessors and in electronic analog image analyzers may furnish the key to resolving the shortcomings of the two classical methods of image analysis

  12. Non-linear Post Processing Image Enhancement

    Science.gov (United States)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal to noise ratio, and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.

  13. Advances in image processing and pattern recognition

    International Nuclear Information System (INIS)

    Cappellini, V.

    1986-01-01

    The conference papers reported provide an authoritative and permanent record of the contributions. Some papers are more theoretical or of a review nature, while others contain new implementations and applications. They are conveniently grouped into the following 7 fields (after a general overview): Acquisition and Presentation of 2-D and 3-D Images; Static and Dynamic Image Processing; Determination of Object's Position and Orientation; Objects and Characters Recognition; Semantic Models and Image Understanding; Robotics and Computer Vision in Manufacturing; Specialized Processing Techniques and Structures. In particular, new digital image processing and recognition methods, implementation architectures and special advanced applications (industrial automation, robotics, remote sensing, biomedicine, etc.) are presented. (Auth.)

  14. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  15. An astronomical murder?

    Science.gov (United States)

    Belenkiy, Ari

    2010-04-01

    Ari Belenkiy examines the murder of Hypatia of Alexandria, wondering whether problems with astronomical observations and the date of Easter led to her becoming a casualty of fifth-century political intrigue.

  16. The amateur astronomer

    CERN Document Server

    Moore, Patrick

    2006-01-01

    Introduces astronomy and amateur observing together. This edition includes photographs and illustrations. The comprehensive appendices provide hints and tips, as well as data for every aspect of amateur astronomy. This work is useful for amateur astronomers

  17. Water surface capturing by image processing

    Science.gov (United States)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  18. The Astro-WISE approach to quality control for astronomical data

    NARCIS (Netherlands)

    Mc Farland, John; Helmich, Ewout M.; Valentijn, Edwin A.

    We present a novel approach to quality control during the processing of astronomical data. Quality control in the Astro-WISE Information System is integral to all aspects of data handing and provides transparent access to quality estimators for all stages of data reduction from the raw image to the

  19. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is widely used in the medical field and has already become very important, especially when used for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are first explained through the use of several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
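
    As a toy illustration of how "formulated intelligence" (here, a simple smoothness prior) enters a simulated annealing reconstruction, the sketch below perturbs pixels at random and accepts moves with the Metropolis criterion. The energy terms, cooling schedule and step size are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def energy(img, data, beta=0.5):
    fidelity = np.sum((img - data) ** 2)                 # agreement with the measured data
    smoothness = (np.abs(np.diff(img, axis=0)).sum()
                  + np.abs(np.diff(img, axis=1)).sum())  # prior knowledge: smooth images preferred
    return fidelity + beta * smoothness

def anneal(data, n_steps=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = np.random.default_rng(seed)
    img = data.astype(float)
    e, t = energy(img, data), t0
    for _ in range(n_steps):
        i = rng.integers(img.shape[0])
        j = rng.integers(img.shape[1])
        old = img[i, j]
        img[i, j] = old + rng.normal(scale=0.1)          # random perturbation of one pixel
        e_new = energy(img, data)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                                    # accept the move
        else:
            img[i, j] = old                              # reject the move
        t *= cooling                                     # cool down
    return img
```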

  20. Image processing and communications challenges 5

    CERN Document Server

    2014-01-01

    This textbook collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts. Part I deals with image processing. A comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered. In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems. There are theoretical aspects as well as application papers.

  1. An intelligent object recognizer and classification system for astronomical use

    Science.gov (United States)

    Bernat, Andrew P.; Mcgraw, John T.

    1986-01-01

    An account is given of an image-processing system based on AI concepts, which allows input images produced by the CCT/Transit Instrument to be compared with a hierarchy-like network of standard-object prototypes represented within the computer as 'frames'. Each frame contains information concerning either a standard object or the links among such objects. This method, by comparison to conventional, statistically-based pattern recognition systems, classifies data as an astronomer would and thereby lends credibility to its conclusions; it also furnishes a natural avenue for the machine's serendipitous discovery of new classes of objects.

  2. Digital radiography image quality: image processing and display.

    Science.gov (United States)

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony

    2007-06-01

    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and to make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  3. On some applications of diffusion processes for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Morfu, S., E-mail: smorfu@u-bourgogne.f [Laboratoire d' Electronique, Informatique et Image (LE2i), UMR Cnrs 5158, Aile des Sciences de l' Ingenieur, BP 47870, 21078 Dijon Cedex (France)

    2009-06-29

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes governed by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurry image. By contrast, anisotropic diffusion, described by the Perona and Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
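
    A compact sketch of the Perona-Malik anisotropic diffusion referred to above, using the exponential edge-stopping function. The parameter values are illustrative, and the Fisher-equation (reaction-diffusion) term that the author combines with it is not included.

```python
import numpy as np

def perona_malik(image, n_iter=20, kappa=20.0, dt=0.2):
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours (periodic borders via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(|grad u|) = exp(-(|grad u| / kappa)^2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        # Explicit update: diffuse strongly in flat areas, weakly across edges
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```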

  4. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  5. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  6. Applications of Digital Image Processing 11

    Science.gov (United States)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
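
    The Fourier-domain step can be illustrated with a minimal cross-correlation between two exposures (or interrogation windows): the location of the correlation peak gives the particle-pattern displacement. Sub-pixel peak fitting and windowing, which a practical implementation needs, are omitted, and this is not the authors' exact manipulation of the superimposed image.

```python
import numpy as np

def displacement(frame_a, frame_b):
    # FFT-based cross-correlation of the two frames
    fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    corr = np.fft.ifft2(fa * np.conj(fb)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak coordinates into signed shifts (dy, dx) of frame_a relative to frame_b
    return [p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape)]
```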

  7. Astronomical Instrumentation Systems Quality Management Planning: AISQMP (Abstract)

    Science.gov (United States)

    Goldbaum, J.

    2017-12-01

    (Abstract only) The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.

  8. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana

    2007-01-01

    It is often noted that celebrity endorsement advertising has been recognized as "an ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable reactions from consumers, that is, a higher level of attention to the advertising messages, better recall of the message and the brand name, and more favorable evaluation and purchasing intentions for the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Therefore marketers, led by the belief that celebrities are effective ambassadors in building a positive brand or company image and in improving competitive position, invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, along with recommendations for a more effective use of this kind of advertising.

  9. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the micro satellite, one kind of tiny spacecraft, has appeared during the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, even less than 50 kilograms, which makes them slightly larger or smaller than common miniature refrigerators. However, the optical system design is hard to make perfect due to the satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet the application needs. Spatial resolution is the key problem. As for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, joining the pan-sharpening and super resolution techniques to deal with the spatial resolution shortcomings of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and give the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
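
    As one concrete example of the fusion half of such a framework, the sketch below applies a standard Brovey-style intensity substitution; it illustrates pan-sharpening in general and is not the authors' algorithm. Here `ms` is assumed to be a multispectral stack already resampled to the panchromatic grid.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """ms: (bands, ny, nx) multispectral stack upsampled to the pan grid; pan: (ny, nx)."""
    intensity = ms.mean(axis=0)           # synthetic low-resolution intensity
    gain = pan / (intensity + eps)        # per-pixel sharpening gain from the pan band
    return ms * gain[None, :, :]          # inject pan detail into every band
```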

  10. International Conference on Image Processing and Communications

    CERN Document Server

    2016-01-01

    This book contains papers accepted for IP&C 2015, the International Conference on Image Processing and Communications, held at UTP University of Science and Technology, Bydgoszcz, Poland, September 9-11, 2015. This conference was the eighth edition in the IP&C series of annual conferences. This book and the conference have the aim to bring together researchers and scientists in the broad fields of image processing and communications, addressing recent advances in theory, methodology and applications. The book will be of interest to a large group of researchers, engineers and practitioners in image processing and communications.

  11. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  12. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  13. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  14. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    Digital image processing is used for improvement of pictorial information for human interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  15. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, compression of dynamic range, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection, and edge detection have been studied. Some of the smoothing and sharpening filters have also been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied for face and character recognition. (author)

  16. Processing Welding Images For Robot Control

    Science.gov (United States)

    Richardson, Richard W.

    1988-01-01

    Image data from two distinct windows used to locate weld features. Analyzer part of vision system described in companion article, "Image Control in Automatic Welding Vision System" (MFS-26035). Horizontal video lines define windows for viewing unwelded joint and weld pool. Data from picture elements outside windows not processed. Widely-separated local features carry no significance, but closely spaced features indicate welding feature. Image processor assigns confidence level to group of local features according to spacing and pattern.

  17. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases with abnormal synovial processes. The abnormalities include Baker cysts, semimembranous bursitis, chronic shoulder bursitis, peroneal tendon ganglion cyst, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining the extent and, to a limited degree, characterizing its makeup

  18. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  19. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  20. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. An image inversion can also be obtained without reconfiguration of the network, whereas a gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented

  1. ATV: Image display tool

    Science.gov (United States)

    Barth, Aaron J.; Schlegel, David; Finkbeiner, Doug; Colley, Wesley; Liu, Mike; Brauher, Jim; Cunningham, Nathaniel; Perrin, Marshall; Roe, Henry; Weaver, Hal

    2014-05-01

    ATV displays and analyses astronomical images using the IDL image-processing language. It allows interactive control of the image scaling, color table, color stretch, and zoom, with support for world coordinate systems. It also does point-and-click aperture photometry, simple spectral extractions, and can produce publication-quality postscript output images.
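
    The point-and-click aperture photometry ATV offers comes down to a sum-minus-sky calculation; the sketch below is a numpy analogue of that operation, not ATV's IDL code, and the aperture and annulus radii are illustrative.

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0)             # distance of each pixel from the source
    aperture = r <= r_ap                       # circular source aperture
    annulus = (r >= r_in) & (r <= r_out)       # surrounding sky annulus
    sky = np.median(image[annulus])            # per-pixel background estimate
    flux = image[aperture].sum() - sky * aperture.sum()
    return flux, sky
```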

  2. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    Science.gov (United States)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With the expanding size and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies in order to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting and the interchange of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.

  3. A gamma cammera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

    A microcomputer-based gamma camera image processing system is introduced. Compared with other systems, the feature of this system is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller. Thus the picture processing has been speeded up and the function-to-expense ratio of the system raised

  4. Digital Image Processing in Private Industry.

    Science.gov (United States)

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  5. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
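
    The core/edge part of such a pixel-level classification can be sketched with a single erosion: foreground pixels that survive the erosion are 'core', the remaining foreground pixels are 'edge'. The full perforated/patch distinction requires additional steps not shown here, and the edge width parameter is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def core_edge(binary_map, edge_width=1):
    binary_map = np.asarray(binary_map, dtype=bool)
    structure = np.ones((2 * edge_width + 1,) * 2, dtype=bool)
    core = binary_erosion(binary_map, structure=structure)   # interior pixels
    edge = binary_map & ~core                                 # foreground pixels near a boundary
    return core, edge
```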

  6. The Red Rectangle: An Astronomical Example of Mach Bands?

    Science.gov (United States)

    Brecher, K.

    2005-12-01

    Recently, the Hubble Space Telescope (HST) produced spectacular images of the "Red Rectangle". This appears to be a binary star system undergoing recurrent mass loss episodes. The image-processed HST photographs display distinctive diagonal lightness enhancements. Some of the visual appearance undoubtedly arises from actual variations in the luminosity distribution of the light of the nebula itself, i.e., due to limb brightening. Psychophysical enhancement similar to the Vasarely or pyramid effect also seems to be involved in the visual impression conveyed by the HST images. This effect is related to Mach bands (as well as to the Chevreul and Craik-O'Brien-Cornsweet effects). The effect can be produced by stacking concentric squares (or other geometrical figures such as rectangles or hexagons) of linearly increasing or decreasing size and lightness, one on top of another. We have constructed controllable Flash applets of this effect as part of the NSF supported "Project LITE: Light Inquiry Through Experiments". They can be found in the vision section of the LITE web site at http://lite.bu.edu. Mach band effects have previously been seen in medical x-ray images. Here we report for the first time the possibility that such effects play a role in the interpretation of astronomical images. Specifically, we examine to what extent the visual impressions of the Red Rectangle and other extended astronomical objects are purely physical (photometric) in origin and to what degree they are enhanced by psychophysical processes. To help assess the relative physical and psychophysical contributions to the perceived lightness effects, we have made use of a center-surround (Difference of Gaussians) filter we developed for MatLab. We conclude that local (lateral inhibition) and longer range human visual perception effects probably do contribute to the lightness features seen in astronomical objects like the Red Rectangle. Project LITE is supported by NSF Grant # DUE-0125992.
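
    A center-surround (Difference of Gaussians) filter of the kind described can be written in a few lines; the sketch below is a numpy/scipy analogue of such a filter (the authors' version was written for MATLAB), with illustrative sigma values.

```python
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_center=1.0, sigma_surround=4.0):
    # Narrow "center" response minus broad "surround" response,
    # mimicking lateral inhibition in the visual system.
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return center - surround
```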

  7. Checking Fits With Digital Image Processing

    Science.gov (United States)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors is feasible. The report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components were examined: a mechanical mating flange and an electrical plug.

  8. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  9. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    č. 53 (2003), s. 44-45 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z1075907 Keywords : art conservation * digital image processing * change detection Subject RIV: JD - Computer Applications, Robotics

  10. Atlas of Astronomical Discoveries

    CERN Document Server

    Schilling, Govert

    2011-01-01

    Four hundred years ago in Middelburg, in the Netherlands, the telescope was invented. The invention unleashed a revolution in the exploration of the universe. Galileo Galilei discovered mountains on the Moon, spots on the Sun, and moons around Jupiter. Christiaan Huygens saw details on Mars and rings around Saturn. William Herschel discovered a new planet and mapped binary stars and nebulae. Other astronomers determined the distances to stars, unraveled the structure of the Milky Way, and discovered the expansion of the universe. And, as telescopes became bigger and more powerful, astronomers delved deeper into the mysteries of the cosmos. In his Atlas of Astronomical Discoveries, astronomy journalist Govert Schilling tells the story of 400 years of telescopic astronomy. He looks at the 100 most important discoveries since the invention of the telescope. In his direct and accessible style, the author takes his readers on an exciting journey encompassing the highlights of four centuries of astronomy. Spectacul...

  11. Fragmentation measurement using image processing

    Directory of Open Access Journals (Sweden)

    Farhang Sereshki

    2016-12-01

    Full Text Available In this research, the existing problems in fragmentation measurement are first reviewed with a view to fast and reliable evaluation, and the available methods for evaluating blast results are outlined. The errors produced in recognizing rock fragments with computer-aided methods, and the importance of determining fragment sizes in image-analysis methods, are described. After a review of previous work, an algorithm is proposed for the automated determination of rock-particle boundaries in MATLAB. The method delineates particle boundaries automatically in minimal time. Its results are compared with those of the Split Desktop and GoldSize software in both automated and manual modes. Comparison of the size-distribution curves extracted by the different methods reveals that the proposed approach is accurate for measuring the size distribution of laboratory samples, whereas the manual determination of boundaries in the conventional software is very time-consuming and the results of their automated delineation differ considerably from the true values owing to errors in separating the objects.
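
    A minimal sketch of the kind of automated fragment sizing described above, assuming a reasonably clean grayscale image and a simple global threshold (the boundary-detection algorithm in the paper is more elaborate):

```python
import numpy as np
from scipy import ndimage

def fragment_areas(gray, threshold=None):
    """Threshold the image, label connected fragments, and return their
    pixel areas, from which a size-distribution curve can be built."""
    img = np.asarray(gray, dtype=float)
    if threshold is None:
        threshold = img.mean()  # crude global threshold (assumption)
    binary = img > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.array([])
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return np.sort(areas)
```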

  12. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance caused by shadows and to enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines shadow-removal algorithms based on integration, on the illumination surface, and on texture, introduces their working principles and implementation, and shows through tests that shadows can be processed effectively.

  13. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different ...

  14. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    The research is done to provide an appropriate detection technique for skin tumors. The work is carried out using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they are a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing and hence can offer both more sophisticated performance at simple tasks and the implementation of methods that would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done at the cellular scale on images of skin. This research proposes a few checks for the early detection of skin tumors using microscopic images, arrived at after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient and appropriate for tumor detection; the algorithm provides promising results in less time and with good accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly. (author)

  15. Image processing in the digital tomosynthesis for pulmonary imaging

    International Nuclear Information System (INIS)

    Sone, S.; Kasuga, T.; Sakai, F.; Kawai, T.; Oguchi, K.; Hirano, H.; Li, F.; Kubo, K.; Honda, T.; Haniuda, M.; Takemura, K.; Hosoba, M.

    1995-01-01

    Digital tomosynthesis makes it possible to reconstruct multiple tomographs from digital data obtained during a single tomographic motion and permits digital processing, which adds a number of special advantages to the well-known advantages of conventional tomography. We performed digital tomosynthesis with a fluororadiographic TV unit with tomographic function which was capable of producing pulsed low- and high-energy X-rays alternately, and we studied digital image processing to improve the image clarity of the reconstructed tomographs. To identify the optimal parameters for processing image data by means of spatial frequency filtration we evaluated the spatial frequency distribution of image data in linear tomographs of the lung, and on the basis of the results of this study we developed several types of digital image processing to reduce tomographic blur and system noise, to improve visualisation of faint opacities, to reduce resistant tomographic blur as well as overall blur, and to generate low-noise bone images based on dual-energy subtraction tomosynthesis. (orig.)

  16. Early Skin Tumor Detection from Microscopic Images through Image Processing

    Directory of Open Access Journals (Sweden)

    AYESHA AMIR SIDDIQI

    2017-10-01

    Full Text Available The research is done to provide an appropriate detection technique for skin tumors. The work is carried out using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they are a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing and hence can offer both more sophisticated performance at simple tasks and the implementation of methods that would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done at the cellular scale on images of skin. This research proposes a few checks for the early detection of skin tumors using microscopic images, arrived at after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient and appropriate for tumor detection; the algorithm provides promising results in less time and with good accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly.

  17. Elements of image processing in localization microscopy

    International Nuclear Information System (INIS)

    Rees, Eric J; Erdelyi, Miklos; Schierle, Gabriele S Kaminski; Kaminski, Clemens F; Knight, Alex

    2013-01-01

    Localization microscopy software generally contains three elements: a localization algorithm to determine fluorophore positions on a specimen, a quality control method to exclude imprecise localizations, and a visualization technique to reconstruct an image of the specimen. Such algorithms may be designed for either sparse or partially overlapping (dense) fluorescence image data, and making a suitable choice of software depends on whether an experiment calls for simplicity and resolution (favouring sparse methods), or for rapid data acquisition and time resolution (requiring dense methods). We discuss the factors involved in this choice. We provide a full set of MATLAB routines as a guide to localization image processing, and demonstrate the usefulness of image simulations as a guide to the potential artefacts that can arise when processing over-dense experimental fluorescence images with a sparse localization algorithm. (special issue article)
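
    A minimal sketch of the sparse-localization idea discussed above (not the authors' MATLAB routines): isolated spots are segmented by a threshold and each fluorophore position is estimated as an intensity-weighted centroid; real software would add Gaussian fitting and precision-based quality control.

```python
import numpy as np
from scipy import ndimage

def localize_sparse(frame, threshold):
    """Return estimated (row, col) positions of well-separated fluorophores
    in a single frame, as intensity-weighted centroids of bright spots."""
    frame = np.asarray(frame, dtype=float)
    labels, n = ndimage.label(frame > threshold)
    if n == 0:
        return []
    return ndimage.center_of_mass(frame, labels, index=range(1, n + 1))
```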

  18. Adaptive filters for color image processing

    Directory of Open Access Journals (Sweden)

    Papanikolaou V.

    1998-01-01

    Full Text Available The color filters that are used to attenuate noise are usually optimized to perform extremely well when dealing with certain noise distributions. Unfortunately it is often the case that the noise corrupting the image is not known. It is thus beneficial to know a priori the type of noise corrupting the image in order to select the optimal filter. A method of extracting and characterizing the noise within a digital color image using the generalized Gaussian probability density function (pdf) (B.D. Jeffs and W.H. Pun, IEEE Transactions on Image Processing, 4(10), 1451–1456, 1995, and Proceedings of the Int. Conference on Image Processing, 465–468, 1996) is presented. In this paper simulation results are included to demonstrate the effectiveness of the proposed methodology.
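
    As a rough, hedged illustration of the noise-characterization step (a crude moment-based stand-in for the generalized Gaussian pdf fit cited above): the excess kurtosis of the noise residual separates approximately Gaussian noise from heavier-tailed, impulsive-like noise, which can then guide filter selection.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_excess_kurtosis(channel):
    """Estimate noise as the difference from a median-smoothed copy and
    return its excess kurtosis: ~0 suggests Gaussian-like noise, strongly
    positive suggests heavier-tailed (e.g. impulsive) noise."""
    channel = np.asarray(channel, dtype=float)
    residual = (channel - median_filter(channel, size=3)).ravel()
    residual -= residual.mean()
    m2 = np.mean(residual ** 2)
    m4 = np.mean(residual ** 4)
    return m4 / (m2 ** 2) - 3.0
```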

  19. Adaptive filters for color image processing

    Directory of Open Access Journals (Sweden)

    V. Papanikolaou

    1999-01-01

    Full Text Available The color filters that are used to attenuate noise are usually optimized to perform extremely well when dealing with certain noise distributions. Unfortunately it is often the case that the noise corrupting the image is not known. It is thus beneficial to know a priori the type of noise corrupting the image in order to select the optimal filter. A method of extracting and characterizing the noise within a digital color image using the generalized Gaussian probability density function (pdf) (B.D. Jeffs and W.H. Pun, IEEE Transactions on Image Processing, 4(10), 1451–1456, 1995, and Proceedings of the Int. Conference on Image Processing, 465–468, 1996) is presented. In this paper simulation results are included to demonstrate the effectiveness of the proposed methodology.

  20. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measurements of flow patterns occurring in natural circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tapes. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behaviors of various eddies, with the low velocities and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
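
    A simplified sketch of how velocity vectors can be derived from two consecutive particle images, assuming bright particles on a dark background and a fixed frame interval (the threshold and nearest-neighbour matching are illustrative choices, not the system described above):

```python
import numpy as np
from scipy import ndimage

def particle_centroids(frame, threshold):
    """Centroids of bright particle images in one digitized video frame."""
    frame = np.asarray(frame, dtype=float)
    labels, n = ndimage.label(frame > threshold)
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def velocity_vectors(frame_a, frame_b, dt, threshold, max_disp=5.0):
    """Match each particle in frame_a to its nearest neighbour in frame_b
    and return (position, velocity) pairs for plausible matches."""
    pa = particle_centroids(frame_a, threshold)
    pb = particle_centroids(frame_b, threshold)
    vectors = []
    for p in pa:
        if len(pb) == 0:
            break
        d = np.linalg.norm(pb - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            vectors.append((p, (pb[j] - p) / dt))
    return vectors
```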

  1. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays uses non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features that frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects the tumor location in MR images; its features are extracted by the new shearlet transform.

  2. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the faithful perception of the CP direction of a one-pixel minority value among the majority value of a 2×2-pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded-image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to

  3. Poznań Astronomical Observatory

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    The Poznań Astronomical Observatory is a unit of the Adam Mickiewicz University, located in Poznań, Poland. From its foundation in 1919, it has specialized in astrometry and celestial mechanics (reference frames, dynamics of satellites and small solar system bodies). Recently, research activities have also included planetary and stellar astrophysics (asteroid photometry, cataclysmic b...

  4. Astronomical Spectroscopy -16 ...

    Indian Academy of Sciences (India)

    led by the famous French astronomer, Pierre Janssen. Both were conducting pioneering spectroscopic observations of the eclipsed sun. They were particularly interested in the spectrum of the chromosphere, which flashes out at the beginning and the end of totality. They noticed many lines due to known elements,.

  5. Old Georgian Astronomical Manuscripts

    Science.gov (United States)

    Simonia, I.

    2004-12-01

    A general overview of Georgian astronomical manuscripts is given, and the contents of a few, dating from the 12th to the 19th centuries, are given. A partial translation and commentary of manuscript A883, entitled "Cosmos", and dating from the 18th century, is presented.

  6. CT image processing using digital networks

    International Nuclear Information System (INIS)

    Rhodes, M.L.

    1985-01-01

    Several digital image transmission networks have been proposed, studied, and measured for local-area medical applications. The image processing service described here uses a commercial digital network to connect the computers of CT scanners. The network service shares image processing tasks with remote sites during the times that the scanners are otherwise idle. The network is nationwide, emphasizes resource sharing, uses moderate-bandwidth (4800 or 9600 baud, i.e., 480 or 960 characters/sec) dedicated leased telephone lines, and is restricted at this time to only a few types of CT scanner. Furthermore, because it offers services that are not interactive, it is able to optimize computer resources without routine interruption from users. Discussion begins by a brief overview of classic computer network topologies, with an emphasis on their application to CT, magnetic resonance imaging (MRI), and other medical imaging modalities. This somewhat technical introduction to networks demonstrates the unique characteristics of medical image communication as opposed to the more common applications of computer communication in the banking, retail, and management information industries. In the sections that follow, a more selective focus is made on the topology, hardware, image-processing, and operational characteristics of a network that is now composed of over 50 CT scanner systems throughout the United States. The chapter concludes by summarizing the network performance during its first 35 months of operation

  7. Bistatic SAR: Signal Processing and Image Formation.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and, finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.

  8. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  9. Powerful Radio Burst Indicates New Astronomical Phenomenon

    Science.gov (United States)

    2007-09-01

    Astronomers studying archival data from an Australian radio telescope have discovered a powerful, short-lived burst of radio waves that they say indicates an entirely new type of astronomical phenomenon. [Image: visible-light (negative greyscale) and radio (contours) view of the Small Magellanic Cloud and the area where the burst originated. Credit: Lorimer et al., NRAO/AUI/NSF] "This burst appears to have originated from the distant Universe and may have been produced by an exotic event such as the collision of two neutron stars or the death throes of an evaporating black hole," said Duncan Lorimer, Assistant Professor of Physics at West Virginia University (WVU) and the National Radio Astronomy Observatory (NRAO). The research team led by Lorimer consists of Matthew Bailes of Swinburne University in Australia, Maura McLaughlin of WVU and NRAO, David Narkevic of WVU, and Fronefield Crawford of Franklin and Marshall College in Lancaster, Pennsylvania. The astronomers announced their findings in the September 27 issue of the online journal Science Express. The startling discovery came as WVU undergraduate student David Narkevic re-analyzed data from observations of the Small Magellanic Cloud made by the 210-foot Parkes radio telescope in Australia. The data came from a survey of the Magellanic Clouds that included 480 hours of observations. "This survey had sought to discover new pulsars, and the data already had been searched for the type of pulsating signals they produce," Lorimer said. "We re-examined the data, looking for bursts that, unlike the usual ones from pulsars, are not periodic," he added. The survey had covered the Magellanic Clouds, a pair of small galaxies in orbit around our own Milky Way Galaxy. Some 200,000 light-years from Earth, the Magellanic Clouds are prominent features in the Southern sky. Ironically, the new discovery is not part of these galaxies, but rather is much more distant

  10. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  11. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  12. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  13. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. The concept compares computed images with reference images of the street in order to determine the traffic level percentage and to set the traffic-signal timing accordingly, thereby reducing stoppage at traffic lights. It aims to solve real-life street scenarios by enriching traffic lights with image receivers such as HD cameras and image processors. The input is imported into MATLAB and used to calculate the traffic on the roads; the results are then used to adjust the traffic-light timings on a particular street. The approach is also compared with other similar proposals, with the added value of solving a real, large instance.
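
    A minimal sketch of the frame-comparison idea described above, assuming a grayscale reference image of the empty road and an illustrative mapping from occupancy to green-light time (the thresholds and scaling are assumptions, not values from the paper):

```python
import numpy as np

def traffic_level_percent(reference, current, diff_threshold=30):
    """Fraction of road pixels that differ noticeably from the empty-road
    reference image, expressed as a rough traffic-level percentage."""
    ref = np.asarray(reference, dtype=int)
    cur = np.asarray(current, dtype=int)
    diff = np.abs(cur - ref)
    return 100.0 * float((diff > diff_threshold).mean())

def green_time_seconds(level_percent, base=10.0, scale=0.5):
    """Map the estimated traffic percentage to a green-signal duration."""
    return base + scale * level_percent
```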

  14. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    The requirements of image processing applications can best be met by using a distributed environment. This report presents a system that draws inferences by utilizing existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing so as to make it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are general and modular, so it can also be used for other applications that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them; the server distributes the task among the workers, which carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language, and can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed-systems concepts and their use for resource-hungry jobs. The image processing application was developed in this environment.

  15. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cases are recognized from the extracted data. To evaluate the performance of the proposed system, a microarray database is employed that includes breast cancer, myeloid leukemia and lymphoma data from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
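
    To illustrate the gridding step mentioned above (locating the spot grid before raw-data extraction), here is a simplified sketch that finds spot rows and columns as peaks in the image's intensity projections; the spacing parameter is an assumption, and real pipelines also correct rotation and fit each spot individually.

```python
import numpy as np
from scipy.signal import find_peaks

def grid_spot_centers(array_image, min_spacing=8):
    """Estimate microarray grid positions from the row and column mean
    intensity profiles; spot centres appear as local peaks."""
    img = np.asarray(array_image, dtype=float)
    row_profile = img.mean(axis=1)
    col_profile = img.mean(axis=0)
    row_peaks, _ = find_peaks(row_profile, distance=min_spacing)
    col_peaks, _ = find_peaks(col_profile, distance=min_spacing)
    return row_peaks, col_peaks
```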

  16. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  17. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions and the height at which images are taken, have to be considered to ensure image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of the PCB manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, when any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a significant share of the cost of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs. This paper addresses a method to identify the defects in natural PCB images, together with the associated practical issues, using software tools; some of the major types of single-layer PCB defects are pattern cuts, pin holes, pattern shorts, nicks, etc. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
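
    A minimal sketch of reference-comparison defect detection in the spirit described above, assuming a registered, defect-free reference image and a test image of the same panel (the threshold and minimum defect size are illustrative assumptions):

```python
import numpy as np

def pcb_defect_mask(reference, test, threshold=128):
    """Binarize both images (copper vs. substrate) and XOR them so that
    only mismatched pixels - candidate pattern cuts, shorts, pin holes,
    or nicks - remain set."""
    ref_bin = np.asarray(reference) > threshold
    test_bin = np.asarray(test) > threshold
    return np.logical_xor(ref_bin, test_bin)

def needs_reprocessing(reference, test, min_defect_pixels=20):
    """Flag the panel if the mismatch area is larger than trivial noise."""
    return int(pcb_defect_mask(reference, test).sum()) >= min_defect_pixels
```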

  18. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  19. The Automated Astronomic Positioning System (AAPS)

    Science.gov (United States)

    Williams, O. N.

    1973-01-01

    Two prototype systems of The Automated Astronomic Positioning System (AAPS) have been delivered to Defense Mapping Agency (DMA). The AAPS was developed to automate and expedite the determination of astronomic positions (latitude and longitude). This equipment is capable of defining astronomic positions to an accuracy sigma = 0.3 in each component within a two hour span of stellar observations which are acquired automatically. The basic concept acquires observations by timing stellar images as they cross a series of slits, comparing these observations to a stored star catalogue, and automatically deducing position and accuracy by least squares using pre-set convergence criteria. An exhaustive DMA operational test program has been initiated to evaluate the capabilities of the AAPS in a variety of environments (both climatic and positional). Status of the operational test is discussed.

  20. Image gathering and processing - Information and fidelity

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
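
    For readers unfamiliar with the Wiener restoration referred to above, a minimal frequency-domain sketch is shown below. It uses the classical formulation with a scalar noise-to-signal regularizer; the paper's formulation additionally accounts for aliasing from insufficient sampling, which is omitted here.

```python
import numpy as np

def wiener_restore(degraded, otf, noise_to_signal=0.01):
    """Classical Wiener restoration: multiply the degraded spectrum by
    conj(H) / (|H|^2 + NSR), where H is the optical transfer function
    sampled on the same frequency grid as the image."""
    G = np.fft.fft2(degraded)
    H = np.asarray(otf, dtype=complex)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))
```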

  1. Global image processing operations on parallel architectures

    Science.gov (United States)

    Webb, Jon A.

    1990-09-01

    Image processing operations fall into two classes: local and global. Local operations affect only a small corresponding area in the output image, and include edge detection, smoothing, and point operations. In global operations any input pixel can affect any or a large number of output data. Global operations include histogram, image warping, Hough transform, and connected components. Parallel architectures offer a promising method for speeding up these image processing operations. Local operations are easy to parallelize, because the input data can be divided among processors, processed in parallel separately, then the outputs can be combined by concatenation. Global operations are harder to parallelize. In fact, some global operations cannot be executed in parallel; it is possible for a global operation to require serial execution for correct computation of the result. However, an important class of global operations, namely those that are reversible-that can be computed in forward or reverse order on a data structure-can be computed in parallel using a restricted form of divide and conquer called split and merge. These reversible operations include the global operations mentioned above, and many more besides-even such non-image processing operations as parsing, string search, and sorting. The split and merge method will be illustrated by application of it to these algorithms. Performance analysis of the method on different architectures-one-dimensional, two-dimensional, and binary tree processor arrays will be demonstrated.
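
    As a small illustration of the split-and-merge pattern for a global operation of the kind listed above, the sketch below computes an image histogram by splitting the image into tiles, processing them independently, and merging the partial results by summation; this is an assumed example, not code from the paper.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _tile_histogram(tile):
    """Split step: each worker histograms its own tile independently."""
    return np.bincount(tile.ravel(), minlength=256)

def parallel_histogram(image_u8, workers=4):
    """Merge step: partial histograms are combined by summation, which is
    valid because the histogram does not depend on processing order."""
    tiles = np.array_split(image_u8, workers, axis=0)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(_tile_histogram, tiles))
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
    assert np.array_equal(parallel_histogram(img),
                          np.bincount(img.ravel(), minlength=256))
```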

  2. The Infrared Astronomical Mission AKARI*

    Science.gov (United States)

    Murakami, Hiroshi; Baba, Hajime; Barthel, Peter; Clements, David L.; Cohen, Martin; Doi, Yasuo; Enya, Keigo; Figueredo, Elysandra; Fujishiro, Naofumi; Fujiwara, Hideaki; Fujiwara, Mikio; Garcia-Lario, Pedro; Goto, Tomotsugu; Hasegawa, Sunao; Hibi, Yasunori; Hirao, Takanori; Hiromoto, Norihisa; Hong, Seung Soo; Imai, Koji; Ishigaki, Miho; Ishiguro, Masateru; Ishihara, Daisuke; Ita, Yoshifusa; Jeong, Woong-Seob; Jeong, Kyung Sook; Kaneda, Hidehiro; Kataza, Hirokazu; Kawada, Mitsunobu; Kawai, Toshihide; Kawamura, Akiko; Kessler, Martin F.; Kester, Do; Kii, Tsuneo; Kim, Dong Chan; Kim, Woojung; Kobayashi, Hisato; Koo, Bon Chul; Kwon, Suk Minn; Lee, Hyung Mok; Lorente, Rosario; Makiuti, Sin'itirou; Matsuhara, Hideo; Matsumoto, Toshio; Matsuo, Hiroshi; Matsuura, Shuji; MÜller, Thomas G.; Murakami, Noriko; Nagata, Hirohisa; Nakagawa, Takao; Naoi, Takahiro; Narita, Masanao; Noda, Manabu; Oh, Sang Hoon; Ohnishi, Akira; Ohyama, Youichi; Okada, Yoko; Okuda, Haruyuki; Oliver, Sebastian; Onaka, Takashi; Ootsubo, Takafumi; Oyabu, Shinki; Pak, Soojong; Park, Yong-Sun; Pearson, Chris P.; Rowan-Robinson, Michael; Saito, Toshinobu; Sakon, Itsuki; Salama, Alberto; Sato, Shinji; Savage, Richard S.; Serjeant, Stephen; Shibai, Hiroshi; Shirahata, Mai; Sohn, Jungjoo; Suzuki, Toyoaki; Takagi, Toshinobu; Takahashi, Hidenori; TanabÉ, Toshihiko; Takeuchi, Tsutomu T.; Takita, Satoshi; Thomson, Matthew; Uemizu, Kazunori; Ueno, Munetaka; Usui, Fumihiko; Verdugo, Eva; Wada, Takehiko; Wang, Lingyu; Watabe, Toyoki; Watarai, Hidenori; White, Glenn J.; Yamamura, Issei; Yamauchi, Chisato; Yasuda, Akiko

    2007-10-01

    AKARI, the first Japanese satellite dedicated to infrared astronomy, was launched on 2006 February 21, and started observations in May of the same year. AKARI has a 68.5 cm cooled telescope, together with two focal-plane instruments, which survey the sky in six wavelength bands from the mid- to far-infrared. The instruments also have the capability for imaging and spectroscopy in the wavelength range 2 - 180 micron in the pointed observation mode, occasionally inserted into the continuous survey operation. The in-orbit cryogen lifetime is expected to be one and a half years. The All-Sky Survey will cover more than 90 percent of the whole sky with higher spatial resolution and wider wavelength coverage than that of the previous IRAS all-sky survey. Point source catalogues of the All-Sky Survey will be released to the astronomical community. The pointed observations will be used for deep surveys of selected sky areas and systematic observations of important astronomical targets. These will become an additional future heritage of this mission.

  3. Astronomical Signatures of Dark Matter

    Directory of Open Access Journals (Sweden)

    Paul Gorenstein

    2014-01-01

    Full Text Available Several independent astronomical observations in different wavelength bands reveal the existence of much larger quantities of matter than what we would deduce from assuming a solar mass to light ratio. They are very high velocities of individual galaxies within clusters of galaxies, higher than expected rotation rates of stars in the outer regions of galaxies, 21 cm line studies indicative of increasing mass to light ratios with radius in the halos of spiral galaxies, hot gaseous X-ray emitting halos around many elliptical galaxies, and clusters of galaxies requiring a much larger component of unseen mass for the hot gas to be bound. The level of gravitational attraction needed for the spatial distribution of galaxies to evolve from the small perturbations implied by the very slightly anisotropic cosmic microwave background radiation to its current web-like configuration requires much more mass than is observed across the entire electromagnetic spectrum. Distorted shapes of galaxies and other features created by gravitational lensing in the images of many astronomical objects require an amount of dark matter consistent with other estimates. The unambiguous detection of dark matter and more recently evidence for dark energy has positioned astronomy at the frontier of fundamental physics as it was in the 17th century.

  4. Support Routines for In Situ Image Processing

    Science.gov (United States)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epi-polar aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a Left-Right stereo disparity image against a Right-Left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  5. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  6. Contour Stencils and Variational Image Processing

    Science.gov (United States)

    Getreuer, Pascal Tom

    The first part of this thesis is on contour stencils, a new method for edge-adaptive image processing. We focus particularly on image zooming, which is the problem of increasing the resolution of a given image. An important aspect of zooming is accurate estimation of edge orientations. Contour stencils are a new method for estimating image contours based on total variation along curves. Contour stencils are applied in designing several edge-adaptive color zooming methods. These zooming methods fall at different points in the balance between speed and quality. One of these zooming methods, contour stencil windowed zooming, is particularly successful. Although most zooming methods require either solving a large linear system or running many iterations, this method has linear complexity in the number of pixels and can be computed in a single pass through the image. The zoomed image is constructed as a function that may be sampled anywhere, enabling arbitrary resampling operations. Comparisons show that contour stencil zooming methods are competitive with existing methods. Applications of contour stencils to corner detection and image enhancement are also illustrated. The second part of this thesis is on topics in variational image processing. First, we apply variational techniques to formulate a total variation optimal prediction in Harten multiresolution schemes. We show that this prediction is well-defined, construct a Harten multiresolution using this prediction, and show that a modified encoding strategy is possible for approximation using the scheme. We also investigate the efficient numerical solution of the prediction and compare several different algorithms. Examples show that image approximation with this scheme is competitive with the CDF 9/7 wavelet. Next, we investigate nonconvex potentials in variational image problems. For the approximate solution of these nonconvex problems, we develop a particle swarm optimization like algorithm that avoids becoming

  7. Astronomers as Software Developers

    Science.gov (United States)

    Pildis, Rachel A.

    2016-01-01

    Astronomers know that their research requires writing, adapting, and documenting computer software. Furthermore, they often have to learn new computer languages and figure out how existing programs work without much documentation or guidance and with extreme time pressure. These are all skills that can lead to a software development job, but recruiters and employers probably won't know that. I will discuss all the highly useful experience that astronomers may not know that they already have, and how to explain that knowledge to others when looking for non-academic software positions. I will also talk about some of the pitfalls I have run into while interviewing for jobs and working as a developer, and encourage you to embrace the curiosity employers might have about your non-standard background.

  8. Enthusiastic Little Astronomers

    Science.gov (United States)

    Novak, Ines

    2016-04-01

    Younger primary school students often show great interest in the vast Universe hiding behind the starry night's sky, but don't have a way of learning about it and exploring it in regular classes. Some of them would search children's books, Internet or encyclopedias for information or facts they are interested in, but there are those whose hunger for knowledge would go unfulfilled. Such students were the real initiators of our extracurricular activity called Little Astronomers. With great enthusiasm they would name everything that interests them about the Universe that we live in and I would provide the information in a fun and interactive yet acceptable way for their level of understanding. In our class we learn about Earth and its place in the Solar System, we learn about the planets and other objects of our Solar System and about the Sun itself. We also explore the night sky using programs such as Stellarium, learning to recognize constellations and name them. Most of our activities are done using a PowerPoint presentation, YouTube videos, and Internet simulations followed by some practical work the students do themselves. Because of the lack of available materials and funds, most of materials are hand made by the teacher leading the class. We also use the school's galileoscope as often as possible. Every year the students are given the opportunity to go to an observatory in a town 90 km away so that they could gaze at the sky through the real telescope for the first time. Our goal is to start stepping into the world of astronomy by exploring the secrets of the Universe and understanding the process of rotation and revolution of our planet and its effects on our everyday lives and also to become more aware of our own role in our part of the Universe. The hunger for knowledge and enthusiasm these students have is contagious. They are becoming more aware of their surroundings and also understanding their place in the Universe that helps them remain humble and helps

  9. South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1987-01-01

    Work at the South African Astronomical Observatory (SAAO) in recent years, by both staff and visitors, has made major contributions to the fields of astrophysics and astronomy. During 1986 the SAAO has been involved in studies of the following: galaxies; celestial x-ray sources; magellanic clouds; pulsating variables; galactic structure; binary star phenomena; nebulae and interstellar matter; stellar astrophysics; open clusters; globular clusters, and solar systems

  10. Astronomical Research Using Virtual Observatories

    Directory of Open Access Journals (Sweden)

    M Tanaka

    2010-01-01

    Full Text Available The Virtual Observatory (VO) for Astronomy is a framework that empowers astronomical research by providing standard methods to find, access, and utilize astronomical data archives distributed around the world. VO projects around the world have been strenuously developing VO software tools and/or portal systems. Interoperability among VO projects has been achieved with the VO standard protocols defined by the International Virtual Observatory Alliance (IVOA). As a result, VO technologies are now used in obtaining astronomical research results from huge amounts of data. We describe typical examples of astronomical research enabled by the astronomical VO, and describe how VO technologies are used in the research.

  11. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests are carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermal acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability, and an image processing method is proposed to determine it. A power spectral density (PSD) graph is a good tool for vibration analysis from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. Flame features at different flow rates are compared and analyzed, and the selected features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
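
    A minimal sketch of the PSD-based part of the analysis described above, assuming a time series of mean flame luminosity taken from successive frames (the frame rate and the simple periodogram estimate are assumptions, not the paper's exact processing):

```python
import numpy as np

def flame_psd(luminosity, frame_rate_hz):
    """Periodogram-style power spectral density of a flame-luminosity time
    series; peaks indicate dominant oscillation frequencies, which can feed
    a stability assessment."""
    x = np.asarray(luminosity, dtype=float)
    x = x - x.mean()
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / frame_rate_hz)
    psd = (np.abs(spectrum) ** 2) / (frame_rate_hz * len(x))
    return freqs, psd
```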

  12. Processing Images of Craters for Spacecraft Navigation

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
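
    The numbered steps above lend themselves to a short illustration. The following hedged sketch covers steps 1-4 (edge detection, grouping, ellipse fitting) with OpenCV; the Canny thresholds, the contour-based grouping and the crude quality test are assumptions, and the paper's crater-rim selection logic and sub-pixel refinement are not reproduced.

    # Illustrative sketch of steps 1-4 of the pipeline described above. Thresholds
    # and the minimum edge-group size are assumptions, not values from the paper.
    import cv2
    import numpy as np

    def detect_candidate_craters(image, canny_lo=50, canny_hi=150, min_points=20):
        edges = cv2.Canny(image, canny_lo, canny_hi)                   # step 1: edge map
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        ellipses = []
        for contour in contours:                                       # step 3: one group per contour
            if len(contour) < min_points:
                continue
            (cx, cy), (major, minor), angle = cv2.fitEllipse(contour)  # step 4: ellipse fit
            if minor > 0 and major / minor < 3.0:                      # crude quality check (cf. step 6)
                ellipses.append(((cx, cy), (major, minor), angle))
        return ellipses

    if __name__ == "__main__":
        img = np.zeros((200, 200), dtype=np.uint8)
        cv2.ellipse(img, (100, 100), (40, 30), 15, 0, 360, 255, 2)     # synthetic crater rim
        print(detect_candidate_craters(img))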

  13. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

    The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by discussing the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Drawing on the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explanatory possibilities of Tversky's notion of diagrammatic space are explored as an element necessary for understanding the genesis of graphic behaviour and for defining a new construct, graphic intelligence.

  14. Feedback regulation of microscopes by image processing.

    Science.gov (United States)

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. © 2013 The Authors Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  15. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a ''Mexican hat'' or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as ''unsharp masking''. The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction
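
    For reference, the conventional difference-of-Gaussians (unsharp-masking-like) convolution that the abstract uses as its point of comparison can be sketched as follows; the authors' alternative, non-subtractive process is not reproduced here, and the two Gaussian widths are illustrative assumptions.

    # Minimal sketch of the conventional ''Mexican hat'' / difference-of-Gaussians
    # operation mentioned above. Sigma values are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_of_gaussians(image, sigma_center=1.0, sigma_surround=3.0):
        center = gaussian_filter(image.astype(float), sigma_center)      # narrow positive centre
        surround = gaussian_filter(image.astype(float), sigma_surround)  # broad negative surround
        return center - surround                                         # centre minus surround

    if __name__ == "__main__":
        img = np.zeros((64, 64))
        img[:, 32:] = 1.0                                                # a step edge
        response = difference_of_gaussians(img)
        print("edge response peak:", float(response.max()))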

  16. Old Star's "Rebirth" Gives Astronomers Surprises

    Science.gov (United States)

    2005-04-01

    Astronomers using the National Science Foundation's Very Large Array (VLA) radio telescope are taking advantage of a once-in-a-lifetime opportunity to watch an old star suddenly stir back into new activity after coming to the end of its normal life. Their surprising results have forced them to change their ideas of how such an old, white dwarf star can re-ignite its nuclear furnace for one final blast of energy. [Image caption: Radio/optical images of Sakurai's Object. The color image shows the nebula ejected thousands of years ago; contours indicate radio emission. The inset is a Hubble Space Telescope image, with contours indicating radio emission, showing just the central part of the region. Credit: Hajduk et al., NRAO/AUI/NSF, ESO, STScI, NASA.] Computer simulations had predicted a series of events that would follow such a re-ignition of fusion reactions, but the star didn't follow the script -- events moved 100 times more quickly than the simulations predicted. "We've now produced a new theoretical model of how this process works, and the VLA observations have provided the first evidence supporting our new model," said Albert Zijlstra, of the University of Manchester in the United Kingdom. Zijlstra and his colleagues presented their findings in the April 8 issue of the journal Science. The astronomers studied a star known as V4334 Sgr, in the constellation Sagittarius. It is better known as "Sakurai's Object," after Japanese amateur astronomer Yukio Sakurai, who discovered it on February 20, 1996, when it suddenly burst into new brightness. At first, astronomers thought the outburst was a common nova explosion, but further study showed that Sakurai's Object was anything but common. The star is an old white dwarf that had run out of hydrogen fuel for nuclear fusion reactions in its core. Astronomers believe that some such stars can undergo a final burst of fusion in a shell of helium that surrounds a core of heavier nuclei such as carbon and oxygen. However, the

  17. Proximity Glare Suppression for Astronomical Coronagraphy, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There is a critical need for stray light suppression in advanced astronomical telescopes and imaging systems. For optical instruments that are required to view...

  18. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor

    2016-01-01

    The method of speckle pattern processing based on digital image correlation is tested in the current work. Three of the most widely used formulas for the correlation coefficient are compared. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacements are used. The optimal size of the speckle pattern template used to determine the correlation, and hence the speckle pattern displacement, is also considered.
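
    A hedged sketch of the underlying digital image correlation step, matching a template from the reference speckle pattern against the deformed pattern with a zero-normalised cross-correlation coefficient (one of the commonly used formulas), might look as follows; the template size and search range are assumptions.

    # Sketch of integer-pixel digital image correlation with a zero-normalised
    # cross-correlation (ZNCC) coefficient. Template size and search radius are
    # illustrative assumptions.
    import numpy as np

    def zncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def find_displacement(reference, deformed, top_left, size=21, search=10):
        y0, x0 = top_left
        template = reference[y0:y0 + size, x0:x0 + size]
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-search, search + 1):                          # integer-pixel search
            for dx in range(-search, search + 1):
                window = deformed[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size]
                if window.shape != template.shape:
                    continue
                c = zncc(template, window)
                if c > best:
                    best, best_shift = c, (dy, dx)
        return best_shift, best

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ref = rng.random((100, 100))
        defo = np.roll(ref, shift=(3, -2), axis=(0, 1))                # known displacement (3, -2)
        print(find_displacement(ref, defo, top_left=(40, 40)))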

  19. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner structure and material distributions of various materials. The basic experimental arrangement consists of a neutron source, a collimator functioning as a beam-formatting assembly, and a plane position-sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two-dimensional image. This image contains information about the composition and structure of the sample interior, as a result of the interaction of the neutrons with the matter they penetrate. Due to rapid developments in detector and computer technology as well as in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the-art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten and, furthermore, to identify and develop two- and three-dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and good linearity. The detector output should be inherently digitized. The key components of the detector system selected on the basis of these requirements consist of a neutron-sensitive scintillator screen, a CCD camera and a mirror to reflect the light emitted by the scintillator to the CCD camera. This detector design makes it possible to place the camera out of the direct neutron beam. The whole assembly is placed in a light-shielded aluminum box. The camera is controlled by a

  1. Image Processing in Amateur Astro-Photography

    Indian Academy of Sciences (India)

    Anurag Garg. Classroom, Resonance – Journal of Science Education, Volume 15, Issue 2, February 2010, pp. 170-175. Permanent link: http://www.ias.ac.in/article/fulltext/reso/015/02/0170-0175

  2. Data processing of imperfectly coded images

    International Nuclear Information System (INIS)

    Hammersley, A.P.; Skinner, G.K.

    1984-01-01

    The theory of mask coding is well established for perfect coding systems, but imperfections in practical situations produce new data processing problems. The Spacelab 2 telescopes are fully coded systems, but some complications arise as parts of the detector are obscured by a strengthening cross. The effects of this sort of obscuration on image quality and ways of handling them will be discussed. (orig.)

  3. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick

    2010-01-01

    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  4. Process for making lyophilized radiographic imaging kit

    International Nuclear Information System (INIS)

    Grogg, T.W.; Bates, P.E.; Bugaj, J.E.

    1985-01-01

    A process for making a lyophilized composition useful for skeletal imaging whereby an aqueous solution containing an ascorbate, gentisate, or reductate stabilizer is contacted with tin metal or an alloy containing tin and, thereafter, lyophilized. Preferably, such compositions also comprise a tissue-specific carrier and a stannous compound. It is particularly preferred to incorporate stannous oxide as a coating on the tin metal

  5. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) per mm. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  6. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting, shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and processed into usable evidence. Visualization scientists have brought the processing of crime scene photos into the technology age through digital photographic image processing. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions; enhancement of the photographic capability helps solve a major problem with crime scene photos which, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  7. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14-inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format) viewed on a 625-line monitor were processed in 3 different ways: 1. standard display; 2. digital edge enhancement of the standard display; 3. inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities of the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases with mediastinal disease, and 17 with pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases respectively. The sensitivity of conventional radiography, when averaged over all findings, was better than that of the digital techniques (P<0.001). Differences in diagnostic accuracy measured by sensitivity and specificity between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that when using a 512x512 image format, the routine use of digital edge enhancement and tone reversal for digital chest radiographs is not warranted. (author). 12 refs.; 4 figs.; 2 tabs

  8. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    A fuzzy art neural network based color image processing and recognition scheme. ... color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of RGB color face images. Keywords: Color image processing, RGB, ...

  9. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given
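
    The view of the median filter as a complete sort of the k pixel values in a window, generalised to rank-order operators, can be illustrated with a straightforward (non-parallel) sketch; the window size and the edge padding are assumptions made for brevity.

    # Sketch of a 2-D rank-order filter: sort the k pixel values in each window
    # and pick a rank; the middle rank gives the median. Window size and padding
    # mode are illustrative assumptions.
    import numpy as np

    def rank_order_filter(image, window=3, rank=None):
        pad = window // 2
        if rank is None:
            rank = (window * window) // 2                              # middle rank = median
        padded = np.pad(image, pad, mode="edge")
        out = np.empty_like(image, dtype=padded.dtype)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                block = padded[y:y + window, x:x + window].ravel()
                out[y, x] = np.sort(block)[rank]                       # complete sort, pick the rank
        return out

    if __name__ == "__main__":
        img = np.zeros((5, 5), dtype=np.uint8)
        img[2, 2] = 255                                                # isolated impulse ("salt" noise)
        print(rank_order_filter(img))                                  # the median removes the impulse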

  10. Subband/transform functions for image processing

    Science.gov (United States)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
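
    A minimal sketch of the idea described above, a 2x2 Walsh-Hadamard block transform whose coefficients are permuted into four spatial-frequency subbands, is given below in NumPy rather than MATLAB; the block size of 2 and the orthonormal scaling are illustrative choices, not a reproduction of the original functions.

    # Sketch: 2x2 Walsh-Hadamard block transform followed by a permutation that
    # regroups same-position coefficients into four subbands. Illustrative only.
    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_subbands(image):
        h = hadamard(2) / np.sqrt(2.0)                                 # orthonormal 2x2 Walsh-Hadamard
        rows, cols = image.shape
        coeffs = np.empty_like(image, dtype=float)
        for y in range(0, rows, 2):
            for x in range(0, cols, 2):
                coeffs[y:y + 2, x:x + 2] = h @ image[y:y + 2, x:x + 2] @ h.T
        # Permute coefficients so that same-position coefficients form one subband each.
        ll = coeffs[0::2, 0::2]                                        # low/low: low-resolution image
        lh = coeffs[0::2, 1::2]                                        # detail along columns
        hl = coeffs[1::2, 0::2]                                        # detail along rows
        hh = coeffs[1::2, 1::2]                                        # diagonal detail
        return ll, lh, hl, hh

    if __name__ == "__main__":
        img = np.outer(np.arange(8.0), np.ones(8))                     # simple vertical gradient
        ll, lh, hl, hh = hadamard_subbands(img)
        print(ll.shape, float(abs(lh).sum()), float(abs(hl).sum()))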

  11. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it is emphasized the most. The other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In a second step the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the almost consistently good quality of the pictures and the possibilities of image processing.

  12. ARMA processing for NDE ultrasonic imaging

    International Nuclear Information System (INIS)

    Pao, Y.H.; El-Sherbini, A.

    1984-01-01

    This chapter describes a new method for acoustic image reconstruction for an active multiple sensor system operating in the reflection mode in the Fresnel region. The method is based on the use of an ARMA model for the reconstruction process. Algorithms for estimating the model parameters are presented and computer simulation results are shown. The AR coefficients are obtained independently of the MA coefficients. It is shown that when the ARMA reconstruction method is augmented with the multifrequency approach, it can provide a three-dimensional reconstructed image with high lateral and range resolutions, high signal to noise ratio and reduced sidelobe levels. The proposed ARMA reconstruction method results in high quality images and better performance than that obtainable with conventional methods. The advantages of the method are very high lateral resolution with a limited number of sensors, reduced sidelobes level, and high signal to noise ratio

  13. Astronomical Instruments in India

    Science.gov (United States)

    Sarma, Sreeramula Rajeswara

    The earliest astronomical instruments used in India were the gnomon and the water clock. In the early seventh century, Brahmagupta described ten types of instruments, which were adopted by all subsequent writers with minor modifications. Contact with Islamic astronomy in the second millennium AD led to a radical change. Sanskrit texts began to lay emphasis on the importance of observational instruments. Exclusive texts on instruments were composed. Islamic instruments like the astrolabe were adopted and some new types of instruments were developed. Production and use of these traditional instruments continued, along with the cultivation of traditional astronomy, up to the end of the nineteenth century.

  14. The Biographical Encyclopedia of Astronomers

    CERN Document Server

    Hockey, Thomas; Williams, Thomas R; Bracher, Katherine; Jarrell, Richard A; Marché, Jordan D; Ragep, F. Jamil; Palmeri, JoAnn; Bolt, Marvin

    2007-01-01

    The Biographical Encyclopedia of Astronomers is a unique and valuable resource for historians and astronomers alike. The two volumes include approximately 1550 biographical sketches on astronomers from antiquity to modern times. It is the collective work of about 400 authors edited by an editorial board of 9 historians and astronomers, and provides additional details on the nature of an entry and some summary statistics on the content of entries. This new reference provides biographical information on astronomers and cosmologists by utilizing contemporary historical scholarship. Individual entries vary from 100 to 1500 words, including the likes of the superluminaries such as Newton and Einstein, as well as lesser-known astronomers like Galileo's acolyte, Mario Guiducci. A comprehensive contributor index helps researchers to identify the authors of important scientific topics and treatises.

  15. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet, Gérard

    2006-01-01

    This title provides the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications.More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.

  16. Novel image processing approach to detect malaria

    Science.gov (United States)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels mean that intracellular activity has happened, indicating the presence of the malaria parasite inside the cell. Preliminary experimental results involving the analysis of red blood cells, either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
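
    A hedged sketch of the pixel-wise temporal analysis described above might look as follows: the temporal variation of each pixel is computed over a frame stack, and dark (cell) pixels with large variation are flagged. The dark-pixel and activity thresholds are illustrative assumptions.

    # Sketch: per-pixel temporal standard deviation over a frame stack, restricted
    # to dark (cell) pixels. Thresholds are illustrative assumptions.
    import numpy as np

    def activity_map(frames, dark_threshold=0.4, activity_threshold=0.05):
        """frames: (T, H, W) array of normalised grey-level frames."""
        mean_img = frames.mean(axis=0)
        temporal_std = frames.std(axis=0)                              # per-pixel temporal variation
        cell_mask = mean_img < dark_threshold                          # dark pixels = cell interior
        return cell_mask & (temporal_std > activity_threshold)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        frames = np.full((50, 32, 32), 0.8) + 0.01 * rng.standard_normal((50, 32, 32))
        frames[:, 10:20, 10:20] = 0.2                                  # a dark cell
        frames[:, 14:16, 14:16] += 0.2 * rng.standard_normal((50, 2, 2))  # fluctuating interior
        print("active pixels:", int(activity_map(frames).sum()))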

  17. Color Image Processing and Object Tracking System

    Science.gov (United States)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  18. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output which can easily be accessed by everyone via a web interface (with the browser of your choice) and can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  19. Digital image processing for thermal observation system

    Science.gov (United States)

    Yu, Wee K.; Song, In Seob; Yoon, Eon S.; Lee, Y. S.; Moon, M. G.; Hong, Seok-Min; Kim, J. K.

    1995-05-01

    This paper describes the digital image processing techniques of a thermal observation system, which is a serial/parallel scan and standard TV display type using a SPRITE (Signal PRocessing In The Element) detector. The designed digital electronics has two major signal processing stages: a high speed digital scan converter and an autoregressive (AR) filter. The digital scan converter is designed with analog-to-digital converter (ADC) and dual port RAM that can carry out reading and writing simultaneously, thus enabling compact scan conversion. The scan converter reformats the five parallel analog signals generated from the detector elements into serial digital signals compatible with RS-170 video rate. For the improvement of signal-to- noise ratio and compensation for the gamma effect of the monitor, we have implemented a real time 1st order AR filter that adopts frame averaging method. With the look-up-table (LUT) ROM that contains the frame averaging factors and the gamma coefficients, this digital filter performs the noise reduction and the gamma correction at the same time. This digital image processor has been proven to provide excellent image quality and superior detection capability for distant targets at night time.
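
    The first-order autoregressive frame-averaging filter and the LUT-based gamma correction described above can be sketched in software as follows; the averaging factor and the gamma value are illustrative assumptions, not the values used in the actual system.

    # Sketch: first-order AR (recursive frame-averaging) filter followed by a
    # gamma-correction look-up table. Alpha and gamma are illustrative assumptions.
    import numpy as np

    GAMMA_LUT = (255.0 * (np.arange(256) / 255.0) ** (1.0 / 2.2)).astype(np.uint8)  # gamma 2.2 LUT

    def ar_filter(frames, alpha=0.8):
        """First-order AR filter: y[n] = alpha*y[n-1] + (1-alpha)*x[n]."""
        average = frames[0].astype(float)
        for frame in frames[1:]:
            average = alpha * average + (1.0 - alpha) * frame
        return average

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        frames = 100.0 + 20.0 * rng.standard_normal((30, 16, 16))      # noisy, flat thermal scene
        filtered = ar_filter(frames)
        corrected = GAMMA_LUT[np.clip(filtered, 0, 255).astype(np.uint8)]  # LUT-based gamma correction
        # Temporal noise of the raw frames vs residual spatial noise after averaging.
        print("noise std before/after:", frames.std(axis=0).mean().round(1), filtered.std().round(1))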

  20. Phase Superposition Processing for Ultrasonic Imaging

    Science.gov (United States)

    Tao, L.; Ma, X. R.; Tian, H.; Guo, Z. X.

    1996-06-01

    In order to improve the resolution of defect reconstruction for non-destructive evaluation, a new phase superposition processing (PSP) method has been developed on the basis of a synthetic aperture focusing technique (SAFT). The proposed method synthesizes the magnitudes of phase-superposed delayed signal groups. A satisfactory image can be obtained by a simple algorithm processing time domain radio frequency signals directly. In this paper, the theory of PSP is introduced and some simulation and experimental results illustrating the advantage of PSP are given.
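
    As context for the phase superposition processing, a minimal delay-and-sum SAFT reconstruction, the baseline that PSP builds on, is sketched below; the PSP grouping of phase-superposed delayed signal groups itself is not reproduced, and the geometry, sampling rate and sound speed are illustrative assumptions.

    # Minimal delay-and-sum SAFT sketch on idealised synthetic A-scans. The sound
    # speed, sampling rate and aperture geometry are illustrative assumptions.
    import numpy as np

    C = 1500.0            # sound speed, m/s (assumed)
    FS = 20e6             # sampling rate, Hz (assumed)

    def saft(ascans, sensor_x, grid_x, grid_z):
        image = np.zeros((len(grid_z), len(grid_x)))
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                for s, xs in enumerate(sensor_x):                      # sum over the synthetic aperture
                    t = 2.0 * np.hypot(x - xs, z) / C                  # pulse-echo delay
                    sample = int(round(t * FS))
                    if sample < ascans.shape[1]:
                        image[iz, ix] += ascans[s, sample]
        return image

    if __name__ == "__main__":
        sensor_x = np.linspace(-0.01, 0.01, 16)                        # 16 positions over 20 mm
        target = (0.002, 0.015)                                        # point reflector at (x, z)
        ascans = np.zeros((16, 1024))
        for s, xs in enumerate(sensor_x):
            t = 2.0 * np.hypot(target[0] - xs, target[1]) / C
            ascans[s, int(round(t * FS))] = 1.0                        # idealised echo
        gx, gz = np.linspace(-0.005, 0.01, 31), np.linspace(0.01, 0.02, 21)
        img = saft(ascans, sensor_x, gx, gz)
        iz, ix = np.unravel_index(np.argmax(img), img.shape)
        print("peak near x=%.4f m, z=%.4f m" % (gx[ix], gz[iz]))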

  1. Grigor Narekatsi's astronomical insights

    Science.gov (United States)

    Poghosyan, Samvel

    2015-07-01

    What stands out in the solid system of Gr. Narekatsi's naturalistic views are his astronomical insights on the material nature of light, its high speed, and the Sun being composed of "material air". Especially surprising and fascinating are his views on stars and their clusters. What astronomers, including the great Armenian academician V. Ambartsumian (scattering of stellar associations), would understand and prove with much difficulty a thousand years later, Narekatsi predicted in the 10th century: "Stars appear and disappear untimely", "You who gather and scatter the speechless constellations, like a flock of sheep". Gr. Narekatsi's reformative views were manifested in all spheres of 10th-century social life; he was a reformer of church life, a great constructor of language, an innovator in literature and music, and a freethinker in philosophy and science. His ideology is a reflection of the 10th-century Armenian Renaissance. During the 9th-10th centuries, great masses of Armenians, forced to migrate to the Balkans, took with them and spread reformative ideas. The forefather of western science, which originated in the period of the Reformation, is considered to be the great philosopher Nicholas of Cusa. The study of Gr. Narekatsi's logic and naturalistic views enables us to claim that Gr. Narekatsi is the great-grandfather of European science.

  2. Getting Astronomers Involved in the IYA: Astronomer in the Classroom

    Science.gov (United States)

    Koenig, Kris

    2008-05-01

    The Astronomer in the Classroom program provides professional astronomers the opportunity to engage with 3rd-12th grade students across the nation in grade appropriate discussions of their recent research, and provides students with rich STEM content in a personalized forum, bringing greater access to scientific knowledge for underserved populations. 21st Century Learning and Interstellar Studios, the producer of the 400 Years of the Telescope documentary along with their educational partners, will provide the resources necessary to facilitate the Astronomer in the Classroom program, allowing students to interact with astronomers throughout the IYA2009. PROGRAM DESCRIPTION One of hundreds of astronomers will be available to interact with students via live webcast daily during Spring/Fall 2009. The astronomer for the day will conduct three 20-minute discussions (Grades 3-5 /6-8/9-12), beginning with a five-minute PowerPoint on their research or area of interest. The discussion will be followed by a question and answer period. The students will participate in real-time from their school computer(s) with the technology provided by 21st Century Learning. They will see and hear the astronomer on their screen, and pose questions from their keyboard. Teachers will choose from three daily sessions; 11:30 a.m., 12:00 p.m., 12:30 p.m. Eastern Time. This schedule overlaps all US time zones, and marginalizes bandwidth usage, preventing technological barriers to web access. The educational partners and astronomers will post materials online, providing easy access to information that will prepare teachers and students for the chosen discussion. The astronomers, invited to participate from the AAS and IAU, will receive a web cam shipment with instructions, a brief training and conductivity test, and prepaid postage for shipment of the web cam to the next astronomer on the list. The anticipated astronomer time required is 3-hours, not including the time to develop the PowerPoint.

  3. International astronomical remote present observation on IRC.

    Science.gov (United States)

    Ji, Kaifan; Cao, Wenda; Song, Qian

    On March 6 - 7, 1997, an international astronomical remote present observation (RPO) was made on Internet Relay Chat (IRC) for the first time. Seven groups in four countries (China, the United States, Canada and Great Britain) jointly used the 1-meter telescope of Yunnan Observatory by way of remote present observation. Within minutes, images were put on-line by FTP, and everyone was able to retrieve them by anonymous ftp and discuss them on IRC from widely separated sites.

  4. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA 2 by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  5. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA 2 by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  6. Aspects of optical and digital processing of scintigraphic images

    International Nuclear Information System (INIS)

    Platzer, H.; Wahl, F.; Hofer, J.; Galosi, H.; Langhammer, H.

    1981-01-01

    The Anger camera, which is able to represent three-dimensional radioactivity distributions as two-dimensional projections, has become a standard tool in diagnostic nuclear medicine. The two-dimensional projections are reviewed under the aspects of imaging and image processing. Coherent optical image processing and digital image processing are discussed in particular. (WU) [de]

  7. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2011-12-01

    This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user to easily modify images, while the Image Quality Adjustment Application makes it possible to create series of pictures of different quality. The Image Quality Assessment Application contains objective full-reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to compare subjectively the quality of distorted images with a reference image. The results of these subjective tests can be processed using the Results Processing Application. All applications provide a Graphical User Interface (GUI) for intuitive usage.

  8. An image processing system for digital chest X-ray images

    International Nuclear Information System (INIS)

    Cocklin, M.; Gourlay, A.; Jackson, P.; Kaye, G.; Miessler, M.; Kerr, I.; Lams, P.

    1984-01-01

    This paper investigates the requirements for image processing of digital chest X-ray images. These images are conventionally recorded on film and are characterised by large size, wide dynamic range and high resolution. X-ray detection systems are now becoming available for capturing these images directly in photoelectronic-digital form. The hardware and software facilities required for handling these images are described. These facilities include high resolution digital image displays, programmable video look up tables, image stores for image capture and processing and a full range of software tools for image manipulation. Examples are given of the applications of digital image processing techniques to this class of image. (Auth.)

  9. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez

    2013-01-01

    The main objective of the facial edema evaluation is to provide the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained by using image processing and the combination of different devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms from facial edema images at different times. Finally, temperature changes are measured using a thermographic camera. Some tests using controlled measurements of every variable are presented in this paper. The results allow the measurement system to be evaluated before its use in a real test using the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate the facial edema.

  10. Vector processing enhancements for real-time image analysis

    International Nuclear Information System (INIS)

    Shoaf, S.

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  11. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2012-01-01

    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
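
    A software emulation of the idea of an imprecise ("sloppy") addition is sketched below: the k least-significant bits are combined cheaply (here simply OR-ed) instead of being added exactly. This is only an illustration of the concept; the hardware simplification proposed in the paper may differ.

    # Emulation of an imprecise adder: exact addition of the upper bits, a cheap
    # OR of the lower k bits. The choice of k and the OR scheme are assumptions.
    import numpy as np

    def sloppy_add(a, b, k=4):
        mask_hi = ~np.uint16((1 << k) - 1)
        hi = (a & mask_hi) + (b & mask_hi)                             # exact addition of upper bits
        lo = (a | b) & np.uint16((1 << k) - 1)                         # cheap OR of the lower k bits
        return hi | lo

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        a = rng.integers(0, 200, size=(8, 8), dtype=np.uint16)
        b = rng.integers(0, 200, size=(8, 8), dtype=np.uint16)
        exact = a + b
        approx = sloppy_add(a, b)
        print("mean absolute error:", float(np.abs(exact.astype(int) - approx.astype(int)).mean()))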

  12. Development of the SOFIA Image Processing Tool

    Science.gov (United States)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5-meter infrared telescope capable of operating at altitudes of between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data, which contains information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  13. HYMOSS signal processing for pushbroom spectral imaging

    Science.gov (United States)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
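
    The two-point calibration described above amounts to a per-channel offset subtraction and linear gain correction, which can be illustrated in software as follows; on the actual focal plane the gain is set via the TIA feedback capacitance and the offsets are stored in an on-chip register, so this is only the equivalent arithmetic, with made-up detector parameters.

    # Sketch of two-point non-uniformity correction: derive per-channel offsets and
    # gains from two reference (flat-field) exposures, then apply them to raw data.
    import numpy as np

    def two_point_calibration(ref_low, ref_high, target_low=0.0, target_high=1.0):
        gain = (target_high - target_low) / (ref_high - ref_low)       # per-channel linear gain
        offset = ref_low                                               # per-channel offset
        return gain, offset

    def correct(raw, gain, offset, target_low=0.0):
        return gain * (raw - offset) + target_low                      # offset subtraction + gain

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        true_gain = 1.0 + 0.1 * rng.standard_normal(16)                # detector-to-detector gain spread
        true_offset = 5.0 * rng.standard_normal(16)                    # detector-to-detector offsets
        scene = lambda level: true_gain * level + true_offset          # simulated raw detector response
        gain, offset = two_point_calibration(scene(10.0), scene(100.0), 10.0, 100.0)
        print(np.allclose(correct(scene(55.0), gain, offset, 10.0), 55.0))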

  14. A New Image Processing and GIS Package

    Science.gov (United States)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed during the 1980's by NASA. It proved to be popular, influential and powerful in the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose, having been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990's, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably a strictly command-line interface. In order to support their research needs the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  15. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  16. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    Science.gov (United States)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over the pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration results. The proposed method has been evaluated using a Hopkinson bar loading system and has successfully restored the blurred images. It is also demonstrated in image processing applications that the de-blurring method can improve the accuracy and stability of digital image correlation measurements.
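
    A hedged sketch of the Lucy-Richardson deconvolution mentioned above is given below, assuming a motion-blur PSF that, in the paper, would be estimated from the loading dynamics during the exposure window; the PSF length and iteration count are illustrative, and no GPU acceleration is attempted here.

    # Sketch of Richardson-Lucy deconvolution with an assumed horizontal
    # motion-blur PSF. PSF length and iteration count are illustrative.
    import numpy as np
    from scipy.signal import convolve2d

    def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
        estimate = np.full_like(blurred, blurred.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = convolve2d(estimate, psf, mode="same", boundary="symm")
            ratio = blurred / (reblurred + eps)
            estimate *= convolve2d(ratio, psf_flipped, mode="same", boundary="symm")
        return estimate

    if __name__ == "__main__":
        sharp = np.zeros((32, 32))
        sharp[8:24, 16] = 1.0                                          # a thin vertical line
        psf = np.ones((1, 7)) / 7.0                                    # horizontal motion-blur PSF
        blurred = convolve2d(sharp, psf, mode="same", boundary="symm")
        restored = richardson_lucy(blurred, psf)
        print("peak before/after restoration:", round(blurred.max(), 2), round(restored.max(), 2))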

  17. ISO Results Presented at International Astronomical Union

    Science.gov (United States)

    1997-08-01

    Some of the work being presented is collected in the attached ESA Information Note N 25-97, ISO illuminates our cosmic ancestry. A set of six colour images illustrating various aspects have also been released and are available at http://www.estec.esa.nl/spdwww/iso1808.htm or in hard copy from ESA Public Relations Paris (fax:+33.1.5369.7690). These pictures cover: 1. Distant but powerful infrared galaxies 2. A scan across the milky way 3. Helix nebula: the shroud of a dead star 4. Supernova remnant Cassiopeia A 5. Trifid nebula: a dusty birthplace of stars 6. Precursors of stars and planets The International Astronomical Union provides a forum where astronomers from all over the world can develop astronomy in all its aspects through international co-operation. General Assemblies are held every three years. It is expected that over 1600 astronomers will attend this year's meeting, which is being held in Kyoto, Japan from 18-30 August. Further information on the meeting can be found at: www.tenmon.or.jp/iau97/ . ISO illuminates our cosmic ancestry The European Space Agency's Infrared Space Observatory, ISO, is unmatched in its ability to explore and analyse many of the universal processes that made our existence possible. We are children of the stars. Every atom in our bodies was created in cosmic space and delivered to the Sun's vicinity in time for the Earth's formation, during a ceaseless cycle of birth, death and rebirth among the stars. The most creative places in the sky are cool and dusty, and opaque even to the Hubble Space Telescope. Infrared rays penetrating the dust reveal to ISO hidden objects, and the atoms and molecules of cosmic chemistry. "ISO is reading Nature's recipe book," says Roger Bonnet, ESA's director of science. "As the world's only telescope capable of observing the Universe over a wide range of infrared wavelengths, ISO plays an indispensable part in astronomical discoveries that help to explain how we came to exist." This Information Note

  18. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard

    2015-01-01

    The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  19. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  20. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet, Gérard

    2014-01-01

    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates: - the

  1. Exploring Verbal, Visual and Schematic Learners' Static and Dynamic Mental Images of Scientific Species and Processes in Relation to Their Spatial Ability

    Science.gov (United States)

    Al-Balushi, Sulaiman M.; Coll, Richard Kevin

    2013-02-01

    The current study compared different learners' static and dynamic mental images of unseen scientific species and processes in relation to their spatial ability. Learners were classified into verbal, visual and schematic. Dynamic images were classified into: appearing/disappearing, linear-movement, and rotation. Two types of scientific entities and their related processes were investigated: astronomical and microscopic. The sample included 79 female students from Grades 9 and 10. For the purpose of the study, three instruments were used. The Mental Images by Guided Imagery instrument was designed to investigate participants' visualization of static and dynamic mental images. The Water-Level Task was adopted to estimate participants' spatial ability. The Learning Styles Inventory was used to classify participants into verbal, visual and schematic learners. The research findings suggest that schematic learners outperformed verbal and visual learners in their spatial ability. They also outperformed them in the vividness of their microscopic images, both micro-static and micro-dynamic, especially in the case of appearing/disappearing images. The differences were not significant in the case of astronomical images. The results also indicate that appearing/disappearing images received the lowest vividness scores for all three types of learners.

  2. Image-processing facility for Sandia National Laboratories, Albuquerque

    International Nuclear Information System (INIS)

    Ghiglia, D.C.

    1981-06-01

    An image processing facility is being developed at Sandia National Laboratories, Albuquerque, to support a wide and continually changing variety of image processing, signal processing, and pattern recognition tasks. This report addresses the hardware and software capabilities, current and planned image processing activities, development philosophy, and some of the factors leading to the development of this facility

  3. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images, the design of several special processing methods for container images, and some standard processing methods for two-dimensional digital images, including grey enhancement, pseudo-enhancement, spatial filtering, edge enhancement, geometric processing, etc. The paper describes how to carry out the above-mentioned processing under Windows 95 or Windows NT, and discusses ways to improve image processing speed on a microcomputer; good results were obtained

  4. Image processing to optimize wave energy converters

    Science.gov (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently are they becoming a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WECs. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum-energy subband of the 2D complex modulated lapped transform is used to determine the horizontal and vertical frequencies, which can subsequently be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
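
    For illustration only: the record describes locating the maximum-energy subband of a 2D complex modulated lapped transform to recover horizontal and vertical wave frequencies, then projecting them onto the WEC direction. The sketch below uses a plain 2D FFT peak as a simplification of that lapped transform; the image array, pixel spacing and WEC heading are hypothetical inputs, not taken from the paper.

        import numpy as np

        def dominant_wave_frequency(image, dx, heading_rad):
            """Estimate the dominant spatial frequency of ocean waves in an image patch.

            image       : 2D array of grey levels (e.g. a satellite image patch)
            dx          : ground sampling distance per pixel (metres)
            heading_rad : direction of the wave energy converter, radians from the x axis
            """
            img = image - image.mean()                                # remove the DC component
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2     # 2D power spectrum
            fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=dx))  # cycles per metre (rows)
            fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=dx))  # cycles per metre (cols)
            iy, ix = np.unravel_index(np.argmax(spec), spec.shape)    # peak-energy bin
            # Project the (horizontal, vertical) frequency pair onto the WEC heading,
            # i.e. the "simple trigonometric scaling" mentioned in the abstract.
            f_along = fx[ix] * np.cos(heading_rad) + fy[iy] * np.sin(heading_rad)
            return fx[ix], fy[iy], abs(f_along)

        # Synthetic 64 x 64 wave field with a known 50 m wavelength on 10 m pixels.
        x = np.arange(64) * 10.0
        xx, yy = np.meshgrid(x, x)
        waves = np.sin(2 * np.pi * (xx / 50.0))
        print(dominant_wave_frequency(waves, dx=10.0, heading_rad=0.0))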

  5. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only better constrain the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes
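
    As background only: attributes built from the amplitude and phase of the seismic wavelet are commonly derived from the analytic (complex) trace. The following is a minimal sketch, not the authors' workflow, of two such attributes computed trace-by-trace for a small cube; the (inline, crossline, time) array layout is an assumption.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_attributes(cube):
            """Compute two basic seismic attributes from a 3D cube.

            cube : ndarray shaped (inline, crossline, time_sample)
            Returns (envelope, phase), both with the same shape as `cube`.
            """
            analytic = hilbert(cube, axis=-1)   # analytic signal along the time axis
            envelope = np.abs(analytic)         # instantaneous amplitude (reflection strength)
            phase = np.angle(analytic)          # instantaneous phase in radians
            return envelope, phase

        # Tiny synthetic example: a 4 x 4 set of traces containing a 25 Hz wavelet.
        t = np.linspace(0, 1, 251)
        trace = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
        cube = np.tile(trace, (4, 4, 1))
        env, ph = instantaneous_attributes(cube)
        print(env.shape, ph.shape)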

  6. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code). Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code). Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples). Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  7. A novel image processing procedure for thermographic image analysis.

    Science.gov (United States)

    Matteoli, Sara; Coppini, Davide; Corvi, Andrea

    2018-03-14

    The imaging procedure shown in this paper has been developed for processing thermographic images, measuring the ocular surface temperature (OST) and visualizing ocular thermal maps in a fast, reliable, and reproducible way. The strength of this new method is that the measured OSTs do not depend on the ocular geometry; hence, it is possible to compare the ocular profiles belonging to the same subject (right and left eye) as well as to different populations. In this paper, the developed procedure is applied to two subjects' eyes: a healthy case and another affected by an ocular malignant lesion. However, the method has already been tested on a larger group of subjects for clinical purposes. To demonstrate the potential of this method, both intra- and inter-examiner repeatability were investigated in terms of coefficients of repeatability (COR). All OST indices showed repeatability with small intra-examiner (%COR 0.06-0.80) and inter-examiner variability (%COR 0.03-0.94). Measured OSTs and thermal maps clearly showed the clinical condition of the eyes investigated. The subject with no ocular pathology had no significant difference (P value = 0.25) between the OSTs of the right and left eye. In contrast, the eye affected by a malignant lesion was significantly warmer (P value < 0.0001) than the contralateral eye, in the region where the lesion was located. This new procedure demonstrated its reliability; it is characterized by simplicity, immediacy, modularity, and genericity. The latter point is particularly valuable, as thermography has been used in different clinical applications over the last decades. Graphical abstract: Ocular thermography and normalization process.

  8. The data analysis facilities that astronomers want

    International Nuclear Information System (INIS)

    Disney, M.

    1985-01-01

    This paper discusses the need for and importance of data analysis facilities and what astronomers ideally want. A brief survey is presented of what is available now, and some of the main deficiencies and problems with today's systems are discussed. The main sources of astronomical data are presented, including: optical photographic, optical TV/CCD, VLA, optical spectros, imaging x-ray satellite, and satellite planetary camera. Landmark discoveries are listed in a table, some of which include: our galaxy as an island, distances to stars, the H-R diagram (stellar structure), the size of our galaxy, and missing mass in clusters. The main problems at present are discussed, including lack of coordinated effort and central planning, differences in hardware, and measuring performance

  9. TPCs in high-energy astronomical polarimetry

    International Nuclear Information System (INIS)

    Black, J K

    2007-01-01

    High-energy astrophysics has yet to exploit the unique and important information that polarimetry could provide, largely due to the limited sensitivity of previously available polarimeters. In recent years, numerous efforts have been initiated to develop instruments with the sensitivity required for astronomical polarimetry over the 100 eV to 10 GeV band. Time projection chambers (TPCs), with their high-resolution event imaging capability, are an integral part of some of these efforts. After a brief overview of current astronomical polarimeter development efforts, the role of TPCs will be described in more detail. These include TPCs as photoelectric X-ray polarimeters and TPCs as components of polarization-sensitive Compton and pair-production telescopes

  10. Book Review: Scientific Writing for Young Astronomers

    Science.gov (United States)

    Uyttenhove, Jos

    2011-12-01

    EDP Sciences, Les Ulis, France. Part 1: 162 pp., € 35, ISBN 978-2-7598-0506-8. Part 2: 298 pp., € 60, ISBN 978-2-7598-0639-3. The journal Astronomy & Astrophysics (A&A) and EDP Sciences decided in 2007 to organize a School on the various aspects of scientific writing and publishing. In 2008 and 2009 Scientific Writing for Young Astronomers (SWYA) Schools were held in Blankenberge (B) under the direction of Christiaan Sterken (FWO-VUB). These two books (EAS publication series, Vol. 49 and 50) reflect the outcome of these Schools. Part 1 contains a set of contributions that discuss various aspects of scientific publication; it includes the A&A Editors' view of the peer review and publishing process. A very interesting short paper by S.R. Pottasch (Kapteyn Astronomical Institute, Groningen, and one of the first two Editors-in-Chief of A&A) deals with the history of the creation of the journal Astronomy & Astrophysics. Two papers by J. Adams et al. (Observatoire de Paris) discuss language editing, including a detailed guide for any non-native user of the English language. In 2002 the Board of Directors decided that all articles in A&A must be written in clear and correct English. Part 2 consists of three very extensive and elaborate papers by Christiaan Sterken, supplying guidelines to PhD students and postdoctoral fellows to help them compose scientific papers for different forums (journals, proceedings, theses, etc.). This part is of interest not only for young astronomers but is also very useful for scholars of all ages and disciplines. Paper I "The writing process" (60 pp.) deals with the preparation of manuscripts, with communicating with editors and referees, and with avoiding common errors. Delicate problems of authorship, refereeing, revising multi-authored papers, etc. are treated in 26 FAQs. Paper II "Communication by graphics" (120 pp.) is entirely dedicated to the important topic of communication with images, graphs, diagrams, tables etc. Design types of graphs

  11. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software, connected with digital image processing and analysis, in the Centre. In image processing, one resorts either to alteration of grey-level values, so as to enhance features in the image, or to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections etc. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
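
    To make the image-processing/image-analysis distinction concrete, here is a minimal sketch (not taken from the report) of the kind of image-analysis output described: a grey-level image goes in, numbers come out. The threshold value and the test image are arbitrary.

        import numpy as np

        def region_measurements(grey, threshold=128):
            """Segment a grey-level image by a simple threshold and return numbers, not an image."""
            mask = grey >= threshold                       # binary region of interest
            area = int(mask.sum())                         # number of pixels in the region
            # Crude perimeter estimate: region pixels with at least one background neighbour.
            padded = np.pad(mask, 1, constant_values=False)
            interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
            perimeter = int((mask & ~interior).sum())
            projections = (mask.sum(axis=0), mask.sum(axis=1))  # column and row projections
            return area, perimeter, projections

        grey = np.zeros((64, 64), dtype=np.uint8)
        grey[20:40, 10:50] = 200                           # a bright rectangle as test object
        area, perimeter, (proj_x, proj_y) = region_measurements(grey)
        print(area, perimeter)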

  12. Evaluation of breast cancer through mammographies and image digital processing

    International Nuclear Information System (INIS)

    Crestana, Rita H.S.

    1995-01-01

    The state of the art of image processing has provided important advances to many scientific investigation areas, particularly to the medical one. This work exploits the potential of using image processing techniques for analyzing breast phantoms and mammographies. 36 refs., 62 figs

  13. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images plays an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. In order to demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology to the processing of radiation images of large containers are given

  14. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  15. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring signal-to-noise ratio (SNR), a slit image for measuring MTF, and a white image for measuring NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, have an effect on the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in the same way. The results of this study could serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
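
    For context only (a commonly used relationship, not a result of this study), the three measured quantities are typically combined into the DQE as

        \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
                      \;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)}

    where \bar{q} is the incident photon fluence of the beam quality used (here RQA5) and NNPS is the noise power spectrum normalized by the square of the mean large-area signal.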

  16. Image processing system for videotape review

    International Nuclear Information System (INIS)

    Bettendroffer, E.

    1988-01-01

    In a nuclear plant, the areas in which fissile materials are stored or handled have to be monitored continuously. One method of surveillance is to record pictures from TV cameras at set time intervals on special video recorders. The 'time lapse' recorded tape is played back at normal speed and an inspector checks the pictures visually. This method requires much manpower, and an automated method would be useful. The present report describes an automatic reviewing method based on an image processing system; the system detects scene changes in the picture sequence and stores the reduced data set on a separate video tape. The resulting reduction of the inspector's reviewing time is important for surveillance data with few movements
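
    A minimal sketch of the kind of scene-change detection described (not the actual system): consecutive frames are compared, and only frames whose mean absolute difference from the last kept frame exceeds a threshold are retained. The file name and threshold are placeholders.

        import cv2
        import numpy as np

        def detect_scene_changes(path, threshold=8.0):
            """Return indices of frames that differ noticeably from the previously kept frame."""
            cap = cv2.VideoCapture(path)        # e.g. a digitized time-lapse recording
            kept, previous, index = [], None, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if previous is None or np.mean(cv2.absdiff(grey, previous)) > threshold:
                    kept.append(index)          # this frame would be copied to the review tape
                    previous = grey
                index += 1
            cap.release()
            return kept

        print(detect_scene_changes("surveillance.avi"))   # placeholder file name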

  17. Radar image processing for the AFIT anechoic chamber

    Science.gov (United States)

    Sanders, Brian K.

    1990-12-01

    The purpose of this study was to begin development of an Inverse Synthetic Aperture Radar imaging capability for the AFIT anechoic chamber. This began with an evaluation of the capabilities and limitations of the existing radar system and the chamber itself for this application. Then, after deciding on the image processing approach, software had to be written to collect the data necessary for image processing. This constituted the majority of this study, and resulted in a versatile, user-friendly program that automates the process of collecting data for high-resolution radar images. The program checks that the data to be collected will lead to a valid radar cross-section (RCS) image, but will allow data collection for general radar images. Finally, work on the image processing software was begun. This made use of commercially available software packages called PC-MATLAB and PRO-MATLAB. Further work is needed on the image processing software to generate calibrated images and to perform focusing.

  18. The Expansion of the Astronomical Photographic Data Archive at PARI

    Science.gov (United States)

    Cline, J. Donald; Barker, Thurburn; Castelaz, Michael

    2017-01-01

    A diverse set of photometric, astrometric, spectral and surface brightness data exists on decades of photographic glass plates. The Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI) was established in November 2007 and is dedicated to the task of collecting, restoring, preserving and storing astronomical photographic data; PARI continues to accept collections. APDA is also tasked with scanning each image and establishing a database of images that can be accessed via the Internet by the global community of scientists, researchers and students. APDA is a new type of astronomical observatory - one that harnesses analog data of the night sky taken for more than a century and makes that data available in a digital format. In 2016, APDA expanded from 50 collections with about 220,000 plates to more than 55 collections and more than 340,000 plates and films. These account for more than 30% of all astronomical photographic data in the United States. The largest of the new acquisitions are the astronomical photographic plates in the Yale University collection. We present details of the newly added collections and a review of other collections in APDA.

  19. Image processing and products for the Magellan mission to Venus

    Science.gov (United States)

    Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

    1992-01-01

    The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

  20. 3D Visualization of Astronomical Data with Blender

    Science.gov (United States)

    Kent, B. R.

    2015-09-01

    We present the innovative use of Blender, a 3D graphics package, for astronomical visualization. With a Python API and feature rich interface, Blender lends itself well to many 3D data visualization scenarios including data cube rendering, N-body simulations, catalog displays, and surface maps. We focus on the aspects of the software most useful to astronomers such as visual data exploration, applying data to Blender object constructs, and using graphics processing units (GPUs) for rendering. We share examples from both observational data and theoretical models to illustrate how the software can fit into an astronomer's toolkit.
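
    As a small illustration of the catalog-display use case mentioned above (a sketch only, intended to be run inside Blender's Python console; the catalog coordinates are made up and not from the paper):

        import bpy  # Blender's Python API; available only when run inside Blender

        # Hypothetical catalog: (x, y, z) positions already converted to scene units.
        catalog = [(0.0, 0.0, 0.0), (1.5, 2.0, -0.5), (-2.0, 1.0, 3.0)]

        for x, y, z in catalog:
            # Add one UV sphere per catalogued object at its position.
            bpy.ops.mesh.primitive_uv_sphere_add(location=(x, y, z))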

  1. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed followed by the reasons why XML was chosen as the format. Next it's shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
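
    Since the actual AISML schema is not reproduced in this record, the element and attribute names below are purely hypothetical; the sketch only illustrates the general idea of an XML description of an instrument being read programmatically with the Python standard library.

        import xml.etree.ElementTree as ET

        # Hypothetical AISML-like snippet; real AISML element names may differ.
        document = """
        <instrument name="ExampleSpectrograph">
          <component type="grating" linesPerMm="600"/>
          <component type="detector" pixels="2048x2048"/>
        </instrument>
        """

        root = ET.fromstring(document)
        print(root.get("name"))
        for component in root.findall("component"):
            print(component.get("type"), dict(component.attrib))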

  2. Using commercial amateur astronomical spectrographs

    CERN Document Server

    Hopkins, Jeffrey L

    2014-01-01

    Amateur astronomers interested in learning more about astronomical spectroscopy now have the guide they need. It provides detailed information about how to get started inexpensively with low-resolution spectroscopy, and then how to move on to more advanced  high-resolution spectroscopy. Uniquely, the instructions concentrate very much on the practical aspects of using commercially-available spectroscopes, rather than simply explaining how spectroscopes work. The book includes a clear explanation of the laboratory theory behind astronomical spectrographs, and goes on to extensively cover the practical application of astronomical spectroscopy in detail. Four popular and reasonably-priced commercially available diffraction grating spectrographs are used as examples. The first is a low-resolution transmission diffraction grating, the Star Analyser spectrograph. The second is an inexpensive fiber optic coupled bench spectrograph that can be used to learn more about spectroscopy. The third is a newcomer, the ALPY ...

  3. The South African astronomical observatory

    International Nuclear Information System (INIS)

    Feast, M.

    1985-01-01

    A few examples of the activities of the South African Astronomical Observatory are discussed. These include the study of stellar evolution, dust around stars, the determination of distances to galaxies, and collaboration with space experiments

  4. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact scanners and contactless ones, the latter being the most often used although they are expensive. This low-cost prototype works by vertically scanning the object using a fixed camera and a mobile horizontal laser light, which is deformed depending on the 3-dimensional surface of the solid. Digital image processing is used to analyse the deformation detected by the camera; this allows the 3D coordinates to be determined using triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan performed. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool, which can be used for many applications, mainly by engineering students.
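
    A minimal sketch of the triangulation step described above, under a deliberately simplified model (camera looking perpendicular to the scan plane, laser sheet at a known angle, lens distortion ignored); all numbers, the stripe-detection rule and the geometry are assumptions, not the authors' implementation.

        import numpy as np

        def stripe_depth(image, reference_col, mm_per_pixel, laser_angle_deg):
            """Recover a depth profile from one camera frame of a laser stripe.

            image          : 2D grey-level array containing a bright vertical laser stripe
            reference_col  : column where the stripe falls on a flat reference surface
            mm_per_pixel   : image scale in the scan plane
            laser_angle_deg: angle between the laser sheet and the camera axis
            """
            stripe_col = image.argmax(axis=1)                    # brightest pixel in each row
            shift_mm = (stripe_col - reference_col) * mm_per_pixel
            # Under this simplified geometry, a height h displaces the stripe by h * tan(angle).
            return shift_mm / np.tan(np.radians(laser_angle_deg))

        frame = np.zeros((100, 200))
        frame[:, 120] = 255                                      # synthetic stripe shifted by 20 px
        print(stripe_depth(frame, reference_col=100, mm_per_pixel=0.5, laser_angle_deg=45)[:3])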

  5. Hyperspectral image representation and processing with binary partition trees

    OpenAIRE

    Valero Valbuena, Silvia

    2012-01-01

    Extraordinary doctoral award, 2011-2012 academic year, ICT Engineering field. The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral image representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarc...

  6. Sixteenth Century Astronomical Telescopy

    Science.gov (United States)

    Usher, P. D.

    2001-12-01

    Ophelia in Shakespeare's Hamlet is named for the "moist star" which in mythology is the partner of Hamlet's royal Sun. Together the couple seem destined to rule on earth just as their celestial counterparts rule the heavens, but the tragedy is that they are afflicted, just as the Sun and Moon are blemished. In 1.3 Laertes lectures Ophelia on love and chastity, describing first Cytherean phases (crescent to gibbous) and then Lunar craters. Spots mar the Sun (1.1, 3.1). Also reported are Jupiter's Red Spot (3.4) and the resolution of the Milky Way into stars (2.2). These interpretations are well-founded and support the cosmic allegory. Observations must have been made with optical aid, probably the perspective glass of Leonard Digges, father of Thomas Digges. Notably absent from Hamlet is mention of the Galilean moons, owing perhaps to the narrow field-of-view of the telescope. That discovery is later celebrated in Cymbeline, published soon after Galileo's Sidereus Nuncius in 1610. In 5.4 of Cymbeline the four ghosts dance "in imitation of planetary motions" and at Jupiter's behest place a book on the chest of Posthumus Leonatus. His name identifies the Digges father and son as the source of data in Hamlet since Jupiter's moons were discovered after the deaths of Leonard ("leon+hart") and Thomas (the "lion's whelp"). Lines in 5.4 urge us not to read more into the book than is contained between its covers; this is understandable because Hamlet had already reported the other data in support of heliocentrism and the cosmic model discussed and depicted by Thomas Digges in 1576. I conclude therefore that astronomical telescopy began in England before the last quarter of the sixteenth century.

  7. Astrobiology: An astronomer's perspective

    International Nuclear Information System (INIS)

    Bergin, Edwin A.

    2014-01-01

    In this review we explore aspects of the field of astrobiology from an astronomical viewpoint. We therefore focus on the origin of life in the context of planetary formation, with additional emphasis on tracing the most abundant volatile elements, C, H, O, and N, that are used by life on Earth. We first explore the history of life on our planet and outline the current state of our knowledge regarding the delivery of the C, H, O, N elements to the Earth. We then discuss how astronomers track the gaseous and solid molecular carriers of these volatiles throughout the process of star and planet formation. It is now clear that the early stages of star formation foster the creation of water and simple organic molecules with enrichments of heavy isotopes. These molecules are found as ice coatings on the solid materials that represent the microscopic beginnings of terrestrial worlds. Based on the meteoritic and cometary record, the process of planet formation and the local environment lead to additional increases in organic complexity. The astronomical connections towards this stage are only now being directly made. Although the exact details are uncertain, it is likely that the birth process of stars and planets leads to terrestrial worlds being born with abundant water and organics on the surface

  8. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
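
    Not the patented pipeline itself, but a minimal sketch of the final step described (locating peaks in the energy of a post-processed image frame), using a plain local-maximum test; the energy array and threshold are placeholders.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def find_energy_peaks(energy, threshold, size=5):
            """Return (row, col) positions where `energy` is a local maximum above `threshold`."""
            local_max = energy == maximum_filter(energy, size=size)   # neighbourhood maxima
            rows, cols = np.nonzero(local_max & (energy > threshold))
            return list(zip(rows.tolist(), cols.tolist()))

        energy = np.random.default_rng(0).random((64, 64))
        energy[20, 30] += 5.0                                          # an injected strong "object"
        print(find_energy_peaks(energy, threshold=3.0))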

  9. The NPS Virtual Thermal Image Processing Model

    National Research Council Canada - National Science Library

    Lenter, Yucel

    2001-01-01

    ...). The MRTD is a standard performance measure for forward-looking infrared (FLIR) imaging systems. It takes into account thermal imaging system modeling concerns, such as modulation transfer functions...

  10. Amateur astronomers in support of observing campaigns

    Science.gov (United States)

    Yanamandra-Fisher, P.

    2014-07-01

    The Pro-Am Collaborative Astronomy (PACA) project evolved from the observational campaign of C/2012 S1 or C/ISON. The success of this paradigm shift in scientific research is now being applied to other comet observing campaigns. While PACA identifies a consistent collaborative approach to pro-am collaborations, given the volume of data generated for each campaign, new ways of rapid data analysis, mining, access, and storage are needed. Several interesting results emerged from the synergistic inclusion of both social media and amateur astronomers: - the establishment of a network of astronomers and related professionals that can be galvanized into action on short notice to support observing campaigns; - assist in various science investigations pertinent to the campaign; - provide an alert-sounding mechanism should the need arise; - immediate outreach and dissemination of results via our media/blogger members; - provide a forum for discussions between the imagers and modelers to help strategize the observing campaign for maximum benefit. In 2014, two new comet observing campaigns involving pro-am collaborations were identified: (1) C/2013 A1 (C/Siding Spring) and (2) 67P/Churyumov-Gerasimenko (CG). The evolving need for individual customized observing campaigns has been incorporated into the evolution of the PACA (Pro-Am Collaborative Astronomy) portal, which currently is focused on comets: from supporting observing campaigns for current comets, legacy data, historical comets; interconnected with social media and a set of shareable documents addressing observational strategies; consistent standards for data; data access, use, and storage, to align with the needs of professional observers. The integration of science, observations by professional and amateur astronomers, and various social media provides a dynamic and evolving collaborative partnership between professional and amateur astronomers. The recent observation of comet 67P, at a magnitude of 21.2, from Siding

  11. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  12. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  13. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing; the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious one; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with details about the list of documents required for the loan approval process [3]. The approaching customer can upload scanned copies of all required documents. All this interaction between customer and bank takes place using the E-LAP system.

  14. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  15. Comparative study of image restoration techniques in forensic image processing

    Science.gov (United States)

    Bijhold, Jurrien; Kuijper, Arjan; Westhuis, Jaap-Harm

    1997-02-01

    In this work we investigated the forensic applicability of some state-of-the-art image restoration techniques for digitized video-images and photographs: classical Wiener filtering, constrained maximum entropy, and some variants of constrained minimum total variation. Basic concepts and experimental results are discussed. Because all methods appeared to produce different results, a discussion is given of which method is the most suitable, depending on the image objects that are questioned, prior knowledge and type of blur and noise. Constrained minimum total variation methods produced the best results for test images with simulated noise and blur. In cases where images are the most substantial part of the evidence, constrained maximum entropy might be more suitable, because its theoretical basis predicts a restoration result that shows the most likely pixel values, given all the prior knowledge used during restoration.
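
    For reference, a minimal frequency-domain sketch of the first of the three approaches compared (classical Wiener filtering), assuming the blur kernel and a constant noise-to-signal ratio K are known; this is a generic textbook formulation, not the authors' implementation.

        import numpy as np

        def _pad_psf(psf, shape):
            """Embed a small PSF in a zero array of `shape`, centred at the FFT origin."""
            kernel = np.zeros(shape)
            kernel[:psf.shape[0], :psf.shape[1]] = psf
            return np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

        def wiener_deconvolve(blurred, psf, K=0.01):
            """Classical Wiener restoration given the point-spread function and NSR constant K."""
            H = np.fft.fft2(_pad_psf(psf, blurred.shape))
            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G      # Wiener estimate in frequency space
            return np.real(np.fft.ifft2(F_hat))

        # Toy check: blur an impulse with a 5 x 5 box filter, then restore it.
        img = np.zeros((64, 64)); img[32, 32] = 1.0
        psf = np.ones((5, 5)) / 25.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(_pad_psf(psf, img.shape))))
        print(np.unravel_index(wiener_deconvolve(blurred, psf, 1e-3).argmax(), img.shape))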

  16. AstroGrid-D: Enhancing Astronomic Science with Grid Technology

    OpenAIRE

    Enke, H.; Steinmetz, M.; Radke, T.; Reiser, A.; Röblitz, T.; Högqvist, M.

    2007-01-01

    We present AstroGrid-D, a project bringing together astronomers and experts in Grid technology to enhance astronomic science in many aspects. First, by sharing currently dispersed resources, scientists can calculate their models in more detail. Second, by developing new mechanisms to efficiently access and process existing datasets, scientific problems can be investigated that were until now impossible to solve. Third, by adopting Grid technology large instruments such as roboti...

  17. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

    Information on medical images is classified into two patterns: 1) off-line images on films - X-ray films, cell images, chromosome images etc.; 2) on-line images detected through sensors - RI images, ultrasonic images, thermograms etc. These images are divided into three characteristic types: two-dimensional, three-dimensional and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many kinds of images have been studied: RI, thermogram, x-ray film, x-ray TV image, cancer cell, blood cell, bacteria, chromosome, ultrasonics, and vascular images. Processing of RI images is useful and easy because of their digital displays. Software covers smoothing, restoration (iterative approximation), Fourier transformation, differentiation and subtraction. Images on stomach and chest x-ray films have been processed automatically utilizing computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells, and recently for blood cell classification, have also been developed. Acoustical holography imaging and moire topography have also been studied in Japan. (author)

  18. High-speed image processing and viewing system

    International Nuclear Information System (INIS)

    Saito, K.; Yamagishi, I.; Nomura, S.; Abe, T.; Smith, C.R.

    1986-01-01

    The authors achieved high-speed image processing using the computation power of 100 million arithmetic operations per second of a processor that is mainly used for CT image reconstruction from raw data. Image processing can be done in parallel with image reconstruction by time-sharing. The viewing system without frame buffer can directly access a 32-megabyte-wide main memory, and high-speed cine-display is easily achieved because image data transmission is not involved. These technical features allow real-time multiplanar image reconstruction which generates sagittal, coronal, or oblique images from axial images after a trackball-controlled cursor operation. There is no response delay, and the technique has proved effective for observing complex three-dimensional structures instantaneously. The processor is a general purpose one and can perform many other image processing routines
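
    As an aside, for an isotropic voxel volume the multiplanar reconstruction described reduces to re-slicing a 3D array of axial images; a minimal sketch (the cursor positions and volume are placeholders, not related to the described hardware):

        import numpy as np

        def multiplanar_views(axial_stack, row, col):
            """Extract coronal and sagittal views from a stack of axial slices.

            axial_stack : ndarray shaped (slice, row, column), assumed isotropic voxels
            row, col    : cursor position selecting which planes to cut
            """
            coronal = axial_stack[:, row, :]     # fixed row across all axial slices
            sagittal = axial_stack[:, :, col]    # fixed column across all axial slices
            return coronal, sagittal

        volume = np.random.rand(40, 256, 256)    # 40 synthetic axial slices
        coronal, sagittal = multiplanar_views(volume, row=128, col=96)
        print(coronal.shape, sagittal.shape)     # (40, 256) and (40, 256)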

  19. Fake currency detection using image processing

    Science.gov (United States)

    Agasti, Tushar; Burand, Gajanan; Wade, Pratik; Chitra, P.

    2017-11-01

    The advancement of color printing technology has increased the rate of fake currency note printing and duplication of notes on a very large scale. A few years back, printing could only be done in a print house, but now anyone can print a currency note with maximum accuracy using a simple laser printer. As a result, the circulation of fake notes in place of genuine ones has increased greatly. India has unfortunately been cursed with problems like corruption and black money, and counterfeiting of currency notes is also a big problem. This leads to the design of a system that detects fake currency notes in less time and in a more efficient manner. The proposed system gives an approach to verify Indian currency notes. Verification of a currency note is done using the concepts of image processing. This article describes the extraction of various features of Indian currency notes. MATLAB software is used to extract the features of the note. The proposed system has advantages like simplicity and high processing speed. The result will predict whether the currency note is fake or not.

  20. Multiscale image processing and antiscatter grids in digital radiography.

    Science.gov (United States)

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally the effects of exposure dose and of a using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid but whether this increase in quality is clinically significant is unknown.

  1. Microelectronic devices digital X-ray image processing method development

    Science.gov (United States)

    Staroverov, N. E.; Gryaznov, A. Yu; Kholopova, E. D.; Guk, K. K.

    2018-02-01

    In this paper, the development of a digital X-ray image processing method for microelectronic devices is described. The main steps of the algorithm are presented. The results of using the algorithm to improve printed circuit board images are shown

  2. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  3. Applications of Image Processing to Mine WarFare Sonar

    National Research Council Canada - National Science Library

    Perry, Stuart

    2000-01-01

    ...-like objects in various modes of sonar imagery. Image processing techniques to improve the performance of mine hunting operations using sector-scan, side-scan and the Acoustic Mine Imaging (AMI...

  4. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.
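
    As a small illustration of the channel-to-frame mapping described above (not the authors' pipeline): each spectral channel of a FITS cube is written out as one image frame suitable for timeline-based coding. The file name, output prefix, and the assumption that the cube is stored as (channel, y, x) in the primary HDU are all hypothetical.

        import numpy as np
        from astropy.io import fits
        import matplotlib.pyplot as plt

        def cube_to_frames(cube_path, prefix="frame"):
            """Write each spectral channel of a data cube as a PNG frame for timeline coding."""
            with fits.open(cube_path) as hdul:
                cube = np.squeeze(hdul[0].data)           # assume axes are (channel, y, x)
            vmin, vmax = np.nanpercentile(cube, [1, 99])  # one common stretch for all frames
            for i, channel in enumerate(cube):
                plt.imsave(f"{prefix}_{i:04d}.png", channel, vmin=vmin, vmax=vmax,
                           cmap="gray", origin="lower")
            return len(cube)

        print(cube_to_frames("hi_cube.fits"))             # placeholder file name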

  5. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  6. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  7. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang, Chengjun; Hua, Chunjian

    2013-01-01

    International audience; From the perspective of actual production, the paper presents advances in the application of image processing to fruit grading from several aspects, such as the processing precision and processing speed of image processing technology. Furthermore, combining the different algorithms for detecting size, shape, color and defects effectively, so as to reduce the complexity of each algorithm and achieve a balance between processing precision and processing speed, is key to au

  8. Image processing techniques for digital orthophotoquad production

    Science.gov (United States)

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  9. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for the mapping of a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is using dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  10. Detection of Plant Diseases Using Image Processing Tools -A Overview

    OpenAIRE

    Asha R. Patil; Varsha I. Pati; B. S. Panchbhai

    2017-01-01

    Analysis of plant diseases is the main goal for increasing the productivity of grain, fruits and vegetables. Detection of the proper disease of plants using image processing is possible through its different steps, such as image acquisition, image enhancement, segmentation, feature extraction, and classification. An RGB image is acquired and translated for processing, and diagnosis of plant disease is done by a CR-Network. Segmentation is used to determine which and how many areas are affected by disease, using k-clustering. Feature extraction...

  11. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement, and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture, and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained through more specialized equipment: mainframe computers and high-precision mechanical scanning digitisers. The procedures are intended as a previous stage for automatic defect detection

  12. The Application of Partial Differential Equations in Medical Image Processing

    OpenAIRE

    Mohammad Madadpour Inallou; Majid Pouladian; Bahman Mehri

    2013-01-01

    Mathematical models are the foundation of biomedical computing. The use of Partial Differential Equations (PDEs) in medical imaging is concerned with acquiring images of the body for research, diagnosis and treatment. Biomedical image processing has undergone a revolution in the past decade. Image processing has become an important component of contemporary science and technology and has been an interdisciplinary research field attracting expertise from applied mathematics, biology, c...
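    As background for the kind of PDE methods surveyed here, a classical example is Perona-Malik anisotropic diffusion for edge-preserving smoothing. The sketch below is illustrative only and is not one of the specific models treated in the paper; intensities are assumed to lie roughly in [0, 1].

    ```python
    # Minimal sketch of a classical PDE in image processing: Perona-Malik
    # anisotropic diffusion. Illustrative parameters; not the paper's model.
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # finite differences toward the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # edge-stopping conduction coefficients
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            # explicit Euler update of the nonlinear diffusion equation
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u
    ```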

  13. An Image Processing Algorithm Based On FMAT

    Science.gov (United States)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing original image exactly.

  14. What Lies Behind NSF Astronomer Demographics? Subjectivities of Women, Minorities and Foreign-born Astronomers within Meshworks of Big Science Astronomy

    Science.gov (United States)

    Guillen, Reynal; Gu, D.; Holbrook, J.; Murillo, L. F.; Traweek, S.

    2011-01-01

    Our current research focuses on the trajectory of scientists working with large-scale databases in astronomy, following them as they strategically build their careers, digital infrastructures, and make their epistemological commitments. We look specifically at how gender, ethnicity and nationality intersect in the process of subject formation in astronomy, as well as in the process of enrolling partners for the construction of instruments, design and implementation of large-scale databases. Work once figured as merely technical support, such as assembling data catalogs, or as graphic design, generating pleasing images for public support, has been repositioned at the core of the field. Some have argued that such databases enable a new kind of scientific inquiry based on data exploration, such as the "fourth paradigm" or "data-driven" science. Our preliminary findings based on oral history interviews and ethnography provide insights into meshworks of women, African-American, "Hispanic," Asian-American and foreign-born astronomers. Our preliminary data suggest African-American men are more successful in sustaining astronomy careers than Chicano and Asian-American men. A distinctive theme in our data is the glocal character of meshworks available to and created by foreign-born women astronomers working at US facilities. Other data show that the proportion of Asian to Asian American and foreign-born Latina/o to Chicana/o astronomers is approximately equal. Furthermore, Asians and Latinas/os are represented in significantly greater numbers than Asian Americans and Chicanas/os. Among professional astronomers in the US, each ethnic minority group numbers on the order of tens, not hundreds. Project support is provided by the NSF EAGER program to the University of California, Los Angeles under award 0956589.

  15. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  16. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimation of the MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal to noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in this way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
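    For reference, under an IEC 62220-1-style analysis the frequency-dependent DQE is conventionally obtained from the measured MTF, the noise power spectrum normalized by the squared mean signal (NNPS), and the incident photon fluence q of the RQA5 beam. The standard relation is quoted here only as background, not as a result of the study:

    ```latex
    % Background relation: presampling MTF, normalized NPS, incident fluence q
    \mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)}
    ```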

  17. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimation of the MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal to noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by using image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in this way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  18. Pre-Processes for Urban Areas Detection in SAR Images

    Science.gov (United States)

    Altay Açar, S.; Bayır, Ş.

    2017-11-01

    In this study, pre-processing steps for urban area detection in synthetic aperture radar (SAR) images are examined. These steps are image smoothing, thresholding and determination of white-coloured regions. Image smoothing is carried out to remove noise, then thresholding is applied to obtain a binary image. Finally, candidate urban areas are detected by determining the white-coloured regions. All pre-processing steps are applied using software developed by the authors. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The obtained results are shown visually.
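    A minimal sketch of this kind of pre-processing chain (smoothing, thresholding, bright-region extraction) is given below; the filter size, threshold rule and area cut-off are illustrative assumptions, not the settings used in the developed software.

    ```python
    # Illustrative pre-processing chain for bright-region (urban candidate) detection.
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def candidate_urban_regions(sar_image, min_area=200):
        smoothed = median_filter(sar_image.astype(float), size=5)  # speckle reduction
        binary = smoothed > threshold_otsu(smoothed)               # bright (white) pixels
        labels = label(binary)                                     # connected components
        # keep only reasonably large bright regions as urban candidates
        return [r for r in regionprops(labels) if r.area >= min_area]
    ```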

  19. Study on Self-adapting Processing Method in Radiant Image

    International Nuclear Information System (INIS)

    Shen Kuan; Cai Yufang; Duan Liming

    2009-01-01

    This paper describes the principle and characteristics of digital radiography. After analyzing the drawbacks of current processing methods and the specific properties of the collected signals, a new self-adapting method based on the wavelet transform is applied to process radiation images. The method maps subsections of the signal range to 0-255 to form several gray images, fuses these images into a new enhanced image, and then uses a nonlinear color assignment scheme to increase the image resolution. The experimental results show that the self-adapting processing method is better than traditional ones. (authors)

  20. Design of image processing operation machine by FPGA

    OpenAIRE

    山部, 選; 堀田, 厚生

    2007-01-01

    An image processing system with an FPGA has been developed. The system has the following functions: 1) BMP images taken with a digital camera and stored in a PC are transferred through a PCI bus to an SDRAM on a board containing the FPGA. 2) Two images are read from the SDRAM and processed by a background subtraction method in the FPGA, and the resulting image is stored back into the SDRAM. 3) The resulting image is transferred to the PC via the PCI bus and displayed. Processing time of background subtracti...
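    The background subtraction step itself can be sketched on the host side as follows; this Python illustration only mirrors what the FPGA computes, and the threshold value is an arbitrary assumption.

    ```python
    # Minimal sketch of background subtraction between two 8-bit grayscale images.
    import numpy as np

    def background_subtraction(frame, background, threshold=30):
        """Return a binary foreground mask (0 or 255)."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return (diff > threshold).astype(np.uint8) * 255
    ```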

  1. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that have made it possible to employ more sophisticated algorithms. This paper describes 'mainstream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.

  2. Choosing and using astronomical eyepieces

    CERN Document Server

    Paolini, William

    2013-01-01

    This valuable reference fills a number of needs in the field of astronomical eyepieces, including that of a buyer's guide, observer's field guide and technical desk reference. It documents the past market for eyepieces and its evolution right up to the present day. In addition to appealing to practical astronomers - and potentially saving them money - it is useful both as a historical reference and as a detailed review of the current market place for this bustling astronomical consumer product. What distinguishes this book from other publications on astronomy is the involvement of observers from all aspects of the astronomical community, and also the major manufacturers of equipment. It not only catalogs the technical aspects of the many modern eyepieces but also documents amateur observer reactions and impressions of their utility over the years, using many different eyepieces. Eyepieces are the most talked-about accessories and collectible items available to the amateur astronomer. No other item of equi...

  3. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

    In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to the anomalies present in parking spaces, such as uneven illumination, distorted slot lines and overlapping cars, present-day conventional algorithms have difficulty processing the images for accurate results. The proposed algorithm uses a combination of image pre-processing and false contour detection ...

  5. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  6. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department or a scientist and engineer who is doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  7. NEWVIEW: an interactive software environment for digital image processing

    Science.gov (United States)

    Lee, Ho J.; Vasudev, Bhaskaran; Lee, Daniel T. L.

    1990-06-01

    NEWVIEW is a highly interactive software environment designed especially for the manipulation, processing, analysis and display of digital image (two-dimensional) data. It is designed using the paradigm of an algorithm developer's workbench to support a wide variety of digital image processing applications, from remote sensing to desktop publishing. The system consists of a comprehensive library of image processing algorithms and a library of fast, novel, raster rendering routines for display manipulation. Combined with a mouse-driven, multi-window display manager, it provides a unified and versatile environment for the development and testing of image processing algorithms.

  8. Measurement of surface crack length using image processing technology

    International Nuclear Information System (INIS)

    Nahm, Seung Hoon; Kim, Si Cheon; Kim, Yong Il; Ryu, Dae Hyun

    2001-01-01

    The development of a new experimental method is required to easily observe the growth behavior of fatigue cracks. To satisfy this requirement, an image processing technique was introduced into fatigue testing. The length of surface fatigue cracks could be successfully measured by the image processing system. First, image data of the cracks were stored on the computer while the cyclic loading was interrupted. After testing, the crack length was determined using image processing software developed in-house. A block matching method was applied to the detection of surface fatigue cracks. By comparing the data measured by the image processing system with manual measurements made with a microscope, the effectiveness of the image processing system was established. If the proposed method is used to monitor and observe crack growth behavior automatically, the time and effort required for fatigue testing could be dramatically reduced.
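    Block matching is a generic technique; one simple realization is exhaustive normalized cross-correlation of a reference block against the image, sketched below. The window handling and search strategy are illustrative and are not the authors' implementation.

    ```python
    # Brute-force block matching by normalized cross-correlation (illustrative only).
    import numpy as np

    def match_block(image, block):
        """Return the (row, col) of the best match of `block` inside `image`."""
        bh, bw = block.shape
        b = (block - block.mean()) / (block.std() + 1e-9)
        best, best_pos = -np.inf, (0, 0)
        for r in range(image.shape[0] - bh + 1):
            for c in range(image.shape[1] - bw + 1):
                w = image[r:r + bh, c:c + bw]
                score = np.sum(b * (w - w.mean()) / (w.std() + 1e-9))
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos
    ```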

  9. ASTONISHING IMAGES: TV news and accountability processes

    Directory of Open Access Journals (Sweden)

    Braulio B. Neves

    2011-02-01

    Several scholars acknowledge the important role that journalism has in promoting accountability. Precisely how television news images contribute to triggering accountability dynamics, however, remains virtually unexplored. With this in mind, this study aims at a theoretical specification regarding the potential of video images to provoke public debates supporting accountability. Taking into consideration a case of extreme police violence - the “Favela Naval” event, which occurred in Diadema, São Paulo, Brazil - the authors analyze how TV news constructs the denunciation of police brutality and shapes controversies regarding attribution of responsibilities. Several dimensions of accountability are addressed in a range of competitive contexts that underscore the debate concerning the meaning of such scandalous images. This study challenges the common sense view that images degenerate the public sphere.

  10. The South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1989-01-01

    The research work discussed in this report covers a wide range, from work on the nearest stars to studies of the distant quasars, and the astronomers who have carried out this work come from universities and observatories spread around the world as well as from South African universities and from the South African Astronomical Observatory (SAAO) staff itself. A characteristic of much of this work has been its collaborative character. SAAO studies in 1989 included: supernova 1987A; galaxies; ground-based observations of celestial x-ray sources; the Magellanic Clouds; pulsating variables; galactic structure; binary star phenomena; the provision of photometric standards; nebulous matter; stellar astrophysics, and astrometry.

  11. Sliding mean edge estimation. [in digital image processing

    Science.gov (United States)

    Ford, G. E.

    1978-01-01

    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.
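    One simple way to realize the line-scan idea is to slide a split window along the scan and take the position where the difference of the two half-window means is largest; for a step edge in Gaussian noise this difference-of-means statistic is closely related to a likelihood-based test. The sketch below is a simplified stand-in, not the published algorithm.

    ```python
    # Illustrative sliding-mean edge locator for a 1-D line scan.
    import numpy as np

    def edge_location(scan, half_window=8):
        scan = np.asarray(scan, dtype=float)
        best, best_i = 0.0, None
        for i in range(half_window, len(scan) - half_window):
            left = scan[i - half_window:i].mean()
            right = scan[i:i + half_window].mean()
            if abs(right - left) > best:
                best, best_i = abs(right - left), i
        return best_i, best   # edge position and edge strength
    ```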

  12. Experiences with digital processing of images at INPE

    Science.gov (United States)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  13. Masses of Negative Multinomial Distributions: Application to Polarimetric Image Processing

    Directory of Open Access Journals (Sweden)

    Philippe Bernardoff

    2013-01-01

    This paper derives new closed-form expressions for the masses of negative multinomial distributions. These masses can be maximized to determine the maximum likelihood estimator of the unknown parameters. An application to polarimetric image processing is investigated. We study the maximum likelihood estimators of the polarization degree of polarimetric images using different combinations of images.

  14. A color image processing pipeline for digital microscope

    Science.gov (United States)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging difference between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a true color image, largely determines the quality of the microscopic image. The color pipeline for a digital microscope differs from those of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, preserve the colors of the observed objects, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm for each step in the color image processing pipeline is designed and optimized to obtain high-quality images and accommodate diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements of the medicine and biology fields very well. The major steps of the proposed color imaging pipeline include black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
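    A heavily simplified sketch of such a RAW-to-RGB pipeline (black level, white balance, color correction, gamma) is given below; the gains, matrix and gamma value are placeholder assumptions, not the paper's calibration.

    ```python
    # Toy RAW-to-RGB pipeline: black level, white balance, color correction, gamma.
    import numpy as np

    def simple_pipeline(raw_rgb, black_level=64, wb_gains=(2.0, 1.0, 1.5),
                        ccm=np.eye(3), gamma=2.2):
        """raw_rgb: float array of shape (H, W, 3) after demosaicing."""
        img = np.clip(raw_rgb - black_level, 0, None)      # black level adjustment
        img = img * np.asarray(wb_gains)                    # white balance
        img = img @ np.asarray(ccm).T                       # RGB color correction
        img = img / max(img.max(), 1e-9)                    # normalize to [0, 1]
        return np.clip(img, 0, 1) ** (1.0 / gamma)          # gamma correction
    ```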

  15. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    Science.gov (United States)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  16. Post-processing for statistical image analysis in light microscopy.

    Science.gov (United States)

    Cardullo, Richard A; Hinchcliffe, Edward H

    2013-01-01

    Image processing serves a number of important functions including noise reduction, contrast enhancement, and feature extraction. Whatever the final goal, an understanding of the nature of image acquisition and digitization, and of the subsequent mathematical manipulations of the digitized image, is essential. Here we discuss the basic mathematical and statistical processes that are routinely used by microscopists to produce high quality digital images and to extract key features of interest using a variety of extraction and thresholding tools. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Breast image pre-processing for mammographic tissue segmentation.

    Science.gov (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software language used for image processing was Macro-Assembler running under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskettes. Image processing operations such as display of the image on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection and the color histogram were completed in 1.6 sec to 67 sec, indicating that a commercial personal computer has the capability for routine clinical MRI processing. (author)
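    For illustration, the unsharp-mask contrast enhancement mentioned above can be written in a few lines; the blur width and gain are arbitrary example values.

    ```python
    # Minimal unsharp-mask sketch: boost high-frequency detail.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, sigma=2.0, amount=1.0):
        blurred = gaussian_filter(image.astype(float), sigma)
        return image + amount * (image - blurred)
    ```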

  19. Using Astronomical Photographs to Investigate Misconceptions about Galaxies and Spectra: Question Development for Clicker Use

    Science.gov (United States)

    Lee, Hyunju; Schneider, Stephen E.

    2015-01-01

    Many topics in introductory astronomy at the college or high-school level rely implicitly on using astronomical photographs and visual data in class. However, students bring many preconceptions to their understanding of these materials that ultimately lead to misconceptions, and research about students' interpretation of astronomical images has…

  20. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  1. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  2. Image processing technologies in nuclear power plant monitoring

    International Nuclear Information System (INIS)

    Kubo, Katsumi; Kanemoto, Shigeru; Shimada, Hideo.

    1995-01-01

    Various monitoring activities are carried out in nuclear power plants to ensure that the high reliability requirements of such plants are met. Inspection patrols by operators are important for detecting small anomalies in equipment. Vibration, temperature, and visual images are major forms of information used in equipment inspections. We are developing remote automatic inspection technologies comprising image sensing of equipment conditions and automatic recognition of the images. This paper shows examples of image processing technologies, such as equipment monitoring using three-dimensional graphic plant models and vibration/temperature image data, and intelligent image recognition technology for detecting steam leakage. (author)

  3. Application of morphological filtration to fast neutron image denoising processing

    International Nuclear Information System (INIS)

    Zhang Faqiang; China Academy of Engineering Physics, Mianyang; Yang Jianlun; Li Zhenghong

    2006-01-01

    The fast neutron radiography system is mainly composed of a scintillation fiber array and a scientific-grade optical CCD. Fast neutron images obtained by the system always suffer from serious noise disturbance. In order to weaken salt-and-pepper noise and Poisson noise, morphological filtration is applied to fast neutron image denoising. The results indicate that, for fast neutron images, morphological filtration with two-dimensional multi-directional structuring elements is effective at filtering the noise while preserving image details. (authors)
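    In the spirit of the abstract, a multi-directional morphological filter can be sketched as a grey-scale open-close with line-shaped structuring elements in several orientations, combined by a pixel-wise median; the element length and combination rule below are illustrative choices, not the authors' exact operator.

    ```python
    # Multi-directional grey-scale open-close filtering (illustrative sketch).
    import numpy as np
    from scipy.ndimage import grey_opening, grey_closing

    def line_footprint(length, direction):
        """Binary line structuring element: 'h', 'v', 'd1' (\\) or 'd2' (/)."""
        fp = np.zeros((length, length), dtype=bool)
        if direction == 'h':
            fp[length // 2, :] = True
        elif direction == 'v':
            fp[:, length // 2] = True
        elif direction == 'd1':
            np.fill_diagonal(fp, True)
        else:
            np.fill_diagonal(np.fliplr(fp), True)
        return fp

    def multidirectional_openclose(image, length=5):
        filtered = [grey_closing(grey_opening(image, footprint=line_footprint(length, d)),
                                 footprint=line_footprint(length, d))
                    for d in ('h', 'v', 'd1', 'd2')]
        return np.median(np.stack(filtered), axis=0)
    ```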

  4. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  5. Monitoring Car Drivers' Condition Using Image Processing

    Science.gov (United States)

    Adachi, Kazumasa; Yamamoto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by drowsiness of drivers. The system consists of the following three subsystems: an image capturing system with a pulsed infrared CCD camera, a system for detecting blinking waveform by the images using a neural network with which we can extract images of face and eye areas, and a system for measuring drivers' consciousness analyzing the waveform with a fuzzy inference technique and others. The third subsystem extracts three factors from the waveform first, and analyzed them with a statistical method, while our previous system used only one factor. Our experiments showed that the three-factor method we used this time was more effective to measure drivers' consciousness than the one-factor method we described in the previous paper. Moreover, the method is more suitable for fitting parameters of the system to each individual driver.

  6. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  7. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance realtime image processing platform, is systematized, and image flow example programs for running the MaxSparc system are studied and analyzed. The state of the art of Datacube system utilization is also reviewed. In the next phase, an advanced realtime image processing platform for robot vision applications will be developed. (author). 19 refs., 71 figs., 11 tabs.

  8. [A novel image processing and analysis system for medical images based on IDL language].

    Science.gov (United States)

    Tang, Min

    2009-08-01

    Medical image processing and analysis systems, which are of great value in medical research and clinical diagnosis, have been a focal field in recent years. Interactive Data Language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines; it has therefore become an ideal environment for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. A methodology is proposed to design a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and it has the advantages of extensive applicability, friendly interaction, convenient extension and favorable portability.

  9. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally the analysis of an image has to be made visually and the measurements are carried out manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. An image processing program can be used for the analysis of image texture and periodic structure by application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and analysis of microscopic images.
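    The Fourier-based periodicity analysis mentioned above amounts to locating the dominant peak of the 2-D power spectrum and reading off its spatial frequency and orientation; the sketch below is purely illustrative and is not the authors' software.

    ```python
    # Find the dominant periodicity and its orientation from the 2-D power spectrum.
    import numpy as np

    def dominant_periodicity(image):
        img = image.astype(float) - image.mean()
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(spectrum.shape) // 2
        spectrum[cy, cx] = 0                          # suppress the DC term
        ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
        fy, fx = (ky - cy) / img.shape[0], (kx - cx) / img.shape[1]
        period = 1.0 / np.hypot(fy, fx)               # pixels per cycle
        angle = np.degrees(np.arctan2(fy, fx))        # orientation of the spectral peak
        return period, angle
    ```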

  10. Study on the improvement of overall optical image quality via digital image processing

    Science.gov (United States)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

    This paper studies the improvement of overall optical image quality via Digital Image Processing (DIP) and compares the enhanced optical image with the unprocessed optical image. From the standpoint of the optical system, the improvement of image quality depends strongly on chromatic aberration and monochromatic aberration. However, overall image capture systems, such as cellphones and digital cameras, include not only the basic optical system but also many other factors, such as the electronic circuit system, transducer system, and so forth, whose quality can directly affect the image quality of the whole picture. Therefore, in this work Digital Image Processing technology is utilized to improve the overall image. Experiments show that the system modulation transfer function (MTF) obtained when the proposed DIP technology is applied to a comparatively poor optical system can be comparable to, and possibly even superior to, the system MTF derived from a good optical system.

  11. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward ...

  12. Some results of automatic processing of images

    International Nuclear Information System (INIS)

    Golenishchev, I.A.; Gracheva, T.N.; Khardikov, S.V.

    1975-01-01

    The problems of automatic deciphering of radiographic pictures, the purpose of which is to draw a conclusion concerning the quality of the inspected product on the basis of the defect images in the picture, are considered. The methods of defect image recognition are listed, and the algorithms and the class features of defects are described. The results of deciphering a small radiographic picture by means of the ''Minsk-22'' computer are presented. It is established that the sensitivity of the automatic deciphering method is close to that obtained with visual deciphering.

  13. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove....../reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the sub-pixel accuracy of the computed centroid location of an image object in each frame....... We can measure drifts down to approximately 10 nm in size and a drift-compensated image can therefore be reconstructed on a grid of the same size using the “Shift and Add” approach, leading to an image of identical size as the individual images. We have also reconstructed the image using a 3 fold larger...
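    A minimal sketch of the centroid-tracking and "Shift and Add" idea is given below; the reference-frame choice and interpolation are illustrative assumptions, not the authors' exact procedure.

    ```python
    # Centroid-based drift estimation and shift-and-add reconstruction (sketch).
    import numpy as np
    from scipy.ndimage import center_of_mass, shift

    def shift_and_add(frames):
        """frames: list of 2-D arrays, each containing one bright object."""
        ref = np.array(center_of_mass(frames[0]))          # sub-pixel centroid of frame 0
        accum = np.zeros_like(frames[0], dtype=float)
        for frame in frames:
            drift = np.array(center_of_mass(frame)) - ref   # (dy, dx) drift estimate
            accum += shift(frame.astype(float), -drift)     # re-register and accumulate
        return accum / len(frames)
    ```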

  14. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    Detection of optimum maturity of maize using image processing and artificial neural networks. ... The leaves of maize are also very good source of food for grazing livestock like cows, goats, sheep, etc. However, in Nigeria ... of maturity. Keywords: Maize, Maturity, CCD Camera, Image Processing, Artificial Neural Network ...

  15. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  16. REVIEW ON LUNG CANCER DETECTION USING IMAGE PROCESSING TECHNIQUE

    OpenAIRE

    Anam Quadri; Rashida Shujaee; Nishat Khan

    2016-01-01

    This paper presents a review of lung cancer detection methods using image processing. In recent years, image processing mechanisms have been used widely in several medical areas to improve early detection and treatment. The different procedures and design methodologies for this purpose are also discussed.

  17. Image Processing In Laser-Beam-Steering Subsystem

    Science.gov (United States)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  18. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  19. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

    The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes related optimized image algorithms and a parallel processing method based on the parallel DSP. Real-time performance for many image processing tasks can be achieved in this way.

  20. [Filing and processing systems of ultrasonic images in personal computers].

    Science.gov (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V

    1994-01-01

    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  1. Astronomical Spectroscopy A Short History

    Indian Academy of Sciences (India)

    Astronomical Spectroscopy: A Short History. J C Bhattacharyya. General Article, Resonance – Journal of Science Education, Volume 3, Issue 5, May 1998, pp. 24-29. Permanent link: http://www.ias.ac.in/article/fulltext/reso/003/05/0024-0029

  3. Focus on astronomical predictable events

    DEFF Research Database (Denmark)

    Jacobsen, Aase Roland

    2006-01-01

    At the Steno Museum Planetarium we have on many occasions used a countdown clock to focus attention on astronomical events. A countdown clock can lend topicality to predictable events, for example the Venus transit, the Opportunity landing on Mars and a solar eclipse. The movement of the clock attracts...

  4. Digital image processing for two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Young; Lim, Jae Yun [Cheju National University, Cheju (Korea, Republic of); No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1992-07-01

    A photographic method to measure the key parameters of two-phase flow is realized using a digital image processing technique. Images with 8-bit gray levels and 256 x 256 pixels are used to generate the image data, which are processed to obtain the parameters of two-phase flow. It is observed that the key parameters can be identified by processing data obtained with the digital image processing technique.

  5. Implementing full backtracking facilities for Prolog-based image processing

    Science.gov (United States)

    Jones, Andrew C.; Batchelor, Bruce G.

    1995-10-01

    PIP (Prolog image processing) is a system currently under development at UWCC, designed to support interactive image processing using the PROLOG programming language. In this paper we discuss Prolog-based image processing paradigms and present a meta-interpreter developed by the first author, designed to support an approach to image processing in PIP which is more in the spirit of Prolog than was previously possible. This meta-interpreter allows backtracking over image processing operations in a manner transparent to the programmer. Currently, for space-efficiency, the programmer needs to indicate over which operations the system may backtrack in a program; however, a number of extensions to the present work, including a more intelligent approach intended to obviate this need, are mentioned at the end of this paper, which the present meta-interpreter will provide a basis for investigating in the future.

  6. 6th International Image Processing and Communications Conference

    CERN Document Server

    2015-01-01

    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 6th International Image Processing and Communications Conference (IP&C 2014) held in Bydgoszcz, 10-12 September 2014. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered.

  7. 8th International Image Processing and Communications Conference

    CERN Document Server

    2017-01-01

    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 8th International Image Processing and Communications Conference (IP&C 2016) held in Bydgoszcz, Poland, 7-9 September 2016. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks. Applications in these areas are considered.

  8. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...... and compress the data. Algorithms for on-board processing of the image data are presented as well as evaluation of the performance. The main processing steps are event detection, image cropping and image compression. The on-board processing requirements are also evaluated....... of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed...

  9. The accidental astronomer

    Indian Academy of Sciences (India)

    Lawrence

    dized municipal schools which most middle class students attended. They had quite good facilities, well-equipped, spacious laborato- ries, good classrooms and huge playgrounds. We had some ... to make my Ph.D. a stressful, lengthy process, but I persisted and finally finished in about seven years time. I should mention ...

  10. Latin American astronomers and the International Astronomical Union

    Science.gov (United States)

    Torres-Peimbert, S.

    2017-07-01

    Selected aspects of the participation of Latin American astronomers in the International Astronomical Union are presented: membership, governing bodies, IAU meetings, and other activities. The Union was founded in 1919 with 7 initial member states, soon to be followed by Brazil. In 1921 Mexico joined, and in 1928 Argentina also formed part of the Union, while Chile joined in 1947. In 1961 Argentina, Brazil, Chile, Mexico and Venezuela were already member countries. At present (October 2016) 72 countries contribute financially to the Union. The Union lists 12,391 professional astronomers as individual members; of those, 692 astronomers work in Latin America and the Caribbean, from 13 member states (Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Honduras, Mexico, Panamá, Perú, Uruguay and Venezuela) as well as from Ecuador and Puerto Rico. This group comprises 5.58% of the total membership, a figure somewhat lower than the region's share of the world population, which is 8.6%. Of the Latin American members, 23.4% are women and 76.6% are men; the fraction of women is slightly higher than that of the whole membership of the Union, which is 16.9%. Regarding the governing bodies, there have been 2 Presidents of the Union (Jorge Sahade and Silvia Torres-Peimbert) and 7 Vice-Presidents (Guillermo Haro, Jorge Sahade, Manuel Peimbert, Claudio Anguita, Silvia Torres-Peimbert, Beatriz Barbuy, and Marta G. Rovira). The IAU meetings held in the region include 2 General Assemblies (the 1991 XXI GA took place in Buenos Aires, Argentina and the 2009 XXVIII GA in Rio de Janeiro, Brazil), 15 Regional Meetings (in Argentina, Brazil, Chile, Colombia, Mexico, Venezuela and Uruguay), 29 Symposia (in Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Peru and Mexico), 5 Colloquia (in Argentina and Mexico), 8 International Schools for Young Astronomers (in Argentina, Brazil, Cuba, Honduras and Mexico), and 11 projects sponsored by the Office of Astronomy

  11. The Potential of Deep Learning with Astronomical Data

    Science.gov (United States)

    Schafer, Chad

    2017-06-01

    Modern astronomical surveys yield massive catalogs of noisy high-dimensional objects, e.g., images, spectra, and light curves. Valuable information stored in individual objects can be lost when ad hoc approaches of feature extraction are used in an effort to build data sets amenable to established data analysis tools. Deep learning procedures provide a promising avenue to enabling the use of data in their raw form and hence allowing both for estimates of greater accuracy and for novel discoveries with greater confidence. This talk will give an overview of deep learning and its potential in astronomical applications.

  12. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review recent integral 3D display and image processing techniques for improving performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with a lenslet array, the authors present a 3D integral imaging display with focused mode using time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use electrical masks and the corresponding elemental image set. In this system, the authors can generate resolution-improved 3D images with n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of a 24 inch integral imaging system. The authors' method can be applied in practical applications. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed and the small amount of handled data. Originality/Value – In this paper, the authors review their recent methods related to integral 3D display and image processing techniques. Research type – general review.

  13. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.
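
    SlideJ itself is an ImageJ plugin driven by ImageJ macros; purely to illustrate the underlying idea (processing a gigapixel slide tile by tile so the full image never has to be held in memory), the following is a hedged Python sketch. The tile size, overlap and the mean-intensity operation are arbitrary placeholders, not SlideJ's interface.

        import numpy as np

        def process_in_tiles(get_tile, slide_shape, tile=2048, overlap=64, op=None):
            """Generic whole-slide tiling loop: fetch one tile at a time (plus a small
            overlap margin), apply a single-field operation, and collect per-tile results."""
            results = {}
            h, w = slide_shape
            for r in range(0, h, tile):
                for c in range(0, w, tile):
                    r0, c0 = max(r - overlap, 0), max(c - overlap, 0)
                    r1, c1 = min(r + tile + overlap, h), min(c + tile + overlap, w)
                    results[(r, c)] = op(get_tile(r0, c0, r1, c1))
            return results

        # Toy example: the "slide" is served from an in-memory array and the
        # single-field operation is just the mean intensity of the tile.
        slide = np.random.default_rng(0).integers(0, 256, size=(8192, 8192), dtype=np.uint8)
        res = process_in_tiles(lambda r0, c0, r1, c1: slide[r0:r1, c0:c1],
                               slide.shape, op=lambda t: float(t.mean()))
        print(len(res), "tiles processed")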

  14. Assessment of vessel diameters for MR brain angiography processed images

    Science.gov (United States)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography as a way of evaluating the status of the vessel segments (i.e. the existence of occlusions or of intracerebral vessels damaged by aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the best processing method that allows an accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and the Marr-Hildreth filter. Each original image and its processed images were analyzed with a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and for different processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment methods and the worst by the Marr-Hildreth filter.
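
    The measurements in the paper were made with ImageJ; the sketch below is only a rough Python analogue of two of the listed processing steps (Wiener filtering and contrast-limited adaptive histogram equalization) followed by a full-width-at-half-maximum diameter estimate on a synthetic vessel profile. The test image, the row index and the half-maximum criterion are assumptions made for illustration.

        import numpy as np
        from scipy.signal import wiener
        from skimage import exposure

        def vessel_diameter(profile):
            """Full width at half maximum of a 1-D intensity profile, in pixels."""
            half = profile.min() + 0.5 * (profile.max() - profile.min())
            return int(np.count_nonzero(profile >= half))

        # Synthetic angiography-like frame: a bright vessel about 6 px wide.
        rng = np.random.default_rng(0)
        image = 0.1 * rng.random((128, 128))
        image[:, 60:66] += 0.8

        filtered = wiener(image, mysize=5)                          # Wiener filtering
        clahe = exposure.equalize_adapthist(np.clip(image, 0, 1))   # contrast-limited AHE

        row = 64                                                    # fixed anatomic location
        for name, img in [("original", image), ("wiener", filtered), ("clahe", clahe)]:
            print(name, vessel_diameter(img[row]), "px")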

  15. The astronomical tables of Giovanni Bianchini

    CERN Document Server

    Chabas, Jose

    2009-01-01

    This book describes and analyses, for the first time, the astronomical tables of Giovanni Bianchini of Ferrara (d. after 1469), explains their context, inserts them into an astronomical tradition that began in Toledo, and addresses their diffusion.

  16. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a residual convolutional autoencoder (rCAE) and a residual convolutional neural network (rCNN). We changed the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input image. Network models were trained so that the quality of the output image produced from the unprocessed input stayed close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
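
    The exact layer counts and kernel sizes explored in the study are summarized above; the block below is a deliberately tiny PyTorch sketch of the residual autoencoder idea (one pooling/upsampling pair and a residual skip from input to output), not the authors' architecture, and the random tensors stand in for fluoroscopic frames and CLAHE targets.

        import torch
        import torch.nn as nn

        class TinyResidualCAE(nn.Module):
            """Residual convolutional autoencoder sketch: the network learns a
            correction that is added back onto the input frame."""
            def __init__(self, ch=16):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                              # one pooling stage
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                )
                self.decode = nn.Sequential(
                    nn.Upsample(scale_factor=2, mode="nearest"),  # one upsampling stage
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, 1, 3, padding=1),
                )

            def forward(self, x):            # x: (N, 1, H, W), H and W even
                return x + self.decode(self.encode(x))

        model = TinyResidualCAE()
        noisy_input = torch.rand(1, 1, 64, 64)     # stand-in noisy fluoroscopic frame
        clahe_target = torch.rand(1, 1, 64, 64)    # stand-in CLAHE-processed ground truth
        loss = nn.functional.mse_loss(model(noisy_input), clahe_target)
        loss.backward()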

  17. Algorithms for classification of astronomical object spectra

    Science.gov (United States)

    Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.

    2015-09-01

    Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectra decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks, utilising C++ and OpenCL methods together with a NoSQL database. We also implemented typical astronomical peak-detection methods for comparison with our previous hybrid methods implemented with CUDA.
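
    The abstract gives no code; as an illustration of the Gaussian peak-fitting step alone, here is a minimal SciPy sketch on a synthetic emission line. The wavelength grid, line parameters and initial guesses are assumptions, and the iron-template subtraction and OpenCL/NoSQL machinery are not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(wave, amp, center, sigma, offset):
            return amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2) + offset

        # Synthetic emission line on a flat continuum (stand-in for a spectrum after
        # the iron transitions have been subtracted).
        rng = np.random.default_rng(1)
        wave = np.linspace(4800.0, 5200.0, 400)
        flux = gaussian(wave, 3.0, 5007.0, 8.0, 1.0) + rng.normal(0, 0.1, wave.size)

        p0 = [flux.max() - np.median(flux), wave[np.argmax(flux)], 5.0, np.median(flux)]
        popt, pcov = curve_fit(gaussian, wave, flux, p0=p0)
        print("amplitude, center, sigma, offset:", popt)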

  18. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    Science.gov (United States)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
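
    The key property exploited above, that a pointwise logarithm turns multiplicative, signal-dependent speckle into additive, signal-independent noise, can be checked numerically. The gamma-distributed speckle model below is a common assumption chosen for illustration, not necessarily the statistics of the experimental images.

        import numpy as np

        rng = np.random.default_rng(2)
        clean = rng.uniform(0.2, 1.0, size=(256, 256))                     # stand-in scene
        speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)  # unit-mean multiplicative noise
        observed = clean * speckle

        # After the logarithm the noise term is additive and independent of the scene:
        # log(observed) = log(clean) + log(speckle)
        noise_term = np.log(observed) - np.log(clean)
        print("log-domain noise mean/std:", noise_term.mean(), noise_term.std())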

  19. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
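
    As an example of one acquisition-side correction mentioned above (flat-field correction), here is a minimal NumPy sketch; the dark frame, flat frame and shading model are synthetic stand-ins rather than values from the chapter.

        import numpy as np

        def flatfield_correct(raw, flat, dark):
            """Classic flat-field correction: subtract the dark signal and divide by
            the unit-mean illumination pattern estimated from the flat frame."""
            flat_net = flat.astype(float) - dark
            gain = flat_net / flat_net.mean()
            return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)

        # Toy example: illumination falling off toward the right edge of the field.
        y, x = np.mgrid[0:128, 0:128]
        illum = 1.0 - 0.4 * x / 127.0
        dark = np.full((128, 128), 5.0)
        raw = 100.0 * illum + dark              # uniform specimen seen through the shading
        flat = 200.0 * illum + dark             # flat-field exposure
        corrected = flatfield_correct(raw, flat, dark)
        print("residual shading (std):", corrected.std())   # close to zero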

  20. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
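
    The paper's exact block-selection rule is not reproduced here; the sketch below only shows the generic idea of ranking image blocks by Shannon entropy so that registration features are computed on informative blocks rather than on the whole satellite image. Block size, histogram bins and the number of retained blocks are illustrative choices.

        import numpy as np

        def block_entropy(image, block=64, bins=64):
            """Shannon entropy of each non-overlapping block of an 8-bit grayscale image."""
            entropies = {}
            h, w = image.shape
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    tile = image[r:r + block, c:c + block]
                    hist, _ = np.histogram(tile, bins=bins, range=(0, 255))
                    p = hist / hist.sum()
                    p = p[p > 0]
                    entropies[(r, c)] = float(-(p * np.log2(p)).sum())
            return entropies

        rng = np.random.default_rng(3)
        img = rng.integers(0, 256, size=(256, 256)).astype(float)
        ent = block_entropy(img)
        # Keep only the most informative blocks for the expensive matching step.
        selected = sorted(ent, key=ent.get, reverse=True)[:4]
        print("selected block corners:", selected)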

  1. The radon transform in digital image processing

    International Nuclear Information System (INIS)

    Beyerer, J.; Leon, F.P.

    2002-01-01

    The Radon transform develops an image according to a "system of functions" consisting of δ-lines. Thus, the Radon transform of a signal b(x_1, x_2) represents the set of all parallel projections of b(x_1, x_2). The Radon transform maps linear signal components onto pronounced extrema which can be detected very robustly. The coordinates of these peaks are reliable estimates of the geometrical parameters of collinear structures. In the paper, the increase of the signal-to-noise ratio for such structures in the Radon domain is discussed quantitatively. Moreover, the application of the Radon transform for image enhancement is demonstrated. Further topics concern its efficient implementation based on the central slice theorem, the connection with the Hough transform, as well as examples of more complex applications.
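
    The peak-detection idea described above can be tried directly with scikit-image's Radon transform; the toy image, the line position and the angular sampling below are illustrative assumptions, not taken from the paper.

        import numpy as np
        from skimage.transform import radon

        # Image containing a single bright straight line (a collinear structure).
        img = np.zeros((200, 200))
        img[100, :] = 1.0

        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(img, theta=theta, circle=False)

        # The line maps to a pronounced peak in the Radon domain; the peak's
        # coordinates encode the line's orientation and offset.
        rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        print("peak at projection angle (deg):", theta[theta_idx], "offset bin:", rho_idx)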

  2. Images of age: a reflexive process.

    Science.gov (United States)

    Blaikie, A

    1993-02-01

    Images of ageing are important for designers, as they reflect changing cultural trends among older people and the impact of older people on societies obsessed with youth and physical health. Stereotypes of ageing have been changing over the century, and this can be reviewed by studying postcards and photographs. Content analysis of individual magazines is a sharper instrument for reflecting the change over past decades. These studies lead to the identification of style as the current keyword, set within the historical trends of the rise of consumer culture and the acceptability of personal fulfillment in the Third Age. They also lead to a realization that we (designers and image makers) can influence social understandings and expectations of age, not only for older people but for ourselves in later life.

  3. Image Processing and Analysis in Geotechnical Investigation

    Czech Academy of Sciences Publication Activity Database

    Ščučka, Jiří; Martinec, Petr; Šňupárek, Richard; Veselý, V.

    2006-01-01

    Roč. 21, 3-4 (2006), s. 1-6 ISSN 0886-7798. [AITES-ITA 2006 World Tunnel Congress and ITA General Assembly /32./. Seoul, 22.04.2006-27.04.2006] Institutional research plan: CEZ:AV0Z30860518 Keywords: underground working face * digital photography * image analysis Subject RIV: DB - Geology; Mineralogy Impact factor: 0.278, year: 2006

  4. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is one of the intermediate-level tasks in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phase comprises image enhancement using a Gabor filter, image segmentation, and feature extraction. From the experimental results, we found our approach to be effective. The results show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
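
    The block below is a minimal scikit-image sketch of the marker-controlled watershed step on a toy image; the Gabor enhancement and feature-extraction stages of the paper are omitted, and the blob sizes, thresholds and peak distances are assumptions made for illustration.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        # Toy "CT slice": two bright regions stand in for candidate structures.
        rng = np.random.default_rng(4)
        img = np.zeros((128, 128))
        img[30:60, 30:60] = 1.0
        img[70:110, 70:110] = 1.0
        img += 0.1 * rng.standard_normal(img.shape)

        mask = img > threshold_otsu(img)                    # rough foreground mask

        # Marker-controlled watershed on the distance transform of the mask.
        distance = ndi.distance_transform_edt(mask)
        coords = peak_local_max(distance, min_distance=15, labels=mask)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        labels = watershed(-distance, markers, mask=mask)
        print("segmented regions:", labels.max())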

  5. A study of image processing in CR mammography

    International Nuclear Information System (INIS)

    Muramatsu, Yukio; Nawano, Shigeru; Anan, Mitsuhiro; Hanmura, Katsuhiro; Tanaka, Takashi; Matsue, Hirohito; Yamada, Tatsuya

    1990-01-01

    A CR image is produced by several kinds of image processing. Among these, gradation processing is the most important for producing images, and its type can be chosen according to the kind of X-ray examination. The 1.2G gradation processing is generally recommended for CR mammography by the CR maker, but it has not been fully studied whether 1.2G is ideal for imaging masses and/or calcifications. So, we compared images obtained with 1.2G gradation processing against those obtained with 1.0d gradation processing, with respect to imaging sensitivity, in 18 cases of breast cancer. Fifteen out of 18 cases had good mass images, and all 6 cases showed good calcification images under the latter condition, owing to its higher gradient (γ). So, we have concluded that 1.0d gradation processing is better than 1.2G for CR mammography. (author)

  6. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

    Image processing techniques, including pattern recognition techniques, play important roles in high-precision diagnosis and radiation therapy. The author reviews a symposium on medical image information, which was held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics from September 23rd to 25th. In this symposium, there were three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field, and the talks of the three invited speakers. (author)

  7. Data management in pattern recognition and image processing systems

    Science.gov (United States)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involves conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  8. Adaptive Lee enhancement algorithm for processing drop_quality image

    Science.gov (United States)

    Wang, Xinsai; He, Jing; Li, Xue; He, Ming

    2007-12-01

    In order to adjust image contrast and enhance the visual effect, image enhancement processing is needed in object recognition and automatic target tracking systems. There are many image enhancement methods, for example linear, piecewise-linear, nonlinear, histogram equalization, histogram normalization, power-law, and local gray-level statistics methods. By analyzing the gradient characteristics of the image gray levels, this paper presents an image enhancement method based on local statistics of the neighborhood gradient mean. Experimental results show the correctness and utility of this method.
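
    The authors' neighborhood-gradient statistic is not spelled out in the abstract; as a related, hedged illustration, the classic Lee filter below also blends each pixel with its local mean according to local statistics. The window size and noise variance are assumed values, not parameters from the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, win=7, noise_var=0.01):
            """Classic Lee filter: weight each pixel between its local mean and its
            own value by the ratio of local signal variance to noise variance."""
            img = img.astype(float)
            mean = uniform_filter(img, win)
            mean_sq = uniform_filter(img * img, win)
            var = np.clip(mean_sq - mean ** 2, 0.0, None)
            gain = var / (var + noise_var)
            return mean + gain * (img - mean)

        rng = np.random.default_rng(5)
        scene = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))       # smooth test scene
        noisy = scene + 0.1 * rng.standard_normal(scene.shape)
        print("mean abs error, noisy vs. filtered:",
              np.abs(noisy - scene).mean(), np.abs(lee_filter(noisy) - scene).mean())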

  9. Optical image processing by using a photorefractive spatial soliton waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bao-Lai, E-mail: liangbaolai@gmail.com [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Simmonds, Paul J. [Department of Physics and Micron School of Materials Science & Engineering, Boise State University, Boise, ID 83725 (United States); Wang, Zhao-Qi [Institute of Modern Optics, Nankai University, Tianjin 300071 (China)

    2017-04-04

    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons. Highlights: • A coherent 4-f system with the spatial soliton waveguide as spatial frequency filter. • Manipulate the spatial frequencies of an input optical image. • Achieve edge-enhancement and direct component enhancement operations of an optical image.

  10. Radiographic testing with image processing by linear filtration method

    International Nuclear Information System (INIS)

    Gusev, E.A.; Petushkov, A.A.; Sosnin, F.R.; Chochia, P.A.

    1984-01-01

    A study was made of the effect of discrete linear filtration of the upper spatial frequencies of a metal disk radiographic image on the visual interpretation of the data on disk defects presented in the image. The algorithm of discrete filtration is described. When the image is processed according to the described algorithm, the general background is levelled and the local contrast improves, but the information about the initial absolute value of optical density at each point of the image is lost. Therefore it is useful to analyze several image variants, both before filtration and after it.

  11. Sketching the moon an astronomical artist's guide

    CERN Document Server

    Handy, Richard; McCague, Thomas; Rix, Erika; Russell, Sally

    2012-01-01

    Soon after you begin studying the sky through your small telescope or binoculars, you will probably be encouraged by others to make sketches of what you see. Sketching is a time-honored tradition in amateur astronomy and dates back to the earliest times, when telescopes were invented. Even though we have lots of new imaging technologies nowadays, including astrophotography, most observers still use sketching to keep a record of what they see, make them better observers, and in hopes of perhaps contributing something to the body of scientific knowledge about the Moon. Some even sketch because it satisfies their artistic side. The Moon presents some unique challenges to the astronomer-artist, the Moon being so fond of tricks of the light. Sketching the Moon: An Astronomical Artist’s Guide, by five of the best lunar observer-artists working today, will guide you along your way and help you to achieve really high-quality sketches. All the major types of lunar features are covered, with a variety of sketching te...

  12. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a limited spatial resolution of the pixels, based on the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system pixel size, and the pixel size cannot be made smaller in the hardware, alternative means of resolution enhancement need to be considered. Oversampling with the usage of multiple displaced images, where the pixels of all images are mapped to a final resolution-enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines as the number of images taken grows: the sub-pixel resolution increases, but relative to a real reduction of imaging pixel sizes yielding a full-resolution image, the perceived resolution from the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is by sharpening the images in post-processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of some suitable post-processing filters, for digital X-ray images specifically. The results show that spatial domain filters and frequency domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increases by using sharpening
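
    A schematic simulation of the method's two stages, pixel mapping of displaced low-resolution exposures onto a finer grid followed by an unsharp-mask sharpening filter, is sketched below; the scene, the 2x2 oversampling factor and the filter parameters are assumptions rather than settings from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        rng = np.random.default_rng(6)
        truth = gaussian_filter(rng.standard_normal((128, 128)), 3)  # fine-detail scene
        n = 2                                                        # 2 x 2 oversampling

        # Simulate n*n low-resolution exposures, each displaced by a sub-pixel step
        # and binned down by the (coarse) detector pixels.
        low_res = {}
        for dy in range(n):
            for dx in range(n):
                moved = shift(truth, (-dy / n, -dx / n), order=1)
                low_res[(dy, dx)] = moved.reshape(64, n, 64, n).mean(axis=(1, 3))

        # Pixel mapping: interleave the displaced exposures on an n-times finer grid.
        hr = np.zeros((128, 128))
        for (dy, dx), img in low_res.items():
            hr[dy::n, dx::n] = img

        # Post-processing sharpening (unsharp mask) to counter the blurring
        # introduced by the oversampling itself.
        sharpened = hr + 1.0 * (hr - gaussian_filter(hr, sigma=1.0))
        print("RMS error before/after sharpening:",
              np.sqrt(((hr - truth) ** 2).mean()), np.sqrt(((sharpened - truth) ** 2).mean()))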

  13. The Research Tools of the Virtual Astronomical Observatory

    Science.gov (United States)

    Hanisch, Robert J.; Berriman, G. B.; Lazio, T. J.; Project, VAO

    2013-01-01

    Astronomy is being transformed by the vast quantities of data, models, and simulations that are becoming available to astronomers at an ever-accelerating rate. The U.S. Virtual Astronomical Observatory (VAO) has been funded to provide an operational facility that is intended to be a resource for discovery and access of data, and to provide science services that use these data. Over the course of the past year, the VAO has been developing and releasing for community use five science tools: 1) "Iris", for dynamically building and analyzing spectral energy distributions, 2) a web-based data discovery tool that allows astronomers to identify and retrieve catalog, image, and spectral data on sources of interest, 3) a scalable cross-comparison service that allows astronomers to conduct pair-wise positional matches between very large catalogs stored remotely as well as between remote and local catalogs, 4) time series tools that allow astronomers to compute periodograms of the public data held at the NASA Star and Exoplanet Database (NStED) and the Harvard Time Series Center, and 5) a VO-aware release of the Image Reduction and Analysis Facility (IRAF) that provides transparent access to VO-available data collections and is SAMP-enabled, so that IRAF users can easily use tools such as Aladin and Topcat in conjunction with IRAF tasks. Additional VAO services will be built to make it easy for researchers to provide access to their data in VO-compliant ways, to build VO-enabled custom applications in Python, and to respond generally to the growing size and complexity of astronomy data. Acknowledgements: The Virtual Astronomical Observatory (VAO) is managed by the VAO, LLC, a non-profit company established as a partnership of the Associated Universities, Inc. and the Association of Universities for Research in Astronomy, Inc. The VAO is sponsored by the National Science Foundation and the National Aeronautics and Space Administration.

  14. Detecting jaundice by using digital image processing

    Science.gov (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, babies or adults must be subjected to clinical exams such as the "serum bilirubin" test, which can cause trauma in patients. Jaundice often occurs in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma we propose to detect jaundice (icterus) in newborns or adults by using a painless method. By acquiring digital color images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
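
    The abstract names RGB features and a support vector machine; the scikit-learn sketch below shows that combination on synthetic stand-in data. The RGB means and class separation are invented for illustration and are not clinical values.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Hypothetical feature table: mean R, G, B of skin patches; 1 = jaundice, 0 = healthy.
        rng = np.random.default_rng(7)
        healthy = rng.normal([180, 140, 120], 10, size=(50, 3))
        icteric = rng.normal([190, 170, 90], 10, size=(50, 3))   # more yellow: G up, B down
        X = np.vstack([healthy, icteric])
        y = np.r_[np.zeros(50), np.ones(50)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))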

  15. An Image Processing Approach to Linguistic Translation

    Science.gov (United States)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for the translation of typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. Also, we built a special type of dictionary for the purpose of translation.

  16. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
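
    As a single-threaded point of reference (not the SPRINT implementation), filtered back-projection of a standard phantom can be written in a few lines with scikit-image; the phantom size and angular sampling below are illustrative.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        image = rescale(shepp_logan_phantom(), 0.5)          # 200 x 200 test phantom
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(image, theta=theta)                 # simulated projections
        reconstruction = iradon(sinogram, theta=theta)       # ramp-filtered back-projection
        print("reconstruction RMS error:",
              np.sqrt(((reconstruction - image) ** 2).mean()))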

  17. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

    A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose. The images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two simulated nodules were positioned over the lungs, and two simulated nodules over the mediastinum, of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs. The time spent viewing the full-size radiographs and the digital images was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant difference in the detectability of the simulated nodules was seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization and image processing was available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung. No difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab

  18. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  19. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not previously been demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms but at a lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  20. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  1. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A

    1997-01-01

    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  2. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image processing technology based method is more objective, fast and accurate, which is the vital development trend of yarn appearance evaluation.

  3. Fourth International Conference on Signal and Image Processing 2012

    CERN Document Server

    Kumar, S; ICSIP 2012

    2013-01-01

    The proceedings include cutting-edge research articles from the Fourth International Conference on Signal and Image Processing (ICSIP), which is organised by Dr. N.G.P. Institute of Technology, Kalapatti, Coimbatore. The Conference provides a forum for academia and industry to discuss and present the latest technological advances and research results in the fields of theoretical, experimental, and applied signal, image and video processing. The book provides the latest and most informative content from engineers and scientists in signal, image and video processing from around the world, which will benefit the future research community to work in a more cohesive and collaborative way.

  4. National Astronomical Observatory of Japan

    CERN Document Server

    Haubold, Hans J; UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan

    2010-01-01

    This book represents Volume II of the Proceedings of the UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan, Tokyo, 18 - 22 June, 2007. It covers two programme topics explored in this and past workshops of this nature: (i) non-extensive statistical mechanics as applicable to astrophysics, addressing q-distribution, fractional reaction and diffusion, and the reaction coefficient, as well as the Mittag-Leffler function and (ii) the TRIPOD concept, developed for astronomical telescope facilities. The companion publication, Volume I of the proceedings of this workshop, is a special issue in the journal Earth, Moon, and Planets, Volume 104, Numbers 1-4, April 2009.

  5. Astronomical optics and elasticity theory

    CERN Document Server

    Lemaitre, Gerard Rene

    2008-01-01

    Astronomical Optics and Elasticity Theory provides a very thorough and comprehensive account of what is known in this field. After an extensive introduction to optics and elasticity, the book discusses variable curvature and multimode deformable mirrors, as well as, in depth, active optics, its theory and applications. Further, optical design utilizing the Schmidt concept and various types of Schmidt correctors, as well as the elasticity theory of thin plates and shells are elaborated upon. Several active optics methods are developed for obtaining aberration corrected diffraction gratings. Further, a weakly conical shell theory of elasticity is elaborated for the aspherization of grazing incidence telescope mirrors. The very didactic and fairly easy-to-read presentation of the topic will enable PhD students and young researchers to actively participate in challenging astronomical optics and instrumentation projects.

  6. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  7. FunImageJ: a Lisp framework for scientific image processing.

    Science.gov (United States)

    Harrington, Kyle I S; Rueden, Curtis T; Eliceiri, Kevin W

    2018-03-01

    FunImageJ is a Lisp framework for scientific image processing built upon the ImageJ software ecosystem. The framework provides a natural functional-style for programming, while accounting for the performance requirements necessary in big data processing commonly encountered in biological image analysis. Freely available plugin to Fiji (http://fiji.sc/#download). Installation and use instructions available at http://imagej.net/FunImageJ. kharrington@uidaho.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  8. Anaximandro : astronomía

    OpenAIRE

    Alonso Bernal, Sonsoles

    2009-01-01

    Anaximander successfully speculated about the origin of the cosmos: an initial explosion whose condensed fragments formed the stars. He also worked as an empirical astronomer who observed with a helioscope the Sun's gaseous surface and its protuberances. He observed solar and lunar spectra of light, probably working with a certain set of pinhole cameras that he could optimize with fitted mirrors.

  9. The South African Astronomical Observatory

    International Nuclear Information System (INIS)

    1988-01-01

    The geographical position, climate and equipment at the South African Astronomical Observatory (SAAO), together with the enthusiasm and efforts of SAAO scientific and technical staff and of visiting scientists, have enabled the Observatory to make a major contribution to the fields of astrophysics and cosmology. During 1987 the SAAO has been involved in studies of the following: supernovae; galaxies, including Seyfert galaxies; celestial X-ray sources; the Magellanic Clouds; pulsating variables; galactic structure; binary star phenomena; nebulae; interstellar matter and stellar astrophysics

  10. Some aspects of image processing using foams

    Energy Technology Data Exchange (ETDEWEB)

    Tufaile, A., E-mail: tufaile@usp.br; Freire, M.V.; Tufaile, A.P.B.

    2014-08-28

    We have explored some concepts of chaotic dynamics and wave light transport in foams. Using some experiments, we have obtained the main features of light intensity distribution through foams. We are proposing a model for this phenomenon, based on the combination of two processes: a diffusive process and another one derived from chaotic dynamics. We have presented a short outline of the chaotic dynamics involving light scattering in foams. We also have studied the existence of caustics from scattering of light from foams, with typical patterns observed in the light diffraction in transparent films. The nonlinear geometry of the foam structure was explored in order to create optical elements, such as hyperbolic prisms and filters. Highlights: • We have obtained the light scattering in foams using experiments. • We model the light transport in foams using a chaotic dynamics and a diffusive process. • An optical filter based on foam is proposed.

  11. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  12. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  13. Image processing with a cellular nonlinear network

    Science.gov (United States)

    Morfu, S.

    2005-08-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. An image inversion can also be obtained without reconfiguration of the network, whereas a gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.

  14. Astronomical publications of Melbourne Observatory

    Science.gov (United States)

    Andropoulos, Jenny Ioanna

    2014-05-01

    During the second half of the 19th century and the first half of the 20th century, four well-equipped government observatories were maintained in Australia - in Melbourne, Sydney, Adelaide and Perth. These institutions conducted astronomical observations, often in the course of providing a local time service, and they also collected and collated meteorological data. As well, some of these observatories were involved at times in geodetic surveying, geomagnetic recording, gravity measurements, seismology, tide recording and physical standards, so the term "observatory" was being used in a rather broad sense! Despite the international renown that once applied to Williamstown and Melbourne Observatories, relatively little has been written by modern-day scholars about astronomical activities at these observatories. This research is intended to rectify this situation to some extent by gathering, cataloguing and analysing the published astronomical output of the two Observatories to see what contributions they made to science and society. It also compares their contributions with those of Sydney, Adelaide and Perth Observatories. Overall, Williamstown and Melbourne Observatories produced a prodigious amount of material on astronomy in scientific and technical journals, in reports and in newspapers. The other observatories more or less did likewise, so no observatory of those studied markedly outperformed the others in the long term, especially when account is taken of their relative resourcing in staff and equipment.

  15. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01


  17. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) responded positively regarding the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  18. A new method of SC image processing for confluence estimation.

    Science.gov (United States)

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a strong instrument in the estimation of confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images that are affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through the BM3D filter with an adaptive thresholding technique for improving the uneven background. This algorithm works well to provide a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing images of cells which already contain various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
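
    BM3D is not part of the standard scientific Python stack, so the hedged sketch below substitutes non-local-means denoising before an adaptive (local) threshold; it illustrates the denoise-then-threshold idea on a synthetic frame with uneven background, not the authors' algorithm, and all parameters are assumptions.

        import numpy as np
        from skimage import filters, restoration

        def confluence_estimate(frame):
            """Denoise, threshold against the local background, and return the
            fraction of the field classified as cell-covered."""
            den = restoration.denoise_nl_means(frame, h=0.05)    # stand-in for BM3D
            local_thr = filters.threshold_local(den, block_size=151, offset=-0.05)
            return float((den > local_thr).mean())

        # Toy frame: two bright cell patches on an uneven, noisy background.
        rng = np.random.default_rng(8)
        y, x = np.mgrid[0:256, 0:256]
        frame = 0.3 + 0.2 * x / 255.0 + 0.05 * rng.standard_normal((256, 256))
        frame[60:120, 60:120] += 0.3
        frame[150:220, 140:230] += 0.3
        print("estimated confluency: %.1f%%" % (100 * confluence_estimate(frame)))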

  19. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For the measurement, tests are conducted in the laboratory, which take many hours to give the final measurement. Digital image processing and analysis offers a better alternative not only to monitor and characterize the current state of activated sludge but also to predict the future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching, which have not previously been used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and the correlation of these parameters with regard to monitoring and prediction of activated sludge are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  20. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    The pyramid algorithm is potentially a powerful tool for advanced television image processing and for pattern recognition. An attempt is made to design and develop both hardware and software for a system which performs decomposition and reconstruction of digitized images by implementing the Burt pyramid algorithm.
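
    A minimal NumPy/SciPy sketch of a Burt-style pyramid, decomposition followed by reconstruction and an RMS-error check, is given below; it is a generic illustration of the algorithm, not the hardware/software system described in the paper, and the smoothing filter and level count are assumed.

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def build_pyramid(img, levels=3):
            """Burt-style decomposition: each level keeps the detail lost when the
            image is blurred and downsampled by 2; the last entry is the coarse residue."""
            pyramid, current = [], img.astype(float)
            for _ in range(levels):
                down = gaussian_filter(current, sigma=1.0)[::2, ::2]
                up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
                pyramid.append(current - up)       # band-pass (Laplacian) level
                current = down
            pyramid.append(current)                # low-pass residue
            return pyramid

        def reconstruct(pyramid):
            current = pyramid[-1]
            for detail in reversed(pyramid[:-1]):
                up = zoom(current, 2, order=1)[:detail.shape[0], :detail.shape[1]]
                current = up + detail
            return current

        rng = np.random.default_rng(9)
        img = gaussian_filter(rng.standard_normal((256, 256)), 2)
        rec = reconstruct(build_pyramid(img))
        print("RMS error between original and reconstructed image:",
              np.sqrt(((rec - img) ** 2).mean()))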

  1. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    processed levels. In this paper results are presented in terms of RMS error between the original and expanded images. Only still images are considered, and the hardware is ... the signal information in a particular band is computed using a small subregion of the ... A straightforward approach that uses standard, digital, ...

  2. Image Processing : An Enabler for Future EO System Concepts

    NARCIS (Netherlands)

    Schutte, K.; Schwering, P.B.W.

    2010-01-01

    The current state of the art in electro-optics provides systems with high image quality at associated prices, and less expensive systems with correspondingly lower performance. This keynote will expound how image processing enables obtaining high-quality imagery while utilizing affordable system

  3. Detection of Optimum Maturity of Maize Using Image Processing

    African Journals Online (AJOL)

    Ayuba et al.

    2017-04-13

    A CCD camera was used for image acquisition of the different green colorations of the maize leaves at maturity. Different color features were extracted with the image processing system (MATLAB) and used as inputs to the artificial neural network that classifies different levels of maturity. Keywords: Maize ...

  4. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

    of similar quality, despite requiring significantly fewer model evaluations. This process is illustrated using a user-driven saliency generation problem. Here, pairwise image interest comparisons are used to infer underlying image interest and a Gaussian...

  5. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen B.; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove...
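
    The abstract does not state which drift-estimation method the authors use; one common approach is sub-pixel phase cross-correlation between frames, sketched below with scikit-image on a simulated drifting fluorescent spot. The frame contents and the drift vector are invented for illustration.

        import numpy as np
        from scipy.ndimage import fourier_shift, gaussian_filter
        from skimage.registration import phase_cross_correlation

        # Simulate a bright fluorescent object drifting between two frames.
        rng = np.random.default_rng(10)
        frame0 = np.zeros((128, 128))
        frame0[60:68, 60:68] = 1.0
        frame0 = gaussian_filter(frame0, 2) + 0.01 * rng.standard_normal(frame0.shape)
        drift = (3.4, -2.1)
        frame1 = np.real(np.fft.ifftn(fourier_shift(np.fft.fftn(frame0), drift)))

        # Estimate the relative shift to sub-pixel precision and undo it.
        est, err, _ = phase_cross_correlation(frame0, frame1, upsample_factor=10)
        corrected = np.real(np.fft.ifftn(fourier_shift(np.fft.fftn(frame1), est)))
        print("estimated shift:", est,
              "residual after correction:", np.abs(corrected - frame0).mean())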

  6. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Curve (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual tasks of non-radiologists, but only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.

  7. Characterization of Periodically Poled Nonlinear Materials Using Digital Image Processing

    National Research Council Canada - National Science Library

    Alverson, James R

    2008-01-01

    .... A new approach based on image processing across an entire z+ or z- surface of a poled crystal allows for better quantification of the underlying domain structure and directly relates to device performance...
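
    As an illustration of quantifying a periodically poled domain pattern from a surface image (an assumed approach, since the report's method is only summarized above), the sketch below thresholds the image and measures domain widths and duty cycle along the poling direction.

```python
# Illustrative sketch (assumption): quantify a periodically poled domain
# pattern by thresholding a surface image and measuring domain widths and
# duty cycle along the poling direction.
import numpy as np

def domain_statistics(surface: np.ndarray) -> dict:
    """Mean widths of inverted/non-inverted domains along axis 1 (pixels)."""
    binary = surface > surface.mean()              # simple global threshold
    profile = binary.mean(axis=0) > 0.5            # column-wise majority vote
    # Run-length encode the profile to get individual domain widths.
    edges = np.flatnonzero(np.diff(profile.astype(int))) + 1
    runs = np.split(profile, edges)
    widths = np.array([len(r) for r in runs])
    states = np.array([bool(r[0]) for r in runs])
    return {
        "mean_width_plus": widths[states].mean(),
        "mean_width_minus": widths[~states].mean(),
        "duty_cycle": widths[states].sum() / widths.sum(),
    }

# Toy image: ideal 50 % duty-cycle grating with an 8-pixel period plus noise.
x = np.arange(256)
pattern = ((x // 4) % 2).astype(float)
image = np.tile(pattern, (64, 1)) + np.random.default_rng(3).normal(0, 0.1, (64, 256))
print(domain_statistics(image))
```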

  8. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms, and methods as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  9. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...
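
    As a toy illustration of evolutionary computation applied to an image processing task (not one of the book's specific case studies), the sketch below evolves a single binarization threshold that maximizes Otsu's between-class variance.

```python
# Toy example of evolutionary computation applied to image processing:
# a (mu + lambda)-style evolution of a single binarization threshold that
# maximizes Otsu's between-class variance. Illustrative only; the book
# covers far richer EC algorithms and applications.
import numpy as np

def between_class_variance(image: np.ndarray, t: float) -> float:
    fg, bg = image[image >= t], image[image < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def evolve_threshold(image: np.ndarray, pop_size: int = 20,
                     generations: int = 40, seed: int = 4) -> float:
    rng = np.random.default_rng(seed)
    population = rng.uniform(image.min(), image.max(), size=pop_size)
    for _ in range(generations):
        children = population + rng.normal(0, 5.0, size=pop_size)   # mutation
        candidates = np.concatenate([population, children])
        fitness = np.array([between_class_variance(image, t) for t in candidates])
        population = candidates[np.argsort(fitness)[-pop_size:]]    # selection
    return float(population[-1])

# Synthetic bimodal image: dark background (~60) and bright objects (~180).
rng = np.random.default_rng(5)
img = np.where(rng.random((128, 128)) < 0.3,
               rng.normal(180, 15, (128, 128)), rng.normal(60, 15, (128, 128)))
print("evolved threshold:", evolve_threshold(img))
```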

  10. Image processing for flight crew enhanced situation awareness

    Science.gov (United States)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  11. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of
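
    The record describes cluster-level parallelism for hyperspectral cubes. The sketch below conveys only the basic data-parallel idea on a single machine (an assumption, not the authors' framework): split the cube into row blocks and compute a spectral-angle map against a reference spectrum in parallel worker processes.

```python
# Sketch of data-parallel hyperspectral processing (assumption; the paper's
# multi-cluster framework is far more elaborate): split the cube into row
# blocks and compute a spectral-angle map in parallel worker processes.
import numpy as np
from multiprocessing import Pool

def spectral_angle_block(args):
    block, reference = args                        # block: (rows, cols, bands)
    dot = np.tensordot(block, reference, axes=([2], [0]))
    norms = np.linalg.norm(block, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0))

def spectral_angle_map(cube, reference, workers=4):
    blocks = np.array_split(cube, workers, axis=0)
    with Pool(workers) as pool:
        results = pool.map(spectral_angle_block, [(b, reference) for b in blocks])
    return np.vstack(results)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    cube = rng.random((200, 180, 64))              # toy cube: 64 spectral bands
    reference = rng.random(64)                     # toy reference spectrum
    sam = spectral_angle_map(cube, reference)
    print("spectral angle map shape:", sam.shape)
```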

  12. Conducting Original, Hands-On Astronomical Research in the Classroom

    Science.gov (United States)

    Corneau, M. J.

    2009-12-01

    teachers to convey moderately complex computer science, optical, geographic, mathematical, informational and physical principles through hands-on telescope operations. In addition to the general studies aspects of classroom internet-based astronomy, Tzec Maun supports real science by enabling operators to point telescopes precisely and acquire extremely faint, magnitude 19+ CCD images. Thanks to the creative Team of Photometrica (photometrica.org), my teams now have the ability to process and analyze images online and produce results in short order. Normally, astronomical data analysis packages cost thousands of dollars for single-license operations. Free to my team members, Photometrica allows students to upload their data to a cloud computing server and read precise photometric and/or astrometric results. I’m indebted to Michael and Geir for their support. The efficacy of student-based research is well documented. The Council on Undergraduate Research defines student research as, "an inquiry or investigation conducted by an undergraduate that makes an original intellectual or creative contribution to the discipline." (http://serc.carleton.edu/introgeo/studentresearch/What. Teaching from Tzec Maun in the classroom is the most original teaching research I can imagine. I very much look forward to presenting this program to the convened body.

  13. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity on the part of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the image of the object. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with the user's needs. The digital microscope has eliminated the need for direct observation by the human eye, as with the traditional microscope.
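
    The listed utility operations (brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping) can be sketched in a few lines with Pillow; the actual software described in the paper is a custom application, so the code below is only an illustration.

```python
# Sketch of the listed utility operations (brightness, contrast, histogram
# equalization, scaling, cropping) using Pillow. The paper's software is a
# custom application; this is only an illustration.
from PIL import Image, ImageEnhance, ImageOps

def process_micrograph(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(1.2)   # +20 % brightness
    img = ImageEnhance.Contrast(img).enhance(1.5)     # stronger contrast
    img = ImageOps.equalize(img)                      # histogram equalization
    img = img.resize((img.width // 2, img.height // 2))       # scaling
    w, h = img.size
    img = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))  # central crop
    return img

# Usage (hypothetical file name):
# process_micrograph("specimen_2592x1944.png").save("specimen_processed.png")
```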

  14. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: the CCD220 detector. ESO PR Photo 22b/09: the OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these
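
    The release explains that adaptive optics derives real-time corrections from images taken by a very fast camera. As a toy illustration of one ingredient of that loop (an assumption for illustration only, not OCam's processing), the sketch below measures the tip-tilt error of a star image as a flux-weighted centroid offset.

```python
# Toy illustration of one ingredient of adaptive optics: estimating the
# tip-tilt (image motion) error from a fast camera frame as a flux-weighted
# centroid offset. A real AO system measures many local wavefront slopes
# and drives a deformable mirror hundreds of times per second.
import numpy as np

def centroid_offset(frame: np.ndarray) -> tuple[float, float]:
    """Flux-weighted centroid relative to the frame centre, in pixels."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    cy = (ys * frame).sum() / total
    cx = (xs * frame).sum() / total
    return cy - (frame.shape[0] - 1) / 2, cx - (frame.shape[1] - 1) / 2

# Simulated 240x240 frame with a star displaced by atmospheric tip-tilt.
rng = np.random.default_rng(7)
y0, x0 = 123.4, 116.8                             # true (displaced) position
ys, xs = np.indices((240, 240))
frame = np.exp(-((ys - y0) ** 2 + (xs - x0) ** 2) / (2 * 3.0 ** 2))
frame += rng.normal(0, 0.01, frame.shape).clip(min=0)

print("measured tip-tilt error (pixels):", centroid_offset(frame))
```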

  15. Diagnostic of Melanomas via Image Processing

    OpenAIRE

    Hochmuth, Olaf; Meffert, Beate

    1993-01-01

    Non-contact measuring of skin temperature is used in dermatology as a diagnostic method for malignant melanomas and other skin diseases. The method helps to examine pathological processes under the skin and is useful for deciding between minimal-surgical and non-surgical therapy. In a first step of the investigation it is necessary to determine the required resolution in temperature and space. Peer Reviewed

  16. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang

    2014-01-01

    Full Text Available This paper aims to provide a method to represent the virtual Arabidopsis plant at each growth stage. It includes simulating the shape and providing growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background with the chromatic coordinates. With the segmentation result, the outer boundary series are obtained by using a boundary tracking algorithm. The elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf. This is achieved by using a wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stages of the Arabidopsis plant with fewer data and provide a visual platform for plant growth research.
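
    The segmentation and area-measurement steps described above can be sketched as follows; the excess-green threshold and the calibration factor are illustrative assumptions, and the elliptic Fourier and wavelet steps of the paper are not reproduced.

```python
# Sketch of the plant-segmentation and area-measurement steps: convert RGB
# to chromatic coordinates, threshold an excess-green index, and convert the
# pixel count to area with a calibration factor. The exact threshold and the
# elliptic Fourier / wavelet steps of the paper are not reproduced here.
import numpy as np

def segment_plant(rgb: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Boolean mask of plant pixels from chromatic coordinates r, g, b."""
    total = rgb.astype(float).sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    excess_green = 2 * g - r - b                   # high for green vegetation
    return excess_green > threshold

def rosette_area(mask: np.ndarray, mm2_per_pixel: float) -> float:
    """Total plant area in mm^2 given an image-calibration factor."""
    return mask.sum() * mm2_per_pixel

# Toy image: a green disc ("plant") on a grey background.
ys, xs = np.indices((200, 200))
disc = (ys - 100) ** 2 + (xs - 100) ** 2 < 40 ** 2
img = np.full((200, 200, 3), 120, dtype=np.uint8)
img[disc] = (60, 160, 60)

mask = segment_plant(img)
print("plant pixels:", int(mask.sum()),
      "area (mm^2):", rosette_area(mask, mm2_per_pixel=0.01))  # hypothetical calibration
```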

  17. Digital image processing applied Rock Art tracing

    Directory of Open Access Journals (Sweden)

    Montero Ruiz, Ignacio

    1998-06-01

    Full Text Available Adequate graphic recording has been one of the main objectives of rock art research. Photography has increased its role as a documentary technique. Now, the digital image and its treatment allow new ways to observe the details of the figures and to develop a recording procedure that is as accurate as, or more accurate than, direct tracing. This technique also avoids deterioration of the rock paintings. The mathematical basis of this method is also presented.

    The correct documentation of rock art has been a constant concern of researchers. In the development of new recording techniques, both direct and indirect, photography has taken on an increasingly prominent role. The digital image and its processing open up new possibilities for observing the depicted figures and, consequently, allow a reading based on indirect tracings that is as reliable as, or more reliable than, direct observation. This system avoids the risks of deterioration caused by direct tracings. The mathematical bases underpinning the method are included.

  18. Fundamental and applied aspects of astronomical seeing

    International Nuclear Information System (INIS)

    Coulman, C.E.

    1985-01-01

    It is pointed out that despite recent advances in the use of spacecraft as observatory platforms, much astronomy is still conducted from the surface of the earth. The literature on astronomical seeing and observatory site selection is widely scattered throughout journals and conference reports concerned with various disciplines. The objective of this survey is to represent the state of the subject up to 1982. A description of the history and prospects of the subject is presented, and the optics of seeing are examined. The meteorology of seeing is discussed, taking into account aspects of micrometeorology and small-scale turbulence near the surface, the diurnal cycle in the planetary boundary layer, the temperature structure above the planetary boundary layer, and the effects of terrain. Attention is given to the calculation of system performance from microthermal data, optical methods for the measurement of seeing, and techniques for minimizing the image-degrading effects of the atmosphere. 279 references
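
    The abstract mentions calculating system performance from microthermal data. The standard relations linking a measured refractive-index structure-constant profile to the Fried parameter and the seeing-disk size (a textbook result, not quoted from this survey) are:

```latex
% Standard relations (not quoted from the survey) between the measured
% refractive-index structure constant profile C_n^2(h) and image quality:
% Fried parameter r_0 and seeing-disk FWHM at wavelength \lambda.
% (\zeta is the zenith angle of the observation.)
r_0 \;=\; \left[\, 0.423\, k^{2} \sec\zeta \int C_n^{2}(h)\, \mathrm{d}h \,\right]^{-3/5},
\qquad k = \frac{2\pi}{\lambda},
\qquad \mathrm{FWHM}_{\mathrm{seeing}} \;\approx\; 0.98\, \frac{\lambda}{r_0}.
```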

  19. Astronomical results from SHARC-II

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Moseley, S. Harvey; Dowell, C. Darren

    2004-03-01

    The Submillimeter High Angular Resolution Camera (SHARC-II) is a facility instrument for far-infrared (350 μm) imaging at the Caltech Submillimeter Observatory (CSO). With 384 pixels, SHARC-II uses the world's largest bolometer array for astronomical observations. SHARC-II is most efficiently utilized for observations of extended sources and for deep sky surveys. The low 1/f detector noise allows total power measurements without the need to observe an emission free 'off position'. This is possible because the sky emission can be distinguished from the celestial emission when the array scans over the sky at sufficient speed. Here we present a representative set of SHARC-II observations, which highlight the capabilities of the instrument. The observations show the submillimeter continuum emission from our own Galactic center, the nearby galaxy M51, and the gravitationally lensed high-z Cloverleaf galaxy H1413+1143.
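
    The total-power capability described above relies on separating sky emission from celestial emission during fast scanning. The toy sketch below illustrates only the principle of removing a per-sample common mode shared by all detectors; the real SHARC-II reduction pipeline is far more sophisticated.

```python
# Toy illustration of the principle described above: when all bolometers see
# (nearly) the same atmospheric signal at each instant while the source moves
# across the array during a fast scan, the common mode can be estimated and
# removed per time sample. Real SHARC-II data reduction is far more elaborate.
import numpy as np

rng = np.random.default_rng(8)
n_det, n_samp = 384, 2000

sky = 50.0 + 5.0 * np.cumsum(rng.normal(0, 0.05, n_samp))   # slow sky drift
source = np.zeros((n_det, n_samp))
source[100, 800:820] = 2.0                        # compact source in one pixel
data = sky[None, :] + source + rng.normal(0, 0.1, (n_det, n_samp))

common_mode = np.median(data, axis=0)             # per-sample sky estimate
cleaned = data - common_mode[None, :]

print("peak S/N before:", (data[100] - data[100].mean()).max() / data[100].std())
print("peak S/N after: ", cleaned[100].max() / cleaned[101].std())
```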

  20. Astronomical results from SHARC-II

    International Nuclear Information System (INIS)

    Staguhn, Johannes G.; Benford, Dominic J.; Moseley, S. Harvey; Dowell, C. Darren

    2004-01-01

    The Submillimeter High Angular Resolution Camera (SHARC-II) is a facility instrument for far-infrared (350 μm) imaging at the Caltech Submillimeter Observatory (CSO). With 384 pixels, SHARC-II uses the world's largest bolometer array for astronomical observations. SHARC-II is most efficiently utilized for observations of extended sources and for deep sky surveys. The low 1/f detector noise allows total power measurements without the need to observe an emission free 'off position'. This is possible because the sky emission can be distinguished from the celestial emission when the array scans over the sky at sufficient speed. Here we present a representative set of SHARC-II observations, which highlight the capabilities of the instrument. The observations show the submillimeter continuum emission from our own Galactic center, the nearby galaxy M51, and the gravitationally lensed high-z Cloverleaf galaxy H1413+1143