WorldWideScience

Sample records for mars color imager

  1. Curiosity's Mars Hand Lens Imager (MAHLI) Investigation

    Science.gov (United States)

    Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter

    2012-01-01

    The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts: a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.

  2. 77 FR 2935 - Mars, Inc.; Filing of Color Additive Petition

    Science.gov (United States)

    2012-01-20

    ... 73) Listing of Color Additives Exempt From Certification to provide for the safe use of spirulina.... FDA-2011-C-0878] Mars, Inc.; Filing of Color Additive Petition AGENCY: Food and Drug Administration... Mars, Inc., has filed a petition proposing that the color additive regulations be amended to provide...

  3. COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Dominique Lafon

    2011-05-01

    Full Text Available The goal of this article is to present specific capabilities and limitations of the use of color digital images in a characterization process. The whole process is investigated, from the acquisition of digital color images to the analysis of the information relevant to various applications in the field of material characterization. A digital color image can be considered as a matrix of pixels with values expressed in a vector space (commonly a 3-dimensional space) whose specificity, compared to grey-scale images, is to ensure a coding and a representation of the output image (visualisation, printing) that fits the human visual reality. In a characterization process, it is interesting to regard color image attributes as a set of visual aspect measurements on a material surface. Color measurement systems (spectrocolorimeters, colorimeters and radiometers) and cameras use the same type of light detectors: most of them use Charge Coupled Device sensors. The difference between the two types of color data acquisition systems is that color measurement systems provide global information about the observed surface (the average aspect of the surface): the color texture is not taken into account. Thus, it seems interesting to use imaging systems as measuring instruments for the quantitative characterization of the color texture.
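
    A minimal numpy sketch (illustrative only, not from the article) of the distinction drawn above: a colorimeter-style measurement reduces a surface to one average color vector, while an imaging system retains the per-pixel color texture. The texture summary used here (per-channel standard deviation) is an assumption for illustration.

```python
import numpy as np

def average_color(image):
    """Colorimeter-style measurement: one global RGB vector for the surface."""
    return image.reshape(-1, 3).mean(axis=0)

def color_texture_stats(image):
    """Imaging-system measurement: keep spatial variation of color,
    summarized here by the per-channel standard deviation."""
    pixels = image.reshape(-1, 3).astype(float)
    return pixels.std(axis=0)

# usage with a synthetic 3-channel image
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print("average color:", average_color(img))
print("color texture (per-channel std):", color_texture_stats(img))
```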

  4. Digital color imaging

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    This collective work identifies the latest developments in the field of the automatic processing and analysis of digital color images. For researchers and students, it represents a critical state of the art on the scientific issues raised by the various steps constituting the chain of color image processing. It covers a wide range of topics related to computational color imaging, including color filtering and segmentation, color texture characterization, color invariants for object recognition, color and motion analysis, as well as color image and video indexing and retrieval.

  5. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    Science.gov (United States)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.; hide

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  6. Detailed mapping of surface units on Mars with HRSC color data

    Science.gov (United States)

    Combe, J.-Ph.; Wendt, L.; McCord, T. B.; Neukum, G.

    2008-09-01

    Introduction: Making use of HRSC color data. Mapping outcrops of clays, sulfates and ferric oxides provides basic information for deriving the climatic, tectonic and volcanic evolution of Mars, especially the episodes related to the presence of liquid water. The challenge is to resolve the outcrops spatially and to distinguish these components from globally-driven deposits such as the iron oxide-rich bright red dust and the basaltic dark sands. The High Resolution Stereo Camera (HRSC) onboard Mars Express has five color filters in the visible and near infrared that are designed for visual interpretation and mapping of various surface units [1]. It also provides information on the topography at scales smaller than a pixel (roughness) thanks to the different observation geometry of each color channel. The HRSC dataset is the only one that combines global coverage, 200 m/pixel spatial resolution or better, and color filtering of light. The present abstract describes work in progress (to be submitted to Planetary and Space Science) that shows the potential and limitations of HRSC color data as visual support and as multispectral images. Various methods are described, from the simplest to more complex ones, in order to demonstrate how to make use of the spectra, because of the specific processing steps they require [2-4]. The objective is to broaden the popularity of HRSC color data, as they could be used more widely by the scientific community. Results prove that imaging spectrometry and HRSC color data complement each other for mapping outcrop types. Example regions of interest: HRSC is theoretically sensitive to materials with absorption features in the visible and near-infrared up to 1 μm. Therefore, oxide-rich red dust and basalts (pyroxenes) can be mapped, as well as very bright components such as water ice [5, 6]. Possible detection of other materials still has to be demonstrated. We first explore regions where unusual mineralogy appears clearly from spectral data. Hematite

  7. 'Endurance' Courtesy of Mars Express

    Science.gov (United States)

    2004-01-01

    NASA's Mars Exploration Rover Opportunity used its panoramic camera to capture this false-color image of the interior of 'Endurance Crater' on the rover's 188th martian day (Aug. 4, 2004). The image data were relayed to Earth by the European Space Agency's Mars Express orbiter. The image was generated from separate frames using the camera's 750-, 530- and 480-nanometer filters.

  8. The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity

    Science.gov (United States)

    Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.

    2009-08-01

    The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA

  9. Embedding Color Watermarks in Color Images

    Directory of Open Access Journals (Sweden)

    Wu Tung-Lin

    2003-01-01

    Full Text Available Robust watermarking with oblivious detection is essential to practical copyright protection of digital images. Effective exploitation of the characteristics of human visual perception of color stimuli helps to develop a watermarking scheme that fulfills this requirement. In this paper, an oblivious watermarking scheme that embeds color watermarks in color images is proposed. Through color gamut analysis and quantizer design, color watermarks are embedded by modifying the quantization indices of color pixels without resulting in perceivable distortion. Only a small amount of information, including the specification of the color gamut, quantizer step size, and color tables, is required to extract the watermark. Experimental results show that the proposed watermarking scheme is computationally simple and quite robust in the face of various attacks such as cropping, low-pass filtering, white-noise addition, scaling, and JPEG compression with high compression ratios.
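
    An illustrative sketch (not the authors' exact scheme) of the core idea of embedding by modifying quantization indices: each sample is quantized with a step size, the watermark bit selects between two interleaved quantizer lattices, and extraction needs only the step size. The step value is an assumption.

```python
import numpy as np

STEP = 16  # quantizer step size; an assumed illustrative value

def embed_bit(value, bit, step=STEP):
    """Quantization index modulation: shift the quantization lattice by
    step/2 when bit == 1, so the bit is recoverable without the original."""
    offset = (step / 2) * bit
    return step * np.round((value - offset) / step) + offset

def extract_bit(value, step=STEP):
    """Decide which of the two lattices the received value is closer to."""
    d0 = np.abs(value - embed_bit(value, 0, step))
    d1 = np.abs(value - embed_bit(value, 1, step))
    return int(d1 < d0)

# usage: embed one bit into a single color-channel sample
sample = 137.0
marked = embed_bit(sample, 1)
print(marked, extract_bit(marked))  # prints: 136.0 1
```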

  10. 'Mars-shine'

    Science.gov (United States)

    2005-01-01

    [figure removed for brevity, see original site] 'Mars-shine' Composite NASA's Mars Exploration Rover Spirit continues to take advantage of favorable solar power conditions to conduct occasional nighttime astronomical observations from the summit region of 'Husband Hill.' Spirit has been observing the martian moons Phobos and Deimos to learn more about their orbits and surface properties. This has included observing eclipses. On Earth, a solar eclipse occurs when the Moon's orbit takes it exactly between the Sun and Earth, casting parts of Earth into shadow. A lunar eclipse occurs when the Earth is exactly between the Sun and the Moon, casting the Moon into shadow and often giving it a ghostly orange-reddish color. This color is created by sunlight reflected through Earth's atmosphere into the shadowed region. The primary difference between terrestrial and martian eclipses is that Mars' moons are too small to completely block the Sun from view during solar eclipses. Recently, Spirit observed a 'lunar' eclipse on Mars. Phobos, the larger of the two martian moons, was photographed while slipping into the shadow of Mars. Jim Bell, the astronomer in charge of the rover's panoramic camera (Pancam), suggested calling it a 'Phobal' eclipse rather than a lunar eclipse as a way of identifying which of the dozens of moons in our solar system was being cast into shadow. With the help of the Jet Propulsion Laboratory's navigation team, the Pancam team planned instructions to Spirit for acquiring the views shown here of Phobos as it entered into a lunar eclipse on the evening of the rover's 639th martian day, or sol (Oct. 20, 2005) on Mars. This image is a time-lapse composite of eight Pancam images of Phobos moving across the martian sky. The entire eclipse lasted more than 26 minutes, but Spirit was able to observe only in the first 15 minutes. During the time closest to the shadow crossing, Spirit's cameras were programmed to take images every 10 seconds. In the first three

  11. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality
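
    For reference, a minimal sketch of the grayscale universal image quality index (Wang-Bovik) that the abstract builds on, computed here globally rather than over sliding windows as a simplification; extending it per channel in a decorrelated color space is the paper's contribution and is not shown.

```python
import numpy as np

def universal_quality_index(x, y):
    """Universal image quality index between a reference image x and a
    processed image y, computed over the whole image for simplicity
    (the original index averages it over sliding windows)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

# usage: identical images give Q = 1, distortions lower it
ref = np.random.rand(64, 64)
noisy = ref + 0.1 * np.random.randn(64, 64)
print(universal_quality_index(ref, ref), universal_quality_index(ref, noisy))
```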

  12. A Study of Color Transformation on Website Images for the Color Blind

    OpenAIRE

    Siew-Li Ching; Maziani Sabudin

    2010-01-01

    In this paper, we study a color transformation method for website images for the color blind. The most common category of color blindness is red-green color blindness, in which the affected colors are perceived as a beige color. By transforming the colors of the images, color-blind users can improve their color visibility and have a better view when browsing websites. To transform colors on website images, we study two algorithms which are the conversion techniques from RGB colo...

  13. RGB Color Cube-Based Histogram Specification for Hue-Preserving Color Image Enhancement

    Directory of Open Access Journals (Sweden)

    Kohei Inoue

    2017-07-01

    Full Text Available A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors usually have three attributes, including hue, saturation and intensity, rather than only the single attribute of grayscale values, the naive application of grayscale methods to color images often produces unsatisfactory results. Conventional hue-preserving color image enhancement methods utilize histogram equalization (HE) for enhancing the contrast. However, they cannot always enhance the saturation simultaneously. In this paper, we propose a histogram specification (HS) method for enhancing the saturation in hue-preserving color image enhancement. The proposed method computes the target histogram for HS on the basis of the geometry of the RGB (red, green and blue) color space, whose shape is a cube with a unit side length. Therefore, the proposed method includes no parameters to be set by users. Experimental results show that the proposed method achieves higher color saturation than recent parameter-free methods for hue-preserving color image enhancement. As a result, the proposed method can be used as an alternative to HE in hue-preserving color image enhancement.
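
    A minimal sketch (an assumption for illustration, not the paper's RGB-cube histogram specification) of the conventional hue-preserving baseline mentioned above: equalize the intensity histogram, then rescale each pixel's RGB triplet by the ratio of new to old intensity so hue is unchanged.

```python
import numpy as np

def hue_preserving_he(rgb):
    """Conventional hue-preserving contrast enhancement baseline:
    histogram-equalize the intensity I = (R+G+B)/3, then scale each
    RGB triplet by I_new / I_old so hue is preserved."""
    rgb = rgb.astype(float)
    intensity = rgb.mean(axis=2)
    # histogram equalization of the intensity channel
    hist, bins = np.histogram(intensity.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    eq = np.interp(intensity.ravel(), bins[:-1], cdf * 255).reshape(intensity.shape)
    ratio = eq / np.maximum(intensity, 1e-6)
    out = rgb * ratio[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

# usage on a random 8-bit color image
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
enhanced = hue_preserving_he(img)
```

    Note that the final clipping at 255 is exactly where this naive rescaling can leave the RGB cube and distort saturation, which is the gamut problem that motivates a cube-geometry-based target histogram.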

  14. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    Science.gov (United States)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: automatic processing by a computer; retaining continuity in color space; not lowering visibility for people with normal color vision; and not lowering the visibility of images that do not originally contain difficult-to-distinguish color sets. We conducted a psychological experiment and found that the visibility of converted images was improved for 60% of the 40 test images, and we confirmed that the main criterion of continuity in color space was kept.

  15. Mars Pathfinder and Mars Global Surveyor Outreach Compilation

    Science.gov (United States)

    1999-09-01

    This videotape is a compilation of the best NASA JPL (Jet Propulsion Laboratory) videos of the Mars Pathfinder and Mars Global Surveyor missions. The missions are described using animation and narration as well as some actual footage of the entire sequence of mission events. Included within these animations are the spacecraft orbit insertion; descent to the Mars surface; deployment of the airbags and instruments; and exploration by Sojourner, the Mars rover. JPL activities at spacecraft control during significant mission events are also included at the end. The spacecraft cameras pan the surrounding Mars terrain and film Sojourner traversing the surface and inspecting rocks. A single, brief, processed image of the Cydonia region (Mars face) at an oblique angle from the Mars Global Surveyor is presented. The Mars Pathfinder mission, its instruments, the landing and deployment process, the Mars approach, spacecraft orbit insertion, and rover operations are all described using computer animation. Actual color footage of Sojourner as well as a 360 deg pan of the Mars terrain surrounding the spacecraft is provided. Lower quality black and white photography depicting Sojourner traversing the Mars surface and inspecting Martian rocks is also included.

  16. Recent progress in color image intensifier

    International Nuclear Information System (INIS)

    Nittoh, K.

    2010-01-01

    A multi-color-scintillator-based high-sensitivity, wide-dynamic-range and long-life X-ray image intensifier (Ultimage™) has been developed. A europium-activated Y₂O₂S scintillator, emitting red, green and blue wavelength photons at different intensities, is utilized as the output fluorescent screen of the intensifier. By combining this image intensifier with a suitably tuned, highly sensitive color CCD camera, the sensitivity of the red color component is six times higher than that of a conventional image intensifier. Simultaneous emission of a moderate green color and a weak blue color covers different sensitivity regions. This widens the dynamic range by nearly two orders of magnitude. With this image intensifier, it is possible to image complex objects containing widely different X-ray transmissions, from paper, water or plastic to heavy metals, at the same time. This color-scintillator-based image intensifier is widely used in X-ray inspections in various fields. (author)

  17. Enriching text with images and colored light

    Science.gov (United States)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. For this, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
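
    A simplified sketch (an assumption, not the authors' pipeline) of the histogram-based variant described above: pool the pixels of the images retrieved for a term, build a coarse RGB histogram, and take the most populated bin as the term's representative color.

```python
import numpy as np

def representative_color(images, bins=8):
    """Histogram-based representative color for a term: pool pixels from
    all retrieved images, quantize RGB into a coarse grid, and return the
    center of the most populated cell."""
    pixels = np.concatenate([im.reshape(-1, 3) for im in images]).astype(float)
    idx = np.clip((pixels // (256 // bins)).astype(int), 0, bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    counts = np.bincount(flat, minlength=bins**3)
    best = counts.argmax()
    r, g, b = best // (bins * bins), (best // bins) % bins, best % bins
    cell = 256 // bins
    return np.array([r, g, b]) * cell + cell // 2  # bin-center RGB value

# usage with two synthetic "retrieved images" for a term
imgs = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(2)]
print(representative_color(imgs))
```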

  18. Color image guided depth image super resolution using fusion filter

    Science.gov (United States)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, while color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide image is an efficient way to get an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses an HR color image as a guide image and an LR depth image as input. We use a fusion filter of a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method can provide better quality in HR depth images both numerically and visually.
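
    A compact sketch of a joint (cross) bilateral filter of the kind combined in the paper's fusion filter: the range weights come from the HR color guide while the values being smoothed come from the (upsampled) depth map. The parameter values and the grayscale guide are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Cross/joint bilateral filter: smooth `depth` with spatial weights and
    range weights computed from the (grayscale) color `guide` image, so that
    depth edges follow color edges."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    pad_d = np.pad(depth.astype(float), radius, mode="edge")
    pad_g = np.pad(guide.astype(float), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            d_win = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_win = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(g_win - float(guide[i, j]))**2 / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * d_win).sum() / weights.sum()
    return out

# usage: bilinearly upsample a low-res depth map, then refine it with the guide
```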

  19. Colored Chaos

    Science.gov (United States)

    2004-01-01

    [figure removed for brevity, see original site] Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D
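
    A short sketch of the compositing procedure the caption describes (band choice and the percentile stretch are illustrative assumptions): three single-filter grayscale images are each contrast-stretched and then stacked as the red, green and blue planes of one color image.

```python
import numpy as np

def stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch of one grayscale filter image to 0-255."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

def three_filter_composite(band_r, band_g, band_b):
    """Assign three contrast-enhanced filter images to the R, G and B planes.
    The result is an enhanced (not true-color) composite, as noted above."""
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

# usage with synthetic filter images standing in for three THEMIS VIS bands
bands = [np.random.rand(256, 256) for _ in range(3)]
color = three_filter_composite(*bands)
```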

  20. Magnetic angioresonance of the carotid artery: correlation with color Doppler ultrasound

    International Nuclear Information System (INIS)

    Cotilla, J.; Miralles, M.; Cairols, M.C.; Dolz, J.L.; Vilanova, J.C.; Capdevila, A.

    1998-01-01

    To determine the value of magnetic angioresonance (MAR) in grading carotid stenosis, comparing it with color Doppler and intraarterial digital subtraction angiography (IADSA). A comparative study using color Doppler and MAR was carried out in 84 patients with carotid lesions. Fifty-two of the patients underwent angiographic study as well. The comparison of MAR versus arteriography in discriminating stenosis of more than 70%, expressed in terms of sensitivity, specificity, overall precision and the kappa concordance index, gave values of 87.2%, 90.8%, 89.4% and 0.78, respectively. When MAR was compared with color Doppler, the results were 86.8%, 85.9%, 86.3% and 0.72, respectively. The results of the comparison between color Doppler and arteriography were 82.2%, 86.2%, 84.6% and 0.68, respectively. The better correlation of MAR, as compared with angiography and color Doppler, with the grade of carotid stenosis indicates the high degree of reliability of this imaging technique. (Author) 29 refs

  1. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    Science.gov (United States)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. Complicated computation and image processing are not required by the proposed method, which can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed in previous studies are composed of few p/d-safe colors, so the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette could be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study describes the results of the proposed color reduction in photographs that include typical p/d-confusion colors, which can be replaced. After the reduction process is completed, color-defective observers can distinguish these formerly confusing colors.
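
    A minimal sketch of the simple color mapping the abstract refers to: each pixel is replaced by the nearest color from a fixed p/d-safe palette. The palette values and the Euclidean RGB distance are assumptions for illustration, not the published 20-color palette.

```python
import numpy as np

# hypothetical stand-in palette; the actual p/d-safe palette has 20 specific colors
PALETTE = np.array([
    [0, 0, 0], [255, 255, 255], [230, 159, 0], [86, 180, 233],
    [0, 158, 115], [240, 228, 66], [0, 114, 178], [213, 94, 0],
], dtype=float)

def reduce_to_palette(image, palette=PALETTE):
    """Replace every pixel by its nearest palette color (Euclidean RGB
    distance; a perceptual color difference could be substituted)."""
    pixels = image.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return palette[nearest].reshape(image.shape).astype(np.uint8)

# usage
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
reduced = reduce_to_palette(img)
```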

  2. Color enhancement in multispectral image of human skin

    Science.gov (United States)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by multispectral image capture. Since color enhancement in medical color images is effective for distinguishing lesions from normal tissue, we apply a new technique for color enhancement using multispectral images to enhance the features contained in a certain spectral band, without changing the average color distribution of the original image. In this method, to keep the average color distribution, the KL transform is applied to the spectral data, and only high-order KL coefficients are amplified in the enhancement. Multispectral images of the human skin of a bruised arm were captured by a 16-band multispectral camera, and the proposed color enhancement was applied. The resultant images were compared with color images reproduced assuming the CIE D65 illuminant (obtained by a natural color reproduction technique). As a result, the proposed technique successfully visualizes unclear bruised lesions, which are almost invisible in natural color images. The proposed technique will provide a support tool for diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsores, and so on.
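
    A rough sketch of the idea (illustrative only; the number of preserved components and the gain are assumptions): apply a KL/PCA transform to the per-pixel spectra, amplify only the high-order components, and invert the transform so the average color carried by the leading components stays unchanged.

```python
import numpy as np

def kl_enhance(spectra, keep=3, gain=4.0):
    """Amplify high-order KL (PCA) components of per-pixel spectra.
    `spectra` is (num_pixels, num_bands); the first `keep` components,
    which carry the average color appearance, are left untouched."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # KL basis from the covariance of the spectral samples
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # descending variance
    basis = eigvecs[:, order]
    coeffs = centered @ basis
    coeffs[:, keep:] *= gain                   # boost only high-order terms
    return coeffs @ basis.T + mean

# usage: 16-band spectra for a small image patch
spectra = np.random.rand(1000, 16)
enhanced = kl_enhance(spectra)
```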

  3. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  4. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
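
    A sketch of the automatic-labeling signal described above (a plausible reading, not the authors' exact code): the joint entropy of the pixel-value co-occurrence between an uncompressed image and its compressed rendering grows as the two diverge, so it can stand in for a hand-assigned quality label when building the training set.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """Joint entropy (in bits) of the gray-level co-occurrence between two
    aligned images; in this toy it rises as the compressed copy diverges
    from the original."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# usage: compare an 8-bit image with a crude blocky stand-in for JPEG artifacts
orig = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
block_means = orig.reshape(16, 8, 16, 8).mean(axis=(1, 3))   # 8x8-pixel block means
compressed = np.kron(block_means, np.ones((8, 8)))
print(joint_entropy(orig, orig), joint_entropy(orig, compressed))
```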

  5. Optimizing color reproduction of natural images

    NARCIS (Netherlands)

    Yendrikhovskij, S.N.; Blommaert, F.J.J.; Ridder, de H.

    1998-01-01

    The paper elaborates on understanding, measuring and optimizing perceived color quality of natural images. We introduce a model for optimal color reproduction of natural scenes which is based on the assumption that color quality of natural images is constrained by perceived naturalness and

  6. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
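
    A tiny arithmetic check of the IFOV figures quoted above, using IFOV = pixel pitch / focal length; the 7.4 μm pitch is an assumption carried over from the MAHLI detector described earlier in this list, not stated in this record.

```python
# IFOV (rad) = pixel pitch / focal length
pitch_m = 7.4e-6  # assumed pixel pitch
for name, focal_mm in [("Mastcam-34", 34.0), ("Mastcam-100", 100.0)]:
    ifov_urad = pitch_m / (focal_mm * 1e-3) * 1e6
    print(f"{name}: {ifov_urad:.0f} microradians")  # ~218 and ~74, matching the text
```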

  7. Pseudo color ghost coding imaging with pseudo thermal light

    Science.gov (United States)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Compared with conventional pseudo color imaging, where there are no nondegenerate-wavelength spatial correlations yielding extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme compared to conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source and spatial filter.

  8. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so three composite-technique-based color image compression methods are implemented to achieve images with high compression, no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.
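
    For reference, a small sketch (not the paper's code) of how the compression figures of merit named above are commonly computed from an original image, its reconstruction, and the encoded size; the 8-bit assumption is mine.

```python
import numpy as np

def compression_metrics(original, reconstructed, compressed_bytes):
    """Common compression figures of merit: compression ratio (CR),
    bits per pixel (bpp) and PSNR of the reconstruction (dB)."""
    n_pixels = original.shape[0] * original.shape[1]
    original_bits = original.size * 8          # 8-bit samples assumed
    compressed_bits = compressed_bytes * 8
    cr = original_bits / compressed_bits
    bpp = compressed_bits / n_pixels
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
    return cr, bpp, psnr
```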

  9. Martian soil stratigraphy and rock coatings observed in color-enhanced Viking Lander images

    Science.gov (United States)

    Strickland, E. L., III

    1979-01-01

    Subtle color variations of martian surface materials were enhanced in eight Viking Lander (VL) color images. Well-defined soil units recognized at each site (six at VL-1 and four at VL-2), are identified on the basis of color, texture, morphology, and contact relations. The soil units at the Viking 2 site form a well-defined stratigraphic sequence, whereas the sequence at the Viking 1 site is only partially defined. The same relative soil colors occur at the two sites, suggesting that similar soil units are widespread on Mars. Several types of rock surface materials can be recognized at the two sites; dark, relatively 'blue' rock surfaces are probably minimally weathered igneous rock, whereas bright rock surfaces, with a green/(blue + red) ratio higher than that of any other surface material, are interpreted as a weathering product formed in situ on the rock. These rock surface types are common at both sites. Soil adhering to rocks is common at VL-2, but rare at VL-1. The mechanism that produces the weathering coating on rocks probably operates planet-wide.

  10. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-01-01

    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation, quality, and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used.
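
    A minimal OpenCV sketch of running GrabCut on different 3-channel color representations, in the spirit of the comparison above; rectangle initialization is used here instead of the paper's automatic Orchard-Bouman clustering initialization, and the CMY conversion as 255 minus RGB is an assumption.

```python
import cv2
import numpy as np

def grabcut_in_space(img_bgr, rect, space="HSV"):
    """Run GrabCut on a chosen 3-channel color representation and return a
    binary foreground mask (1 = foreground)."""
    conversions = {
        "RGB": cv2.COLOR_BGR2RGB,
        "HSV": cv2.COLOR_BGR2HSV,
        "XYZ": cv2.COLOR_BGR2XYZ,
        "YUV": cv2.COLOR_BGR2YUV,
    }
    data = cv2.cvtColor(img_bgr, conversions[space]) if space in conversions else 255 - img_bgr  # "CMY"
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(data, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

# usage: segment with an HSV representation and a bounding rectangle
# img = cv2.imread("photo.png"); fg = grabcut_in_space(img, (10, 10, 200, 200), "HSV")
```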

  11. Guided color consistency optimization for image mosaicking

    Science.gov (United States)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

    This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among them under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color difference between images is large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First of all, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose the histogram extreme point matching algorithm, which is robust to image geometrical misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by searching for an image subset to act as the reference, whose color characteristics are transferred to the others via the paths of a graph analysis. Thus, the final results of the global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach can achieve as good or even better results compared with the state-of-the-art approaches.

  12. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing and depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems concerning hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values contained around the pixel to be classified, with some mathematical equations. The spatial constraint allows taking into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The results show better performance when the segmented images are compared with the Watershed and Region Growing algorithms, and provide effective segmentation for spectral images and medical images.

  13. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  14. Stamp Detection in Color Document Images

    DEFF Research Database (Denmark)

    Micenkova, Barbora; van Beusekom, Joost

    2011-01-01

    , moreover, it can be imprinted with a variable quality and rotation. Previous methods were restricted to detection of stamps of particular shapes or colors. The method presented in the paper includes segmentation of the image by color clustering and subsequent classification of candidate solutions...... by geometrical and color-related features. The approach allows for differentiation of stamps from other color objects in the document such as logos or texts. For the purpose of evaluation, a data set of 400 document images has been collected, annotated and made public. With the proposed method, recall of 83...

  15. Visual wetness perception based on image color statistics.

    Science.gov (United States)

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
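
    A rough sketch of the kind of manipulation described above (the HSV route, gamma value and saturation gain are assumptions, not the authors' exact wetness enhancing transformation): boost chromatic saturation, apply a darkening tone curve to luminance, and measure hue entropy as a simple color statistic.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def wetness_enhance(rgb, sat_gain=1.5, gamma=1.8):
    """Illustrative wet-look transformation: raise chromatic saturation and
    darken the luminance tone curve (gamma > 1 on the value channel)."""
    hsv = rgb_to_hsv(rgb.astype(float) / 255.0)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 1)   # saturation boost
    hsv[..., 2] = hsv[..., 2] ** gamma                    # darker tones
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)

def hue_entropy(rgb, bins=36):
    """Entropy (bits) of the hue histogram: many distinct hues -> high entropy."""
    hue = rgb_to_hsv(rgb.astype(float) / 255.0)[..., 0].ravel()
    p, _ = np.histogram(hue, bins=bins, range=(0, 1))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# usage
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
wet = wetness_enhance(img)
print("hue entropy:", hue_entropy(img))
```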

  16. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD, a generalization of K-means clustering to QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  17. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  18. Spatial imaging in color and HDR: prometheus unchained

    Science.gov (United States)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  19. Reconnaissance Imaging Spectrometer for Mars CRISM Data Analysis

    Science.gov (United States)

    Frink, K.; Hayden, D.; Lecompte, D.

    2009-05-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), carried aboard the Mars Reconnaissance Orbiter (MRO), is the first visible-infrared spectrometer to fly on a NASA Mars mission. CRISM scientists are using the instrument to look for the residue of minerals that form in the presence of water: the 'fingerprints' left by evaporated hot springs, thermal vents, lakes or ponds. With unprecedented clarity, CRISM is mapping regions on the Martian surface at scales as small as 60 feet (about 18 meters) across, when the spacecraft is 186 miles (300 kilometers) above the planet. CRISM reads 544 'colors' in reflected sunlight to detect certain minerals on the surface, including signature traces of past water. CRISM alone will generate more than 10 terabytes of data, enough to fill more than 15,000 compact discs. Given the quantity of data being returned by MRO-CRISM, this project partners with Johns Hopkins University (JHU) Applied Physics Laboratory (APL) scientists of the CRISM team to assist in the data analysis process. The CRISM operations team has prototyped and will provide the necessary software analysis tools. In addition, the CRISM operations team will provide reduced-data-volume representations of the data as PNG files, accessible via a web interface without recourse to specialized user tools. The web interface allows me to recommend repeating certain of the CRISM observations as survey results indicate, and to enter notes on the features present in the images. After analysis of a small percentage of CRISM observations, APL scientists concluded that their efforts would be greatly facilitated by adding a preliminary survey to evaluate the overall characteristics and quality of the CRISM data. The first look should increase the efficiency and speed of their data analysis efforts. This project provides first-look assessments of the data quality while noting features of interest likely to need further study or additional CRISM observations. The

  20. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of the high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on the global color information to roughly estimate the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative effect caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, as well as spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.

  1. Low contrast detectability for color patterns variation of display images

    International Nuclear Information System (INIS)

    Ogura, Akio

    1998-01-01

    In recent years, radionuclide images have been acquired in digital form and displayed with false colors for signal intensity. These color scales for signal intensity have various patterns. In this study, low contrast detectability was compared between gray-scale coding and three color scales: the hot color scale, the prism color scale and the stripe color scale. SPECT images of a brain phantom were displayed using the four color patterns. These printed images and display images were evaluated with ROC analysis. Display images showed higher detectability than printed images. The hot scale and gray scale images gave a better ROC Az than prism scale images because the prism scale images showed a higher false positive rate. (author)
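
    A small sketch (illustrative, not the study's display software) of applying different intensity-to-color scales to the same single-channel image; matplotlib's built-in 'gray', 'hot' and 'prism' colormaps stand in for the scales compared above.

```python
import numpy as np
import matplotlib.cm as cm

def apply_color_scale(intensity, cmap_name):
    """Map a normalized single-channel image to 8-bit RGB with a named colormap."""
    norm = (intensity - intensity.min()) / np.ptp(intensity)
    return (cm.get_cmap(cmap_name)(norm)[..., :3] * 255).astype(np.uint8)

# usage: render one synthetic SPECT-like slice with the three scales
slice_ = np.random.rand(64, 64)
renders = {name: apply_color_scale(slice_, name) for name in ("gray", "hot", "prism")}
```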

  2. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Science.gov (United States)

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  3. Color correction of projected image on color-screen for mobile beam-projector

    Science.gov (United States)

    Son, Chang-Hwan; Sung, Soo-Jin; Ha, Yeong-Ho

    2008-01-01

    With the current trend of digital convergence in mobile phones, mobile manufacturers are researching how to develop a mobile beam-projector to cope with the limitations of a small screen size and to offer a better viewing experience while watching movies or satellite broadcasting. However, mobile beam-projectors may project an image onto arbitrary surfaces, such as a colored wall or paper, not only onto the white screen typically used in an office environment. Thus, a color correction method for the projected image is proposed to achieve good image quality irrespective of the surface colors. Initially, the luminance values of the original image, transformed into the YCbCr space, are changed to compensate for the spatially nonuniform luminance distribution of the arbitrary surface, depending on the pixel values of the surface image captured by the mobile camera. Next, the chromaticity values for the surface image and a white-screen image are calculated using the ratio of each RGB value to their sum. Their chromaticity ratios are then multiplied with the converted original image through an inverse YCbCr matrix, reducing the modulation of the projected image's appearance caused by spatially varying reflectance on the surface. By projecting the corrected original image onto a textured or single-color surface, the image quality of the projected image can be improved to approach that of an image projected on a white screen.
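
    A simplified sketch of the compensation idea described above (a plausible reading with assumed normalization, not the authors' exact pipeline): attenuate luminance where the captured surface is bright, and divide out the surface's chromaticity relative to a white-screen capture so the surface tint is pre-compensated.

```python
import numpy as np

def compensate_for_surface(original_rgb, surface_rgb, white_rgb):
    """Pre-correct an image for projection on a colored/non-uniform surface.
    Luminance is scaled by the inverse of the surface's relative brightness,
    and each channel is divided by the surface chromaticity relative to a
    white-screen capture (weights and normalization are illustrative)."""
    orig = original_rgb.astype(float) / 255.0
    surf = surface_rgb.astype(float) / 255.0
    white = white_rgb.astype(float) / 255.0

    # spatial luminance compensation (Rec.601 luma weights)
    luma_s = surf @ np.array([0.299, 0.587, 0.114])
    gain = luma_s.max() / np.maximum(luma_s, 1e-3)
    out = orig * gain[..., None]

    # chromaticity compensation: divide by per-channel surface/white ratio
    chroma_surf = surf / np.maximum(surf.sum(axis=2, keepdims=True), 1e-3)
    chroma_white = white / np.maximum(white.sum(axis=2, keepdims=True), 1e-3)
    out = out * chroma_white / np.maximum(chroma_surf, 1e-3)
    return np.clip(out * 255, 0, 255).astype(np.uint8)
```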

  4. Color Image Evaluation for Small Space Based on FA and GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2014-01-01

    Full Text Available Because color image is difficult to quantify, this paper proposes an evaluation method of color image for small spaces based on factor analysis (FA) and gene expression programming (GEP) and constructs a correlation model between color image factors and the comprehensive color image. Basic color samples of small spaces and their color images are evaluated by the semantic differential (SD) method; color image factors are selected via dimension reduction in FA and a factor score function is established; the entropy weight method is then used to determine the weight of each factor, and finally the comprehensive color image score is calculated. The best-fitting function between the color image factors and the comprehensive color image is obtained by the GEP algorithm, which can predict users' color image values. A color image evaluation system for small spaces is developed based on this model. The color evaluation of a control room on an AC frequency conversion rig is taken as an example, verifying the effectiveness of the proposed method. The method can also assist designers in other color designs and provides a fast evaluation tool for testing users' color image.
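
    The entropy weight step mentioned above can be sketched as follows; the rating matrix, sample values, and function name are hypothetical, and the FA and GEP stages are not reproduced here.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weight method: `scores` is an (n_samples, n_factors) array of
    positive ratings; returns one weight per factor, summing to 1."""
    p = scores / scores.sum(axis=0, keepdims=True)   # column-wise proportions
    n = scores.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(p * np.log(p), axis=0) / np.log(n)   # entropy per factor
    d = 1.0 - e                                              # degree of divergence
    return d / d.sum()

# Hypothetical SD-method ratings: 5 color samples rated on 3 image factors.
ratings = np.array([[4, 5, 3], [2, 4, 4], [5, 3, 2], [3, 4, 5], [4, 2, 3]], float)
print(entropy_weights(ratings))   # weights used to combine the factor scores
```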

  5. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  6. Color standardization and optimization in Whole Slide Imaging

    Directory of Open Access Journals (Sweden)

    Yagi Yukako

    2011-03-01

    Full Text Available Introduction Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. Method We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (looking like a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. Discussion As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.

  7. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Directory of Open Access Journals (Sweden)

    Jakob Nikolas Kather

    Full Text Available Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
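
    The core pipeline described above (stain separation by color deconvolution, followed by a new bivariate color map for digital re-staining) can be sketched with scikit-image's built-in H-DAB deconvolution. The cyan/magenta mapping below is an arbitrary illustrative choice, not the optimized color maps derived in the paper, and the PCA-based unsupervised color-map extraction is omitted.

```python
import numpy as np
from skimage import data
from skimage.color import rgb2hed
from skimage.exposure import rescale_intensity

# Separate hematoxylin and DAB by color deconvolution on a sample IHC image.
ihc = data.immunohistochemistry()
hed = rgb2hed(ihc)
h = rescale_intensity(hed[..., 0], out_range=(0.0, 1.0))   # hematoxylin channel
d = rescale_intensity(hed[..., 2], out_range=(0.0, 1.0))   # DAB channel

# Digitally "re-stain" with a higher-contrast bivariate map: hematoxylin drives
# a green/cyan axis and DAB a red/magenta axis (illustrative choice only).
restained = np.clip(np.stack([d, h, 0.5 * (h + d)], axis=-1), 0.0, 1.0)
```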

  8. Color doppler imaging of subclavian steal phenomenon

    International Nuclear Information System (INIS)

    Cho, Nari Ya; Chung, Tae Sub; Kim, Jai Keun

    1997-01-01

    To evaluate the characteristic color Doppler imaging findings of vertebral artery flow in the subclavian steal phenomenon. The study group consisted of eight patients with reversed vertebral artery flow proved by color Doppler imaging. We classified this flow into two groups: (1) complete reversal; (2) partial reversal, as shown by the Doppler velocity waveform. Vertebral angiography was performed in six of eight patients; color Doppler imaging and angiographic findings were compared. On color Doppler imaging, all eight cases with reversed vertebral artery flow showed no signal at the proximal subclavian or brachiocephalic artery. We confirmed shunting in six cases by performing angiography, from the contralateral vertebral and basilar artery to the ipsilateral vertebral artery. On the Doppler spectrum, six cases showed complete reversal and two partial reversal. On angiography, one partial reversal case showed complete occlusion of the subclavian artery with abundant collateral circulation through muscular branches of the vertebral artery. On color Doppler imaging, reversed vertebral artery flow suggests the subclavian steal phenomenon; in particular, a partial reversal waveform may reflect collateral circulation.

  9. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human centered perspective three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  10. Thermophysical Properties of Mars' North Polar Layered Deposits and Related Materials from Mars Odyssey THEMIS

    Science.gov (United States)

    Vasavada, A. R.; Richardson, M. I.; Byrne, S.; Ivanov, A. B.; Christensen, P. R.

    2003-01-01

    The presence of a thick sequence of horizontal layers of ice-rich material at Mars' north pole, dissected by troughs and eroding at its margins, is undoubtedly telling us something about the evolution of Mars' climate [1,2]; we just don't know what yet. The North Polar Layered Deposits (NPLD) most likely formed as astronomically driven climate variations led to the deposition of conformable, areally extensive layers of ice and dust over the polar region. More recently, the balance seems to have fundamentally shifted to net erosion, as evidenced by the many troughs within the NPLD and the steep, arcuate scarps present near its margins, both of which expose layering. We defined a number of Regions of Interest (ROIs) for THEMIS to target as part of the Mars Odyssey Participating Scientist program. We use these THEMIS data in order to understand the morphology and color/thermal properties of the NPLD and related materials over relevant (i.e., m to km) spatial scales. We have assembled color mosaics of our ROIs in order to map the distribution of ices, the different layered units, dark material, and underlying basement. The color information from THEMIS is crucial for distinguishing these different units, which are less distinct in Mars Orbiter Camera images. We wish to understand the nature of the marginal scarps and their relationship to the dark material. Our next, more ambitious goal is to derive the thermophysical properties of the different geologic materials using THEMIS and Mars Global Surveyor Thermal Emission Spectrometer (TES) data.

  11. Naturalness and image quality : chroma and hue variation in color images of natural scenes

    NARCIS (Netherlands)

    Ridder, de H.; Blommaert, F.J.J.; Fedorovskaya, E.A.; Rogowitz, B.E.; Allebach, J.P.

    1995-01-01

    The relation between perceptual image quality and naturalness was investigated by varying the colorfulness and hue of color images of natural scenes. These variations were created by digitizing the images, subsequently determining their color point distributions in the CIELUV color space and finally

  12. Naturalness and image quality: Chroma and hue variation in color images of natural scenes

    NARCIS (Netherlands)

    Ridder, de H.; Blommaert, F.J.J.; Fedorovskaya, E.A.; Eschbach, R.; Braun, K.

    1997-01-01

    The relation between perceptual image quality and naturalness was investigated by varying the colorfulness and hue of color images of natural scenes. These variations were created by digitizing the images, subsequently determining their color point distributions in the CIELUV color space and

  13. Color imaging fundamentals and applications

    CERN Document Server

    Reinhard, Erik; Oguz Akyuz, Ahmet; Johnson, Garrett

    2008-01-01

    This book provides the reader with an understanding of what color is, where color comes from, and how color can be used correctly in many different applications. The authors first treat the physics of light and its interaction with matter at the atomic level, so that the origins of color can be appreciated. The intimate relationship between energy levels, orbital states, and electromagnetic waves helps to explain why diamonds shimmer, rubies are red, and the feathers of the Blue Jay are blue. Then, color theory is explained from its origin to the current state of the art, including image captu

  14. Animal Detection in Natural Images: Effects of Color and Image Database

    Science.gov (United States)

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced bigger N1 than animal stimuli both in the COREL and ANID databases. This result indicates ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  15. Animal detection in natural images: effects of color and image database.

    Directory of Open Access Journals (Sweden)

    Weina Zhu

    Full Text Available The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced bigger N1 than animal stimuli both in the COREL and ANID databases. This result indicates ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.

  16. Brain MR image segmentation using NAMS in pseudo-color.

    Science.gov (United States)

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns to keep the image content and largely reduce the data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which can improve the precision of segmentation as well as direct visual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method not only segments more precisely but also saves storage.
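
    The general pseudo-color-then-segment idea (not the NAMS representation itself) can be illustrated with a short sketch: map the gray-scale slice through a color map and cluster the resulting RGB values. The colormap choice, cluster count, and synthetic test slice are assumptions for illustration.

```python
import numpy as np
from matplotlib import colormaps
from sklearn.cluster import KMeans

def pseudocolor_segment(gray, n_regions=4):
    """gray: 2-D float array scaled to [0, 1]; returns an integer label image."""
    rgb = colormaps["jet"](gray)[..., :3]      # pseudo-color transformation (drop alpha)
    km = KMeans(n_clusters=n_regions, n_init=10, random_state=0)
    labels = km.fit_predict(rgb.reshape(-1, 3))
    return labels.reshape(gray.shape)

# Synthetic stand-in for an MR slice: an intensity ramp with additive noise.
slice_ = np.clip(np.tile(np.linspace(0, 1, 128), (128, 1))
                 + 0.05 * np.random.randn(128, 128), 0, 1)
segmentation = pseudocolor_segment(slice_, n_regions=3)
```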

  17. Pseudo-color processing in nuclear medical image

    International Nuclear Information System (INIS)

    Wang Zhiqian; Jin Yongjie

    1992-01-01

    The application of pseudo-color technology in nuclear medical image processing is discussed. It covers the selection of the number of pseudo-colors, methods of realizing the pseudo-color transformation, the pseudo-color transformation function, and operations on that function.

  18. CFA-aware features for steganalysis of color images

    Science.gov (United States)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.

  19. A framework for interactive image color editing

    KAUST Repository

    Musialski, Przemyslaw; Cui, Ming; Ye, Jieping; Razdan, Anshuman; Wonka, Peter

    2012-01-01

    We propose a new method for interactive image color replacement that creates smooth and naturally looking results with minimal user interaction. Our system expects as input a source image and rawly scribbled target color values and generates high quality results in interactive rates.

  20. Utilization of Multispectral Images for Meat Color Measurements

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Lindbjerg; Carstensen, Jens Michael

    2013-01-01

    This short paper describes how the use of multispectral imaging for color measurement can be utilized in an efficient and descriptive way for meat scientists. The basis of the study is meat color measurements performed with a multispectral imaging system as well as with a standard colorimeter...... of color and color variance than what is obtained by the standard colorimeter....

  1. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy and between-class variance based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
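
    The between-class-variance (Otsu) criterion referred to above can be written as a short per-channel sketch; the entropy criterion and the color-space comparison from the study are not reproduced, and the function names are illustrative.

```python
import numpy as np

def otsu_threshold(channel):
    """Threshold maximizing the between-class variance for one 8-bit channel."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # probability of the "low" class
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

def segment_rgb(img):
    """Binarize each channel of an 8-bit RGB image at its own optimal threshold."""
    return np.stack([img[..., c] > otsu_threshold(img[..., c]) for c in range(3)],
                    axis=-1)
```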

  2. Quantifying the effect of colorization enhancement on mammogram images

    Science.gov (United States)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current radiological displays provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250 - 1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale image, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition, and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.

  3. A framework for interactive image color editing

    KAUST Repository

    Musialski, Przemyslaw

    2012-11-08

    We propose a new method for interactive image color replacement that creates smooth and naturally looking results with minimal user interaction. Our system expects as input a source image and rawly scribbled target color values and generates high quality results in interactive rates. To achieve this goal we introduce an algorithm that preserves pairwise distances of the signatures in the original image and simultaneously maps the color to the user defined target values. We propose efficient sub-sampling in order to reduce the computational load and adapt semi-supervised locally linear embedding to optimize the constraints in one objective function. We show the application of the algorithm on typical photographs and compare the results to other color replacement methods. © 2012 Springer-Verlag Berlin Heidelberg.

  4. The Global and Local Characters of Mars Perihelion Cloud Trails

    Science.gov (United States)

    Clancy, R. T.; Wolff, M. J.; Smith, M. D.; Cantor, B. A.; Spiga, A.

    2014-12-01

    We present the seasonal and spatial distribution of Mars perihelion cloud trails as mapped from Mars Reconnaissance Orbiter (MRO) MARCI (Mars Color Imager) imaging observations in 2 ultraviolet and 3 visible filters. The extended 2007-2013 period of MARCI daily global image maps reveals the widespread distribution of these high altitude clouds, which are somewhat paradoxically associated with specific surface regions. They appear as longitudinally extended (300-700 km) cloud trails with distinct leading plumes of substantial ice cloud optical depths (0.02-0.2) for such high altitudes of occurrence (40-50 km, from cloud surface shadow measurements). These plumes generate small ice particles (Reff~1 to reflect locally elevated mesospheric water ice formation that may impact the global expression of mesospheric water ice aerosols.

  5. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump them together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful to mitigate the effects of color variation in pathology images on subsequent quantitative analysis.

  6. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  7. Color Processing using Max-trees : A Comparison on Image Compression

    NARCIS (Netherlands)

    Tushabe, Florence; Wilkinson, M.H.F.

    2012-01-01

    This paper proposes a new method of processing color images using mathematical morphology techniques. It adapts the Max-tree image representation to accommodate color and other vectorial images. The proposed method introduces three new ways of transforming the color image into a gray scale image

  8. First CaSSIS Colour Images of Mars

    Science.gov (United States)

    Alfred, M.; Pommerol, A.; Thomas, N.; Cremonese, G.

    2017-12-01

    The Colour and Stereo Surface Imaging System (CaSSIS) on board ESA's ExoMars Trace Gas Orbiter acquired its first images of the surface of Mars on the 22nd and 26th of November, 2016. This commissioning campaign on the initial capture orbit was highly successful, allowing us to test the instrument, establish its performance and collect detailed images of the surface. Many of them have been publicly released within days following acquisition. These images and other commissioning data have demonstrated that the capabilities of the instrument are fully in line with expectations. Although a colour image of Phobos produced from observations acquired on the 26th of November was rapidly released, the calibration and production of colour images of the surface of Mars proved to be more challenging. Having fixed technical issues and acquired and processed the necessary in-flight calibration data, we have recently recalibrated the whole dataset, significantly improving the quality of the data and allowing us, for the first time, to produce high-quality colour images of the surface of Mars with CaSSIS data. The absolute calibration of the instrument is currently being verified using stellar observations, but the values of reflectivity obtained in each of the four colour channels for the surfaces of Mars and Phobos already show good consistency with other orbital data. The timing of CaSSIS acquisitions is very accurate and results in good colour matching, as already verified on-ground during the calibration campaign. The first few images acquired on the 22nd of November, shortly after TGO crossed the morning terminator, show unique views of the dusty terrains of the Tharsis region with solar incidence angles ranging between 60° and 80°. Comparison with images of the same areas acquired at later local times by other orbiters shows intriguing differences, related in particular to the brightness and colour of the floor of dust-filled craters that look bluer in the morning than in the

  9. Hiding Information Using different lighting Color images

    Science.gov (United States)

    Majead, Ahlam; Awad, Rash; Salman, Salema S.

    2018-05-01

    The choice of host medium for the secret message is one of the important principles for designers of a steganography method. In this study, the most suitable color image for carrying a secret image was investigated. The steganography approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method introduces lossless and unnoticeable changes in the contrast of the carrier color image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim of the study was to examine the process of masking data in colored images with different light intensities. The effect of the masking process was examined on images classified by minimum distance and by the amount of noise and distortion in the image, together with the histogram and statistical characteristics of the cover image. The results showed the efficient use of images taken at different light intensities for hiding data with least-significant-bit substitution. The method succeeded in concealing textual data without distorting the original (low-light) image through the concealment process. A digital image segmentation technique was used to distinguish small masked areas; smooth homogeneous areas are less affected by hiding than brightly lit areas. Dark color images can thus be used to send a secret message between two persons for covert communication with good security.
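
    The wavelet stage (LWT) of the scheme above is omitted here; the sketch below illustrates only plain least-significant-bit substitution in the spatial domain, with a random cover image and payload as stand-ins.

```python
import numpy as np

def embed_lsb(cover, payload_bits):
    """Hide a bit sequence in the least significant bits of an 8-bit image."""
    flat = cover.flatten().copy()
    bits = np.asarray(payload_bits, dtype=np.uint8)
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return stego.flatten()[:n_bits] & 1

# Round trip on a random stand-in cover image and payload.
cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
secret = np.random.randint(0, 2, 100).astype(np.uint8)
stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego, secret.size), secret)
```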

  10. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    Science.gov (United States)

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
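
    The paper's reconstruction handles an interleaved two-color microgrid; shown below is only the textbook relation between intensities behind 0/45/90/135 degree analyzers and the linear Stokes parameters, assuming co-registered full-resolution intensity images.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind 0/45/90/135 deg analyzers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90                             # horizontal vs. vertical
    s2 = i45 - i135                           # +45 deg vs. -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-9, None)  # degree of linear polarization
    return s0, s1, s2, dolp
```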

  11. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and the improvement of storage technology and digital imaging equipment, more image resources are available to us than ever, raising the question of how to locate a desired image quickly and accurately. The early approach was to search a database by keywords, but this becomes impractical as the number of images grows. To overcome the limitations of traditional keyword search, content-based image retrieval technology emerged and is now an active research subject. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of color histogram features is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. The basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information, confirming the corresponding weights. The system calculates the distance between the corresponding blocks chosen by the user; the remaining blocks are merged into partial overall histograms, and their distance is calculated as well. All the distances are then accumulated into the final distance between two pictures. The partition-overall histogram thus combines the advantages of the two methods above: choosing blocks makes the feature contain more spatial information, which can improve performance; the distances between partition-overall histograms
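
    A minimal sketch of the partition-histogram idea: split the image into a grid of blocks, compute a per-block RGB histogram, and accumulate block-to-block distances. The grid size, bin count, L1 distance, and uniform block weighting are simplifying assumptions; the user-selected block weights and the merged overall histogram are not modeled.

```python
import numpy as np

def block_histograms(img, grid=(4, 4), bins=8):
    """RGB histograms (`bins` per channel) for each cell of a grid partition."""
    h, w = img.shape[:2]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = img[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogramdd(block.reshape(-1, 3).astype(float),
                                     bins=(bins, bins, bins),
                                     range=[(0, 256)] * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def image_distance(img_a, img_b):
    """Sum of L1 distances between corresponding block histograms."""
    return np.abs(block_histograms(img_a) - block_histograms(img_b)).sum()
```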

  12. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...... performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
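
    Assuming each extracted region's color distribution is summarized by a Gaussian (mean and covariance), the Jeffreys-Matusita distance entering the similarity measure can be computed from the Bhattacharyya distance as below; the region extraction and the estimation of the weight coefficients are not shown.

```python
import numpy as np

def jeffreys_matusita(mu1, cov1, mu2, cov2):
    """JM distance between two Gaussian color distributions (range 0..sqrt(2))."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    # Bhattacharyya distance between the two Gaussians
    b = (0.125 * diff @ np.linalg.solve(cov, diff)
         + 0.5 * np.log(np.linalg.det(cov)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return np.sqrt(2.0 * (1.0 - np.exp(-b)))
```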

  13. Color Segmentation of Homogeneous Areas on Colposcopical Images

    Directory of Open Access Journals (Sweden)

    Kosteley Yana

    2016-01-01

    Full Text Available The article provides an analysis of image processing and color segmentation applied to the problem of selecting homogeneous regions according to the parameters of a color model. Image processing methods such as the Gaussian filter, median filter, histogram equalization, and mathematical morphology are considered. A segmentation algorithm based on the color component parameters is presented, followed by isolation of the resulting connected components of a binary segmentation mask. The analysis of the methods was performed on colposcopic images.

  14. Obtention of tumor volumes in PET images stacks using techniques of colored image segmentation

    International Nuclear Information System (INIS)

    Vieira, Jose W.; Lopes Filho, Ferdinand J.; Vieira, Igor F.

    2014-01-01

    This work demonstrates step by step how to segment color images of the chest of an adult in order to separate the tumor volume without significantly changing the values of the R (red), G (green) and B (blue) components of the pixel colors. To obtain information that allows building a color map, one needs to segment and classify the colors present in appropriate intervals in the images. The segmentation technique used is to select a small rectangle with color samples in a given region and then erase the other regions of the image with a specific color called the 'rubber'. The tumor region was segmented in one of the available images and the procedure is displayed in tutorial format. All necessary computational tools have been implemented in DIP (Digital Image Processing), software developed by the authors. The results obtained, in addition to permitting the construction of the color map of the distribution of activity concentration in PET images, will also be useful in future work to insert tumors into voxel phantoms in order to perform dosimetric assessments.

  15. MARS spectral molecular imaging of lamb tissue: data collection and image analysis

    CERN Document Server

    Aamir, R; Bateman, C.J.; Butler, A.P.H.; Butler, P.H.; Anderson, N.G.; Bell, S.T.; Panta, R.K.; Healy, J.L.; Mohr, J.L.; Rajendran, K.; Walsh, M.F.; Ruiter, N.de; Gieseg, S.P.; Woodfield, T.; Renaud, P.F.; Brooke, L.; Abdul-Majid, S.; Clyne, M.; Glendenning, R.; Bones, P.J.; Billinghurst, M.; Bartneck, C.; Mandalika, H.; Grasset, R.; Schleich, N.; Scott, N.; Nik, S.J.; Opie, A.; Janmale, T.; Tang, D.N.; Kim, D.; Doesburg, R.M.; Zainon, R.; Ronaldson, J.P.; Cook, N.J.; Smithies, D.J.; Hodge, K.

    2014-01-01

    Spectral molecular imaging is a new imaging technique able to discriminate and quantify different components of tissue simultaneously at high spatial and high energy resolution. Our MARS scanner is an x-ray based small animal CT system designed to be used in the diagnostic energy range (20 to 140 keV). In this paper, we demonstrate the use of the MARS scanner, equipped with the Medipix3RX spectroscopic photon-processing detector, to discriminate fat, calcium, and water in tissue. We present data collected from a sample of lamb meat including bone as an illustrative example of human tissue imaging. The data is analyzed using our 3D Algebraic Reconstruction Algorithm (MARS-ART) and by material decomposition based on a constrained linear least squares algorithm. The results presented here clearly show the quantification of lipid-like, water-like and bone-like components of tissue. However, it is also clear to us that better algorithms could extract more information of clinical interest from our data. Because we ...
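
    The material-decomposition step described above can be sketched per voxel as a constrained (non-negative) linear least-squares problem. The attenuation matrix values below are invented placeholders for illustration only and are not MARS calibration data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical effective attenuation of water-, lipid- and bone-like materials
# in four energy bins (rows: bins, columns: materials). Real coefficients come
# from scanner calibration, not from this sketch.
A = np.array([[0.20, 0.18, 1.10],
              [0.18, 0.15, 0.70],
              [0.17, 0.14, 0.45],
              [0.16, 0.13, 0.30]])

def decompose_voxel(mu_bins):
    """Non-negative least-squares estimate of material fractions for one voxel."""
    fractions, _residual = nnls(A, np.asarray(mu_bins, float))
    return fractions

print(decompose_voxel([0.45, 0.33, 0.27, 0.22]))
```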

  16. New feature of the neutron color image intensifier

    Science.gov (United States)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-06-01

    We developed prototype neutron color image intensifiers with high-sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), a terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitive color CCD camera, higher sensitivity and wider dynamic range could be simultaneously attained than that of the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10⁸ n/cm²/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena with a 30 frame/s video picture. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  17. New feature of the neutron color image intensifier

    International Nuclear Information System (INIS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-01-01

    We developed prototype neutron color image intensifiers with high-sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), a terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitive color CCD camera, higher sensitivity and wider dynamic range could be simultaneously attained than that of the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10⁸ n/cm²/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena with a 30 frame/s video picture. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  18. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  19. Scene recognition and colorization for vehicle infrared images

    Science.gov (United States)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology for driving assistance systems, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and an MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects that appear. The results show that the strategy emphasizes important information in the IR images for human vision and could be used to broaden the application of IR images for vehicle driving.

  20. Curiosity’s robotic arm-mounted Mars Hand Lens Imager (MAHLI): Characterization and calibration status

    Science.gov (United States)

    Edgett, Kenneth S.; Caplinger, Michael A.; Maki, Justin N.; Ravine, Michael A.; Ghaemi, F. Tony; McNair, Sean; Herkenhoff, Kenneth E.; Duston, Brian M.; Wilson, Reg G.; Yingst, R. Aileen; Kennedy, Megan R.; Minitti, Michelle E.; Sengstacken, Aaron J.; Supulver, Kimberley D.; Lipkaman, Leslie J.; Krezoski, Gillian M.; McBride, Marie J.; Jones, Tessa L.; Nixon, Brian E.; Van Beek, Jason K.; Krysak, Daniel J.; Kirk, Randolph L.

    2015-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel, Bayer pattern color CCD camera with a macro lens mounted on a rotatable turret at the end of the 2-meters-long robotic arm aboard the Mars Science Laboratory rover, Curiosity. The camera includes white and longwave ultraviolet LEDs to illuminate targets at night. Onboard data processing services include focus stack merging and data compression. Here we report on the results and status of MAHLI characterization and calibration, covering the pre-launch period from August 2008 through the early months of the extended surface mission through February 2015. Since landing in Gale crater in August 2012, MAHLI has been used for a wide range of science and engineering applications, including distinction among a variety of mafic, siliciclastic sedimentary rocks; investigation of grain-scale rock, regolith, and eolian sediment textures and structures; imaging of the landscape; inspection and monitoring of rover and science instrument hardware concerns; and supporting geologic sample selection, extraction, analysis, delivery, and documentation. The camera has a dust cover and focus mechanism actuated by a single stepper motor. The transparent cover was coated with a thin film of dust during landing, thus MAHLI is usually operated with the cover open. The camera focuses over a range from a working distance of 2.04 cm to infinity; the highest resolution images are at 13.9 µm per pixel; images acquired from 6.9 cm show features at the same scale as the Mars Exploration Rover Microscopic Imagers at 31 µm/pixel; and 100 µm/pixel is achieved at a working distance of ~26.5 cm. The very highest resolution images returned from Mars permit distinction of high contrast silt grains in the 30–40 µm size range. MAHLI has performed well; the images need no calibration in order to achieve most of the investigation’s science and engineering goals. The positioning and repeatability of robotic arm placement of the MAHLI camera head have

  1. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    Science.gov (United States)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  2. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  3. An imaging colorimeter for noncontact tissue color mapping.

    Science.gov (United States)

    Balas, C

    1997-06-01

    There has been considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of the color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and the limitations present in conventional colorimeters. This system was used for monitoring blood supply changes of psoriatic plaques that have undergone psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and for the objective evaluation of treatment effectiveness.

  4. Color and neighbor edge directional difference feature for image retrieval

    Institute of Scientific and Technical Information of China (English)

    Chaobing Huang; Shengsheng Yu; Jingli Zhou; Hongwei Lu

    2005-01-01

    A novel image feature termed the neighbor edge directional difference unit histogram is proposed, in which the neighbor edge directional difference unit is defined and computed for every pixel in the image and used to generate the histogram. This histogram and the color histogram are used as feature indexes to retrieve color images. The feature is invariant to image scaling and translation and has more descriptive power for natural color images. Experimental results show that the feature can achieve better retrieval performance than other color-spatial features.

  5. Color Histogram Diffusion for Image Enhancement

    Science.gov (United States)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
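
    For reference, the GHE baseline that the histogram-diffusion method generalizes can be written in a few lines; this is the standard cumulative-distribution mapping for an 8-bit image, not the vistogram formulation itself.

```python
import numpy as np

def histogram_equalize(gray):
    """Standard grayscale histogram equalization for an 8-bit image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)          # old level -> new level
    return lut[gray]
```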

  6. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces, are being judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  7. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces, are being judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  8. Tomographic Particle Image Velocimetry Using Colored Shadow Imaging

    KAUST Repository

    Alarfaj, Meshal K.

    2016-02-01

    Tomographic Particle Image Velocimetry Using Colored Shadow Imaging, by Meshal K Alarfaj, Master of Science, King Abdullah University of Science & Technology, 2015. Tomographic particle image velocimetry (PIV) is a recent PIV method capable of reconstructing the full 3D velocity field of complex flows within a 3-D volume. Over nearly the last decade, it has become the most powerful tool for the study of turbulent velocity fields and promises great advancements in the study of fluid mechanics. Among the early published studies, a good number of works have suggested enhancements and optimizations of different aspects of this technique to improve its effectiveness. One major aspect, which is the core of the present work, is reducing the cost of the tomographic PIV setup. In this thesis, we attempt to reduce this cost by using an experimental setup exploiting four commercial digital still cameras in combination with low-cost light-emitting diodes (LEDs). We use two different colors to distinguish the two light pulses. By using colored shadows with red and green LEDs, we can identify the particle locations within the measurement volume at the two different times, thereby allowing calculation of the velocities. The present work tests this technique on the flow patterns of a jet ejected from a tube in a water tank. Results from the image processing are presented and challenges discussed.
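
    The color-separation step described above can be illustrated with a short sketch that splits a doubly exposed colored-shadow frame into the two exposure times via the red and green channels; the threshold rule, array names, and mean-based normalization are assumptions for illustration.

```python
import numpy as np

def split_color_exposures(frame_rgb, shadow_factor=0.5):
    """Separate particle shadows cast by the red and green LED pulses.

    frame_rgb: float array (H, W, 3) in [0, 1]; shadows appear as darker
    regions in the channel of the LED that cast them.
    """
    red, green = frame_rgb[..., 0], frame_rgb[..., 1]
    particles_t0 = red < shadow_factor * red.mean()      # shadows from pulse 1 (red LED)
    particles_t1 = green < shadow_factor * green.mean()  # shadows from pulse 2 (green LED)
    return particles_t0, particles_t1
```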

  9. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) has been extended to color images. In addition, this paper deals with low contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure that is proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD and DWT based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.

  10. Development of multi-color scintillator based X-ray image intensifier

    International Nuclear Information System (INIS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi

    2004-01-01

    A multi-color scintillator based high-sensitivity, wide dynamic range and long-life X-ray image intensifier has been developed. A europium-activated Y2O2S scintillator, emitting red, green and blue photons of different intensities, is utilized as the output fluorescent screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, the sensitivity of the red color component becomes six times higher than that of a conventional image intensifier. Simultaneous emission of a moderate green color and a weak blue color covers different sensitivity regions, which widens the dynamic range by nearly two orders of magnitude. With this image intensifier, it is possible to simultaneously image complex objects whose X-ray transmission varies widely, from paper, water or plastic to heavy metals. This high-sensitivity intensifier, operated at lower X-ray exposure, causes less degradation of the scintillator materials and less colorization of the output screen glass, and thus achieves a longer lifetime. This color scintillator based image intensifier is being introduced for X-ray inspection in various fields.

  11. Correlating multispectral imaging and compositional data from the Mars Exploration Rovers and implications for Mars Science Laboratory

    Science.gov (United States)

    Anderson, Ryan B.; Bell, James F.

    2013-03-01

    In an effort to infer compositional information about distant targets based on multispectral imaging data, we investigated methods of relating Mars Exploration Rover (MER) Pancam multispectral remote sensing observations to in situ alpha particle X-ray spectrometer (APXS)-derived elemental abundances and Mössbauer (MB)-derived abundances of Fe-bearing phases at the MER field sites in Gusev crater and Meridiani Planum. The majority of the partial correlation coefficients between these data sets were not statistically significant. Restricting the targets to those that were abraded by the rock abrasion tool (RAT) led to improved Pearson’s correlations, most notably between the red-blue ratio (673 nm/434 nm) and Fe3+-bearing phases, but partial correlations were not statistically significant. Partial Least Squares (PLS) calculations relating Pancam 11-color visible to near-IR (VNIR; ∼400-1000 nm) “spectra” to APXS and Mössbauer element or mineral abundances showed generally poor performance, although the presence of compositional outliers led to improved PLS results for data from Meridiani. When the Meridiani PLS model for pyroxene was tested by predicting the pyroxene content of Gusev targets, the results were poor, indicating that the PLS models for Meridiani are not applicable to data from other sites. Soft Independent Modeling of Class Analogy (SIMCA) classification of Gusev crater data showed mixed results. Of the 24 Gusev test regions of interest (ROIs) with known classes, 11 had >30% of the pixels in the ROI classified correctly, while others were mis-classified or unclassified. k-Means clustering of APXS and Mössbauer data was used to assign Meridiani targets to compositional classes. The clustering-derived classes corresponded to meaningful geologic and/or color unit differences, and SIMCA classification using these classes was somewhat successful, with >30% of pixels correctly classified in 9 of the 11 ROIs with known classes. This work shows that
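
    To make the statistical pipeline concrete, here is a hedged sketch (not the authors' code; the arrays, band count and fold count are placeholders) of cross-validated Partial Least Squares regression relating multispectral band values to an elemental abundance, in the spirit of the Pancam-to-APXS comparison described above.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.random((40, 11))   # placeholder: 40 targets x 11 Pancam band reflectances
      y = rng.random(40)         # placeholder: e.g. APXS-derived FeO wt.% per target

      pls = PLSRegression(n_components=3)
      y_hat = cross_val_predict(pls, X, y, cv=5).ravel()   # held-out predictions
      rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
      print(f"cross-validated RMSE on placeholder data: {rmse:.3f}")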

  12. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    Science.gov (United States)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is produced, while a plane wave is used as the reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.

  13. Preparing Colorful Astronomical Images and Illustrations

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.

  14. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity and high complexity. It then carries out image feature extraction and finally establishes a function between the complexity value and the color characteristic model. The experimental results show that this kind of evaluation method can objectively reconstruct the complexity of the image from the image features. The results obtained by the proposed method are in good agreement with human visual perception of complexity, so the color image complexity measure has a certain reference value.

  15. 'Clovis' in Color

    Science.gov (United States)

    2004-01-01

    [Figure 1 removed; caption follows.] This approximate true-color image taken by the Mars Exploration Rover Spirit shows the rock outcrop dubbed 'Clovis.' The rock was discovered to be softer than other rocks studied so far at Gusev Crater after the rover easily ground a hole into it with its rock abrasion tool. The image was taken through the 750-, 530- and 480-nanometer filters of the rover's panoramic camera on sol 217 (August 13, 2004). Elemental Trio Found in 'Clovis': Figure 1 also shows that the interior of the rock dubbed 'Clovis' contains higher concentrations of sulfur, bromine and chlorine than the basaltic, or volcanic, rocks studied so far at Gusev Crater. The data were taken by the Mars Exploration Rover Spirit's alpha particle X-ray spectrometer after the rover dug into Clovis with its rock abrasion tool. The findings might indicate that this rock was chemically altered, and that fluids once flowed through the rock, depositing these elements.

  16. The Topography of Mars: Understanding the Surface of Mars Through the Mars Orbiter Laser Altimeter

    Science.gov (United States)

    Derby, C. A.; Neumann, G. A.; Sakimoto, S. E.

    2001-12-01

    The Mars Orbiter Laser Altimeter has been orbiting Mars since 1997 and has measured the topography of Mars with a meter of vertical accuracy. This new information has improved our understanding of both the surface and the interior of Mars. The topographic globe and the labeled topographic map of Mars illustrate these new data in a format that can be used in a classroom setting. The map is color shaded to show differences in elevation on Mars, presenting Mars with a different perspective than traditional geological and geographic maps. Through the differences in color, students can see Mars as a three-dimensional surface and will be able to recognize features that are invisible in imagery. The accompanying lesson plans are designed for middle school science students and can be used both to teach information about Mars as a planet and Mars in comparison to Earth, fitting both the solar system unit and the Earth science unit in a middle school curriculum. The lessons are referenced to the National Benchmark standards for students in grades 6-8 and cover topics such as Mars exploration, the Mars Orbiter Laser Altimeter, resolution and powers of 10, gravity, craters, seismic waves and the interior structure of a planet, isostasy, and volcanoes. Each lesson is written in the 5 E format and includes a student content activity and an extension showing current applications of Mars and MOLA data. These activities can be found at http://ltpwww.gsfc.nasa.gov/education/resources.html. Funding for this project was provided by the Maryland Space Grant Consortium and the MOLA Science Team, Goddard Space Flight Center.

  17. Finding text in color images

    Science.gov (United States)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB values and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
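
    As a rough illustration of the clustering idea described above (an assumption-laden sketch, not the authors' improved algorithm), the snippet below clusters pixels on joint RGB and spatial features with k-means; the spatial_weight parameter is a made-up knob controlling how strongly proximity influences the grouping.

      import numpy as np
      from sklearn.cluster import KMeans

      def color_spatial_clusters(rgb, n_clusters=8, spatial_weight=0.5):
          """rgb: uint8 array (H, W, 3). Returns a label map of shape (H, W)."""
          h, w, _ = rgb.shape
          yy, xx = np.mgrid[0:h, 0:w]
          feats = np.column_stack([
              rgb.reshape(-1, 3).astype(np.float64) / 255.0,   # color, scaled to [0, 1]
              spatial_weight * (xx.ravel() / w),                # normalized x position
              spatial_weight * (yy.ravel() / h),                # normalized y position
          ])
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
          return labels.reshape(h, w)

      # Candidate text regions can then be found by connected-component analysis per cluster.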

  18. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram of natural color images. The observed behaviors are compared with those of reference models with known multifractal properties. For this purpose we use synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organizations stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful for various tasks of digital image processing, for instance modeling, classification, and indexing.

  19. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    Science.gov (United States)

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for the enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method highlights the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, structure similarity index, mean structure similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related methods.

  20. Estimation of color modification in digital images by CFA pattern change.

    Science.gov (United States)

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.

  1. IDENTIFYING SURFACE CHANGES ON HRSC IMAGES OF THE MARS SOUTH POLAR RESIDUAL CAP (SPRC)

    Directory of Open Access Journals (Sweden)

    A. R. D. Putri

    2016-06-01

    Full Text Available The surface of Mars has been an object of interest for planetary research since the launch of Mariner 4 in 1964. Since then, different cameras such as the Viking Visual Imaging Subsystem (VIS), the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC), and the Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) have been imaging its surface at ever higher resolution. The High Resolution Stereo Camera (HRSC) on board the European Space Agency (ESA) Mars Express has been imaging the Martian surface from 25th December 2003 until the present day. HRSC has covered 100 % of the surface of Mars, about 70 % of the surface with panchromatic images at 10-20 m/pixel, and about 98 % at better than 100 m/pixel (Neukum et al., 2004), including the polar regions of Mars. The Mars polar regions have recently been studied intensively by analysing images taken by the Mars Express and MRO missions (Plaut et al., 2007). The South Polar Residual Cap (SPRC) does not change very much in volume overall, but there are numerous examples of dynamic phenomena associated with seasonal changes in the atmosphere. In particular, we can examine the time variation of layers of solid carbon dioxide and water ice with dust deposition (Bibring, 2004), spider-like channels (Piqueux et al., 2003) and so-called Swiss Cheese Terrain (Titus et al., 2004). Because of seasonal changes each Martian year, due to the sublimation and deposition of water and CO2 ice in the Martian south polar region, clearly identifiable surface changes occur in this otherwise permanently icy region. In this research, good quality HRSC images of the Mars South Polar region are processed based on previous identification of the optimal coverage of clear surfaces (Campbell et al., 2015). HRSC images of the Martian South Pole are categorized in terms of quality, time, and location to find overlapping areas, processed into high quality Digital Terrain Models (DTMs) and

  2. FNTD radiation dosimetry system enhanced with dual-color wide-field imaging

    International Nuclear Information System (INIS)

    Akselrod, M.S.; Fomenko, V.V.; Bartz, J.A.; Ding, F.

    2014-01-01

    At high neutron and photon doses Fluorescent Nuclear Track Detectors (FNTDs) require operation in analog mode and the measurement results depend on individual crystal color center concentration (coloration). We describe a new method for radiation dosimetry using FNTDs, which includes non-destructive, automatic sensitivity calibration for each individual FNTD. In the method presented, confocal laser scanning fluorescent imaging of FNTDs is combined with dual-color wide field imaging of the FNTD. The calibration is achieved by measuring the color center concentration in the detector through fluorescence imaging and reducing the effect of diffuse reflection on the lapped surface of the FNTD by imaging with infra-red (IR) light. The dual-color imaging of FNTDs is shown to provide a good estimation of the detector sensitivity at high doses of photons and neutrons, where conventional track counting is impeded by track overlap. - Highlights: • New method and optical imaging head was developed for FNTD used at high doses. • Dual-color wide-field imaging used for color center concentration measurement. • Green fluorescence corrected by diffuse reflection used for sensitivity correction. • FNTD dose measurements performed in analog processing mode

  3. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Tominaga Shoji

    2008-01-01

    Full Text Available Abstract The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  4. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Plataniotis

    2008-05-01

    Full Text Available The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  5. Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer

    OpenAIRE

    Li, Chongyi; Guo, Jichang; Guo, Chunle

    2017-01-01

    Underwater vision suffers from severe effects due to selective attenuation and scattering when light propagates through water. Such degradation not only affects the quality of underwater images but limits the ability of vision tasks. Different from existing methods which either ignore the wavelength dependency of the attenuation or assume a specific spectral profile, we tackle color distortion problem of underwater image from a new view. In this letter, we propose a weakly supervised color tr...

  6. Exploring the use of memory colors for image enhancement

    Science.gov (United States)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, their basis, i.e., on-screen memory colors, is not appropriately investigated. In addition, the resulting adjustment methods developed are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  7. Information system for administrating and distributing color images through internet

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The information system for administrating and distributing color images through the Internet ensures the consistent replication of color images, their storage in an on-line database, and their predictable distribution by means of a digitally distributed flow based on the Windows platform and POD (Print On Demand) technology. The consistent replication of color images, independently of the parameters of the processing equipment and of the features of the programs composing the technological flow, is ensured by the standard color management system defined by the ICC (International Color Consortium), which is integrated into the Windows operating system and into the POD technology. The latter minimize the noticeable differences between the colors captured, displayed or printed by various replication devices and/or edited by various graphical applications. The system's integrated web application ensures the uploading of the color images into an on-line database and their administration and distribution among users via the Internet. To preserve the data expressed by the color images during their transfer along a digitally distributed flow, the software application includes an original tool ensuring the accurate replication of colors on computer displays or when printing them by means of various color printers or presses. For development and use, this application employs a hardware platform based on PC support and a competitive software platform based on the Windows operating system, the .NET development environment and the C# programming language. This information system is beneficial for creators and users of color images, since the success of printed or on-line (Internet) publications depends on the sizeable, predictable and accurate replication of the colors employed for the visual expression of information in every field of activity of modern society. The herein introduced information system enables all interested persons to access the

  8. Atmosphere Assessment for MARS Science Laboratory Entry, Descent and Landing Operations

    Science.gov (United States)

    Cianciolo, Alicia D.; Cantor, Bruce; Barnes, Jeff; Tyler, Daniel, Jr.; Rafkin, Scot; Chen, Allen; Kass, David; Mischna, Michael; Vasavada, Ashwin R.

    2013-01-01

    On August 6, 2012, the Mars Science Laboratory rover, Curiosity, successfully landed on the surface of Mars. The Entry, Descent and Landing (EDL) sequence was designed using atmospheric conditions estimated from mesoscale numerical models. The models, developed by two independent organizations (Oregon State University and the Southwest Research Institute), were validated against observations at Mars from three prior years. In the weeks and days before entry, the MSL "Council of Atmospheres" (CoA), a group of atmospheric scientists and modelers, instrument experts and EDL simulation engineers, evaluated the latest Mars data from orbiting assets including the Mars Reconnaissance Orbiter's Mars Color Imager (MARCI) and Mars Climate Sounder (MCS), as well as Mars Odyssey's Thermal Emission Imaging System (THEMIS). The observations were compared to the mesoscale models developed for EDL performance simulation to determine if a spacecraft parameter update was necessary prior to entry. This paper summarizes the daily atmosphere observations and comparison to the performance simulation atmosphere models. Options to modify the atmosphere model in the simulation to compensate for atmosphere effects are also presented. Finally, a summary of the CoA decisions and recommendations to the MSL project in the days leading up to EDL is provided.

  9. Formation of radiation images using photographic color film

    International Nuclear Information System (INIS)

    Kuge, Ken'ichi; Kobayashi, Takaharu; Hasegawa, Akira; Yasuda, Nakahiro; Kumagai, Hiroshi

    2001-01-01

    A new method to reveal the three-dimensional information of nuclear tracks in a nuclear emulsion layer was developed by the use of color photography. The tracks were represented with a color image in which different depths were indicated by different colors, and the three-dimensional information was obtained from color changes. We present the procedure for a self-made photographic coating and the development formula that can represent the color tracks clearly. (author)

  10. Perceptual quality of color images of natural scenes transformed in CIELUV color space

    NARCIS (Netherlands)

    Fedorovskaya, E.A.; Blommaert, F.J.J.; Ridder, de H.; Eschbach, R.; Braun, K.

    1997-01-01

    Transformations of digitized color images in perceptually-uniform CIELUV color space and their perceptual relevance were investigated. Chroma variation was chosen as the first step of a series of investigations into possible transformations (including lightness, hue-angle, chroma, etc.). To obtain

  11. Perceptual quality of color images of natural scenes transformed in CIELUV color space

    NARCIS (Netherlands)

    Fedorovskaya, E.A.; Blommaert, F.J.J.; Ridder, de H.

    1993-01-01

    Transformations of digitized color images in perceptually-uniform CIELUV color space and their perceptual relevance were investigated. Chroma variation was chosen as the first step of a series of investigations into possible transformations (including lightness, hue-angle, chroma, etc.). To obtain

  12. Multispectral Imaging of Meat Quality - Color and Texture

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup

    transformations to the CIELAB color space, the common color space within food science. The results show that meat color assessment with multispectral imaging is a strong alternative to the traditional colorimeter, i.e. the vision system overcomes some of the limitations that the colorimeter possesses. To mention one

  13. Segmentation of color images by chromaticity features using self-organizing maps

    Directory of Open Access Journals (Sweden)

    Farid García-Lamont

    2016-05-01

    Full Text Available Usually, the segmentation of color images is performed using cluster-based methods and the RGB space to represent the colors. The drawback of these methods is the need for a priori knowledge of the number of groups, or colors, in the image; besides, the RGB space is sensitive to the intensity of the colors. Humans can identify different sections within a scene by the chromaticity of its colors, as this is the feature humans employ to tell them apart. In this paper, we propose to emulate the human perception of color by training a self-organizing map (SOM) with samples of the chromaticity of different colors. The image to process is mapped to the HSV space because in this space the chromaticity is decoupled from the intensity, while in the RGB space this is not possible. Our proposal does not require knowing a priori the number of colors within a scene, and non-uniform illumination does not significantly affect the image segmentation. We present experimental results using some images from the Berkeley segmentation database, employing SOMs of different sizes; the images are segmented successfully using only chromaticity features.
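
    A minimal sketch of the idea, assuming a small SOM implemented directly in NumPy (the grid size, iteration count and the HSV conversion via matplotlib are choices of this sketch, not the paper's): train the map on hue-saturation samples and label each pixel by its best-matching unit. Note that hue wraps around at 0/1, which this toy version ignores.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def train_som(samples, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
          rng = np.random.default_rng(seed)
          gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
          coords = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)   # (K, 2) unit positions
          weights = rng.random((coords.shape[0], samples.shape[1]))          # (K, d) unit weights
          for t in range(iters):
              x = samples[rng.integers(len(samples))]
              bmu = np.argmin(((weights - x) ** 2).sum(axis=1))              # best-matching unit
              frac = t / iters
              lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3       # decaying schedules
              dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
              h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]                 # neighborhood function
              weights += lr * h * (x - weights)
          return weights

      def segment_by_chromaticity(rgb):
          """rgb: float array in [0, 1], (H, W, 3). Returns SOM-unit label map (modest image sizes)."""
          hs = rgb_to_hsv(rgb)[..., :2].reshape(-1, 2)                       # hue, saturation only
          w = train_som(hs[np.random.default_rng(1).choice(len(hs), 5000)])
          labels = np.argmin(((hs[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1)
          return labels.reshape(rgb.shape[:2])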

  14. Spatio-spectral color filter array design for optimal image recovery.

    Science.gov (United States)

    Hirakawa, Keigo; Wolfe, Patrick J

    2008-10-01

    In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array-a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and-after proving sub-optimality of a wide class of existing array patterns-provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity.

  15. Color image quality in projection displays: a case study

    Science.gov (United States)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College have been tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too many reflections and too much ambient light reach the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  16. Align and conquer: moving toward plug-and-play color imaging

    Science.gov (United States)

    Lee, Ho J.

    1996-03-01

    The rapid evolution of the low-cost color printing and image capture markets has precipitated a huge increase in the use of color imagery by casual end users on desktop systems, as opposed to traditional professional color users working with specialized equipment. While the cost of color equipment and software has decreased dramatically, the underlying system-level problems associated with color reproduction have remained the same, and in many cases are more difficult to address in a casual environment than in a professional setting. The proliferation of color imaging technologies so far has resulted in a wide availability of component solutions which work together poorly. A similar situation in the desktop computing market has led to the various `Plug-and-Play' standards, which provide a degree of interoperability between a range of products on disparate computing platforms. This presentation will discuss some of the underlying issues and emerging trends in the desktop and consumer digital color imaging markets.

  17. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    Science.gov (United States)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in the quaternion space. Quaternion numbers are four-dimensional hyper-complex numbers. The quaternion representation of a color image allows us to treat the color of the image as a single unit. In the quaternion approach to color image enhancement, each color is seen as a vector, which lets us see the merging effect of color due to the combination of the primary colors. Color images are conventionally processed by applying the respective algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with a different alpha value for each zone. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
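
    For orientation, the sketch below shows classical alpha-rooting on a single luminance channel with the ordinary 2-D DFT; the quaternion (2-D QDFT) and zonal variants described in the paper are not reproduced here, and the alpha value is an arbitrary choice of this sketch.

      import numpy as np

      def alpha_rooting(gray, alpha=0.92, eps=1e-8):
          """gray: 2-D float array. 0 < alpha < 1 boosts high-frequency detail."""
          F = np.fft.fft2(gray)
          mag = np.abs(F)
          F_enh = F * (mag + eps) ** (alpha - 1.0)     # magnitude -> |F|^alpha, phase unchanged
          out = np.real(np.fft.ifft2(F_enh))
          # rescale to [0, 1] for display
          return (out - out.min()) / (out.max() - out.min() + eps)

      # typical usage: enhanced = alpha_rooting(luminance_channel, alpha=0.9)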

  18. Automatic Detection of Changes on Mars Surface from High-Resolution Orbital Images

    Science.gov (United States)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2017-04-01

    Over the last 40 years Mars has been extensively mapped by several NASA and ESA orbital missions, generating a large image dataset comprising approximately 500,000 high-resolution images. While citizen science can be employed for training and verification, it is unsuitable for planetwide systematic change detection. In this work, we introduce a novel approach to planetary image change detection, which involves a batch-mode automatic change detection pipeline that identifies regions that have changed. This is tested in anger on tens of thousands of high-resolution images over the MC11 quadrangle [5], acquired by the CTX, HRSC, THEMIS-VIS and MOC-NA instruments [1]. We will present results which indicate a substantial level of activity in this region of Mars, including instances of dynamic natural phenomena that have not been cataloged in the planetary science literature before. We will demonstrate the potential and usefulness of such an automatic approach to planetary science change detection. Acknowledgments: The research leading to these results has received funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] P. Sidiropoulos and J.-P. Muller (2015) On the status of orbital high-resolution repeat imaging of Mars for the observation of dynamic surface processes. Planetary and Space Science, 117: 207-222. [2] O. Aharonson, et al. (2003) Slope streak formation and dust deposition rates on Mars. Journal of Geophysical Research: Planets, 108(E12): 5138. [3] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science, 333(6043): 740-743. [4] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on Mars from new impact craters. Science, 325(5948): 1674-1676. [5] K. Gwinner, et al. (2016) The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and

  19. A Simple Encryption Algorithm for Quantum Color Image

    Science.gov (United States)

    Li, Panchi; Zhao, Ya

    2017-06-01

    In this paper, a simple encryption scheme for quantum color images is proposed. Firstly, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basic states, with 8 qubits per value. Then, these 24 qubits are each transformed from a basic state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basic states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. The experimental results on a classical computer show that the proposed encryption scheme has better security.

  20. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    Science.gov (United States)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The difference becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  1. Image quality evaluation of medical color and monochrome displays using an imaging colorimeter

    Science.gov (United States)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    The purpose of this presentation is to demonstrate the means of examining the accuracy of image quality, with respect to MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum), of color displays and monochrome displays. Indications in the past were that color displays could affect clinical performance negatively compared to monochrome displays. Now colorimeters like the PM-1423 are available which have higher sensitivity and color accuracy than traditional cameras like CCD cameras. Reference (1) was not based on measurements made with a colorimeter. This paper focuses on measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays which were made with a colorimeter; after this meeting we will submit the data to an ROC study so that we can present the results at a future SPIE conference. Specifically, the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared at different digital driving levels (DDL) between the two medical displays. The imaging colorimeter: measurement of color image quality was done with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging. It uses

  2. Use of discrete chromatic space to tune the image tone in a color image mosaic

    Science.gov (United States)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is a very important problem. However, the main current approach is to transform the RGB color space into another color space, such as HIS (Hue, Intensity and Saturation), YIQ, LUV and so on. Virtually, it may not be a valid way to process a color airborne image in just one color space, because the electromagnetic wave is physically altered in every wave band, while the color image is perceived on the basis of psychological vision. Therefore, it is necessary to propose an approach in accord with both the physical transformation and psychological perception. An analysis of how to use the relative color spaces to process color airborne photos is discussed, and an application on how to tune the image tone in a color airborne image mosaic is introduced. As a practice, a complete approach to performing the mosaic on color airborne images by taking full advantage of the relative color spaces is discussed in the application.

  3. The structure and properties of color spaces and the representation of color images

    CERN Document Server

    Dubois, Eric

    2009-01-01

    This lecture describes the author's approach to the representation of color spaces and their use for color image processing. The lecture starts with a precise formulation of the space of physical stimuli (light). The model includes both continuous spectra and monochromatic spectra in the form of Dirac deltas. The spectral densities are considered to be functions of a continuous wavelength variable. This leads into the formulation of color space as a three-dimensional vector space, with all the associated structure. The approach is to start with the axioms of color matching for normal human vie

  4. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    Science.gov (United States)

    Kruse, F. A.

    1985-01-01

    The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.

  5. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ....... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm....

  6. Using color histogram normalization for recovering chromatic illumination-changed images.

    Science.gov (United States)

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
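
    A much-simplified sketch of the underlying idea follows (per-channel scaling plus translation estimated from first- and second-order statistics); this is an assumption of this sketch, not the paper's covariance/tensor-based estimator of the full affine transformation.

      import numpy as np

      def match_channel_stats(test, reference, eps=1e-8):
          """test, reference: float RGB arrays in [0, 1]. Returns the corrected test image."""
          out = np.empty_like(test)
          for c in range(3):
              t, r = test[..., c], reference[..., c]
              scale = (r.std() + eps) / (t.std() + eps)   # scaling term of the simplified model
              shift = r.mean() - scale * t.mean()         # translation term
              out[..., c] = np.clip(scale * t + shift, 0.0, 1.0)
          return out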

  7. Unsupervised color image segmentation using a lattice algebra clustering technique

    Science.gov (United States)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green- Blue (RGB) color space. The proposed technique is a two step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results including a brief numerical comparison with two other non-maximal variations of the same clustering technique.

  8. Naturalness and image quality : saturation and lightness variation in color images of natural scenes

    NARCIS (Netherlands)

    Ridder, de H.

    1996-01-01

    The relation between perceived image quality and naturalness was investigated by varying the colorfulness of natural images at various lightness levels. At each lightness level, subjects assessed perceived colorfulness, naturalness, and quality as a function of average saturation by means of direct

  9. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    Science.gov (United States)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

    Color image consistency has not yet been accomplished, except for the Digital Imaging and Communications in Medicine (DICOM) Supplement 100 for implementing a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises could not check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticity of typical colors (Red, Green, Blue, Green and White) is measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2 according to the sRGB standard. As most color monitors currently available for medical use have a maximum brightness much higher than 80 cd/m2, it does not seem appropriate to use the 80 cd/m2 level for calibration. Therefore, we propose that a new brightness standard should be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors is changed from 80 to 270 cd/m2 and the chromaticity values are compared at each brightness level. As a result, there are no significant differences in the chromaticity diagram when brightness levels are changed. In conclusion, chromaticity is close to the theoretical value after color calibration. Moreover, chromaticity does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at high brightness on current monitors.

  10. A novel quantum steganography scheme for color images

    Science.gov (United States)

    Li, Panchi; Liu, Xiande

    In quantum image steganography, embedding capacity and security are two important issues. This paper presents a novel quantum steganography scheme using color images as cover images. First, the secret information is divided into 3-bit segments, and then each 3-bit segment is embedded into the LSB of one color pixel in the cover image according to its own value and using Gray code mapping rules. Extraction is the inverse of embedding. We designed the quantum circuits that implement the embedding and extracting process. The simulation results on a classical computer show that the proposed scheme outperforms several other existing schemes in terms of embedding capacity and security.
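
    As a purely classical illustration of the embedding rule (the paper implements it with quantum circuits; the Gray-code mapping and 3-bits-per-pixel layout below follow the abstract's description, everything else, including names, is assumed):

      import numpy as np

      def to_gray(n):
          """Map a 3-bit binary value (0-7) to its Gray code."""
          return n ^ (n >> 1)

      def embed(cover, bits):
          """cover: uint8 array (H, W, 3); bits: iterable of 0/1 whose length is a multiple of 3
          and no longer than 3 * number of pixels."""
          img = cover.copy().reshape(-1, 3)
          segments = np.asarray(bits, dtype=np.uint8).reshape(-1, 3)
          for i, seg in enumerate(segments):
              value = to_gray(int(seg[0]) * 4 + int(seg[1]) * 2 + int(seg[2]))
              for c in range(3):                          # one Gray-coded bit per channel LSB
                  bit = (value >> (2 - c)) & 1
                  img[i, c] = (img[i, c] & 0xFE) | bit
          return img.reshape(cover.shape)

      # Extraction would reverse the steps: read the three LSBs, undo the Gray code.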

  11. A Hybrid DWT-SVD Image-Coding System (HDWTSVD) for Color Images

    Directory of Open Access Journals (Sweden)

    Humberto Ochoa

    2003-04-01

    Full Text Available In this paper, we propose the HDWTSVD system to encode color images. Before encoding, the color components (RGB are transformed into YCbCr. Cb and Cr components are downsampled by a factor of two, both horizontally and vertically, before sending them through the encoder. A criterion based on the average standard deviation of 8x8 subblocks of the Y component is used to choose DWT or SVD for all the components. Standard test images are compressed based on the proposed algorithm.
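
    A hedged sketch of the mode-decision step follows, using PyWavelets (pywt) for the Haar DWT. The threshold value, the number of retained singular values, and which regime gets DWT versus SVD are assumptions of this sketch, since the abstract does not specify them.

      import numpy as np
      import pywt

      def average_block_std(y, block=8):
          """Average standard deviation of non-overlapping block x block tiles of Y."""
          h, w = (y.shape[0] // block) * block, (y.shape[1] // block) * block
          blocks = y[:h, :w].reshape(h // block, block, w // block, block)
          return blocks.std(axis=(1, 3)).mean()

      def encode_luma(y, threshold=18.0, k=32):
          if average_block_std(y) > threshold:                 # assumed: detailed image -> DWT
              return ("dwt", pywt.dwt2(y, "haar"))             # (LL, (LH, HL, HH)) subbands
          u, s, vt = np.linalg.svd(y, full_matrices=False)     # assumed: smooth image -> SVD
          return ("svd", (u[:, :k], s[:k], vt[:k, :]))         # keep k singular triplets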

  12. Superimpose of images by appending two simple video amplifier circuits to color television

    International Nuclear Information System (INIS)

    Kojima, Kazuhiko; Hiraki, Tatsunosuke; Koshida, Kichiro; Maekawa, Ryuichi; Hisada, Kinichi.

    1979-01-01

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, various kinds of information, for example the degree of overlap and anatomical landmarks, which cannot be found in a single image, can often be found. In this paper the characteristics of our trial color television system for superimposing X-ray images and/or radionuclide images are described. This color television system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a 20-inch conventional color television to which only two simple video amplifier circuits are added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and it enhances the degree of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy. (author)
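
    A digital analogue of the analog superimposition described above can be sketched in a few lines (an illustration, not the original video hardware): place two registered grayscale images into separate color channels so that overlap and displacement show up as color differences.

      import numpy as np

      def superimpose_in_color(img_a, img_b):
          """img_a, img_b: 2-D float arrays in [0, 1] of the same shape."""
          return np.dstack([img_a, img_b, np.zeros_like(img_a)])  # A -> red, B -> green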

  13. Superimpose of images by appending two simple video amplifier circuits to color television

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R [Kanazawa Univ. (Japan). School of Paramedicine]; Hisada, K

    1979-09-01

    Images are very useful for obtaining diagnostic information in the medical field. By superimposing two or three images obtained from the same patient, information that cannot be found in a single image, such as the degree of overlap and anatomical landmarks, can often be revealed. This paper describes the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images. The system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits have been added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. The system is a very simple and economical color display and enhances the perception of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to locate tumors exactly. Furthermore, the system proved very useful for the color display of multinuclide scintigraphy.

  14. Hybridizing Differential Evolution with a Genetic Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    R. V. V. Krishna

    2016-10-01

    Full Text Available This paper proposes a hybrid of differential evolution and genetic algorithms to solve the color image segmentation problem. Clustering-based color image segmentation algorithms segment an image by clustering color and texture features, thereby obtaining accurate prototype cluster centers. In the proposed algorithm, the color features are obtained using the homogeneity model. A new texture feature named the Power Law Descriptor (PLD), a modification of the Weber Local Descriptor (WLD), is proposed and used as the texture feature for clustering. Genetic algorithms are competent at handling binary variables, while differential evolution is more efficient at handling real parameters. The texture feature is binary in nature and the color feature is real-valued, which suits the hybrid cluster-center optimization problem in image segmentation very well. Thus, in the proposed algorithm, the optimum texture feature centers are evolved using genetic algorithms, whereas the optimum color feature centers are evolved using differential evolution.
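
    A minimal sketch of the real-valued half of the hybrid is shown below: one generation of a standard DE/rand/1/bin update over a population of candidate colour cluster centres. The control parameters F and CR and the fitness function (for instance, the within-cluster distance of the colour features) are assumptions, not the paper's settings.

```python
import numpy as np

def de_generation(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation over real-valued colour cluster centres.
    pop: (NP, D) array with NP >= 4; fitness(x) returns a value to minimise."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    scores = np.array([fitness(x) for x in pop])
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True          # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= scores[i]:        # greedy selection
            new_pop[i] = trial
    return new_pop
```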

  15. Color-Image Classification Using MRFs for an Outdoor Mobile Robot

    Directory of Open Access Journals (Sweden)

    Moises Alencastre-Miranda

    2005-02-01

    Full Text Available In this paper, we suggest using color-image classification (in several phases) with Markov Random Fields (MRFs) in order to understand natural images of outdoor scenes for a mobile robot. We skip the preprocessing phase, obtaining the same results with better performance. In the segmentation phase, we implement a color segmentation method based on the average of the I3 color-space measure in small image cells obtained from a single split step. In the classification phase, an MRF is used to identify regions as one of three selected classes; here we consider both the intrinsic color features of the image and the neighborhood system between image cells. Finally, we use region growing and contextual information to correct misclassification errors. We have implemented and tested these phases with several images taken in our campus gardens. We include some results in off-line processing mode and in on-line execution mode on an outdoor mobile robot. The vision system has been used for reactive exploration in an outdoor environment.

  16. Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zhao Rentao

    2014-06-01

    Full Text Available There are significant differences in the imaging characteristics of infrared and color images, but their fusion can provide very good complementary information. In this paper, based on the characteristics of infrared and color images, the wavelet transform is first applied to the infrared image and to the luminance component of the color image. At each resolution level, the local regional variance is taken as the activity measure and the ratio of regional variances as the matching measure, and the fused coefficients are enhanced during integration; the fused image is then obtained by the final synthesis module and the multi-resolution inverse transform. The experimental results show that the fusion image obtained by the proposed method preserves the useful information of the original infrared image and the color information of the original color image better than the other methods. In addition, the fusion image has stronger adaptability and better visual effect.
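
    A minimal sketch of this style of wavelet fusion, assuming PyWavelets and SciPy are available, is given below. It averages the approximation bands and, for each detail band, keeps the coefficient with the larger local variance; the 3x3 variance window, the db2 wavelet, the two-level decomposition, and the omission of the variance-ratio matching measure are simplifications rather than the paper's exact rule.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_var(x, k=3):
    """Local variance in a k x k window (the 'regional variance' activity measure)."""
    m = uniform_filter(x, k)
    return uniform_filter(x * x, k) - m * m

def fuse_wavelet(ir, lum, wavelet="db2", level=2):
    """Fuse an infrared image with the luminance of a same-size color image."""
    c1 = pywt.wavedec2(ir.astype(float), wavelet, level=level)
    c2 = pywt.wavedec2(lum.astype(float), wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                       # average approximation band
    for d1, d2 in zip(c1[1:], c2[1:]):                    # detail bands per level
        fused.append(tuple(np.where(local_var(a) >= local_var(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)

# The fused luminance would then replace the Y channel of the color image
# (e.g. in YCbCr) before converting back to RGB for display.
```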

  17. MARS Spectral Imaging: From High-Energy Physics to a Biomedical Business

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Abstract: MARS spectral scanners provide colour X-ray images. Current MARS pre-clinical scanners enable researchers and clinicians to measure biochemical and physiological processes in specimens and in animal models of disease. The scanners grew out of a 10-year scientific collaboration between New Zealand and CERN. In parallel, a company, MARS Bioimaging Ltd, was founded to commercialise the technology by productising the scanner and selling it to biomedical users around the world. The New Zealand team now numbers more than 30 people, including staff and students from the fields of physics, engineering, computing, maths, radiology, cardiology, biochemistry, oncology, and orthopaedics. Current work with the pre-clinical scanners has concluded that the technology will be useful in heart disease, stroke, arthritis, joint replacements, and cancer. In late 2014, the government announced funding for NZ to build a MARS scanner capable of imaging humans. Bio: Professor Anthony Butler is a radiologist wit...

  18. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    Science.gov (United States)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  19. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different levels of blur in color images and improve image definition assessment, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the labeled images are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is evaluated with images from the CSIQ database, blurred at different levels to yield 4,000 images after processing. The 4,000 images are divided into three categories, each representing one blur level. Of 400 high-dimensional feature vectors, 300 are used to train the VGG16 and BP neural network pipeline, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to the main existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images agree well with the perception of the human visual system.
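
    A rough sketch of this pipeline, assuming TensorFlow/Keras, images resized to 224x224 RGB, and integer blur labels 0-2, might look as follows; the 256-unit hidden layer, optimizer, and training schedule are illustrative choices rather than the paper's configuration.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense

# 4096-D descriptor from the fc2 layer of an ImageNet-pretrained VGG16
base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def describe(batch_224rgb):
    """batch_224rgb: (N, 224, 224, 3) array of RGB images."""
    return extractor.predict(preprocess_input(batch_224rgb.astype("float32")))

# small "BP" classifier mapping the 4096-D features to three blur levels
clf = Sequential([Dense(256, activation="relu", input_shape=(4096,)),
                  Dense(3, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
# clf.fit(describe(train_imgs), train_labels, epochs=20, validation_split=0.1)
```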

  20. Compact Micro-Imaging Spectrometer (CMIS): Investigation of Imaging Spectroscopy and Its Application to Mars Geology and Astrobiology

    Science.gov (United States)

    Staten, Paul W.

    2005-01-01

    Future missions to Mars will attempt to answer questions about Mars' geological and biological history. The goal of the CMIS project is to design, construct, and test a capable multi-spectral micro-imaging spectrometer for use in such missions. A breadboard instrument has been constructed with a micro-imaging camera and several multi-wavelength LED illumination rings. Test samples have been chosen for their interest to spectroscopists, geologists, and astrobiologists. Preliminary analysis has demonstrated the advantages of isotropic illumination and micro-imaging spectroscopy over spot spectroscopy.

  1. A Color Image Watermarking Scheme Resistant against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y. Xing

    2010-04-01

    Full Text Available Geometrical attacks are still a problem for many digital watermarking algorithms. In this paper, we propose a watermarking algorithm for color images that is resistant to geometrical distortions (rotation and scaling). The singular value decomposition is used for watermark embedding and extraction. The log-polar mapping (LPM) and the phase correlation method are used to register the geometrical distortion suffered by the watermarked image. Experiments with different kinds of color images and watermarks demonstrate that the watermarking algorithm is robust to common image-processing attacks, especially geometrical attacks.
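
    The registration step (LPM plus phase correlation) can be sketched with OpenCV roughly as below, assuming two equal-sized grayscale images; working on Fourier magnitude spectra, the warpPolar parameters, and the shift-to-angle/scale conversion are assumptions for illustration, not the authors' exact procedure.

```python
import cv2
import numpy as np

def logpolar_spectrum(img):
    """Log-polar mapping of the (shifted, log-scaled) Fourier magnitude spectrum."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img.astype(np.float32))))
    h, w = img.shape
    radius = min(h, w) / 2.0
    return cv2.warpPolar(np.log1p(f).astype(np.float32), (w, h),
                         (w / 2.0, h / 2.0), radius, cv2.WARP_POLAR_LOG)

def estimate_rotation_scale(ref, test):
    """Recover rotation (degrees) and scale between two grayscale images:
    rotation and scaling become translations in the log-polar spectrum,
    which cv2.phaseCorrelate can measure."""
    lp_ref, lp_test = logpolar_spectrum(ref), logpolar_spectrum(test)
    (dx, dy), _ = cv2.phaseCorrelate(lp_ref, lp_test)
    h, w = ref.shape
    angle = 360.0 * dy / h                                  # shift along the angle axis
    scale = np.exp(dx * np.log(min(h, w) / 2.0) / w)        # shift along the log-radius axis
    return angle, scale
```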

  2. Single underwater image enhancement based on color cast removal and visibility restoration

    Science.gov (United States)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken under underwater conditions usually have color cast and serious loss of contrast and visibility, and such degraded underwater images are inconvenient for observation and analysis. To address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm based on optimization theory is presented first. Then, based on the minimum information loss principle and the inherent relationship among the medium transmission maps of the three color channels of an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparisons, quantitative comparisons, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.

  3. Segmentation and Classification of Burn Color Images

    Science.gov (United States)

    2001-10-25

    Begoña Acha, Carmen Serrano, Laura Roa (Área de Teoría de la Señal y Comunicaciones).

  4. Segmentation and Classification of Burn Color Images

    National Research Council Canada - National Science Library

    Acha, Begonya

    2001-01-01

    .... In the classification part, we take advantage of color information by clustering, with a vector quantization algorithm, the color centroids of small squares, taken from the burnt segmented part of the image, in the (V1, V2) plane into two possible groups, where V1 and V2 are the two chrominance components of the CIE Lab representation.

  5. Mars-Moons Exploration, Reconnaissance and Landed Investigation (MERLIN)

    Science.gov (United States)

    Murchie, S. L.; Chabot, N. L.; Buczkowski, D.; Arvidson, R. E.; Castillo, J. C.; Peplowski, P. N.; Ernst, C. M.; Rivkin, A.; Eng, D.; Chmielewski, A. B.; Maki, J.; trebi-Ollenu, A.; Ehlmann, B. L.; Spence, H. E.; Horanyi, M.; Klingelhoefer, G.; Christian, J. A.

    2015-12-01

    The Mars-Moons Exploration, Reconnaissance and Landed Investigation (MERLIN) is a NASA Discovery mission proposal to explore the moons of Mars. Previous Mars-focused spacecraft have raised fundamental questions about Mars' moons: What are their origins and compositions? Why do the moons resemble primitive outer solar system D-type objects? How do geologic processes modify their surfaces? MERLIN answers these questions through a combination of orbital and landed measurements, beginning with reconnaissance of Deimos and investigation of the hypothesized Martian dust belts. Orbital reconnaissance of Phobos occurs, followed by low flyovers to characterize a landing site. MERLIN lands on Phobos, conducting a 90-day investigation. Radiation measurements are acquired throughout all mission phases. Phobos' size and mass provide a low-risk landing environment: controlled descent is so slow that the landing is rehearsed, but gravity is high enough that surface operations do not require anchoring. Existing imaging of Phobos reveals low regional slope regions suitable for landing, and provides knowledge for planning orbital and landed investigations. The payload leverages past NASA investments. Orbital imaging is accomplished by a dual multispectral/high-resolution imager rebuilt from MESSENGER/MDIS. Mars' dust environment is measured by the refurbished engineering model of LADEE/LDEX, and the radiation environment by the flight spare of LRO/CRaTER. The landed workspace is characterized by a color stereo imager updated from MER/HazCam. MERLIN's arm deploys landed instrumentation using proven designs from MER, Phoenix, and MSL. Elemental measurements are acquired by a modified version of Rosetta/APXS, and an uncooled gamma-ray spectrometer. Mineralogical measurements are acquired by a microscopic imaging spectrometer developed under MatISSE. MERLIN delivers seminal science traceable to NASA's Strategic Goals and Objectives, Science Plan, and the Decadal Survey. MERLIN's science

  6. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This is essentially a hard-decision method; because of the uncertainty in labeling pixels near the threshold, it can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use a membership function to model the uncertainty on each color channel of the color image and then segment the color image by fuzzy reasoning. The experimental results show that the proposed method yields better segmentation results than the traditional thresholding method on both natural scene images and optical remote sensing images. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.

  7. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels

    Science.gov (United States)

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    Radiographic testing (RT) images from a steam turbine manufacturing enterprise are characterized by low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for the human eye to detect and evaluate defects. To solve these problems, this study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels. First, the self-transformation is applied to the pixel values of the original RT image, and the transformed values are assigned to the HSI components in the HSI color space. The average intensity of the enhanced image is then adaptively adjusted to 0.5 according to the intensity of the original image; the hue range and interval can also be adjusted according to personal preference. Finally, the adjusted HSI components are transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method improves image definition and makes target and background areas distinct in weld radiographic images. The enhanced images are more conducive to defect recognition. Moreover, images enhanced with the proposed method conform to the visual properties of the human eye, so that effective defect recognition and evaluation can be ensured.
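
    A toy version of such a mapping is sketched below; it uses HSV in place of the paper's HSI space, and a simple min-max normalisation stands in for the pixel self-transformation, with the hue range left configurable.

```python
import cv2
import numpy as np

def pseudo_color(gray, hue_range=(240.0, 0.0)):
    """Map an 8-bit weld radiograph to pseudo-colour: the normalised grey
    level drives the hue, saturation is fixed, and the mean intensity is
    re-centred near 0.5 before conversion back to BGR for display."""
    g = gray.astype(np.float32)
    t = (g - g.min()) / max(float(np.ptp(g)), 1e-6)         # normalised grey level
    h = hue_range[0] + t * (hue_range[1] - hue_range[0])    # hue in degrees
    s = np.full_like(t, 0.9)
    v = np.clip(0.5 + (t - float(t.mean())), 0.0, 1.0)      # mean intensity near 0.5
    hsv = np.dstack([h, s, v]).astype(np.float32)
    bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)              # float BGR in [0, 1]
    return (bgr * 255.0).astype(np.uint8)
```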

  8. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels.

    Science.gov (United States)

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    Radiographic testing (RT) images from a steam turbine manufacturing enterprise are characterized by low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for the human eye to detect and evaluate defects. To solve these problems, this study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels. First, the self-transformation is applied to the pixel values of the original RT image, and the transformed values are assigned to the HSI components in the HSI color space. The average intensity of the enhanced image is then adaptively adjusted to 0.5 according to the intensity of the original image; the hue range and interval can also be adjusted according to personal preference. Finally, the adjusted HSI components are transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method improves image definition and makes target and background areas distinct in weld radiographic images. The enhanced images are more conducive to defect recognition. Moreover, images enhanced with the proposed method conform to the visual properties of the human eye, so that effective defect recognition and evaluation can be ensured.

  9. Science applications of a multispectral microscopic imager for the astrobiological exploration of Mars.

    Science.gov (United States)

    Núñez, Jorge I; Farmer, Jack D; Sellar, R Glenn; Swayze, Gregg A; Blaney, Diana L

    2014-02-01

    Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Mars-Microscopic imager-Multispectral imaging-Spectroscopy-Habitability-Arm instrument.

  10. INTEGRATION OF SPATIAL INFORMATION WITH COLOR FOR CONTENT RETRIEVAL OF REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Bikesh Kumar Singh

    2010-08-01

    Full Text Available Image databases of remote sensing imagery have grown rapidly in recent years owing to high-resolution imaging satellites, commercial applications of remote sensing, and greater available bandwidth. Content-based image retrieval (CBIR) of remotely sensed images presents a major challenge, not only because of the rapidly increasing volume of images acquired from a wide range of sensors but also because of the complexity of the images themselves. In this paper, a software system for content-based retrieval of remote sensing images using the RGB and HSV color spaces is presented. We also compare our results with spatiogram-based content retrieval, which integrates spatial information with the color histogram. Experimental results show that integrating spatial information with color improves the analysis of remote sensing data. In general, retrieval in the HSV color space performed better than in the RGB color space.
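
    A minimal colour-histogram retrieval loop in HSV space, assuming OpenCV and a list of (name, image) pairs as the database, could look like this; the bin counts and the histogram-intersection measure are illustrative choices, and the spatiogram variant is not modelled.

```python
import cv2

def hsv_histogram(bgr, bins=(8, 8, 8)):
    """Normalised 3-D colour histogram in HSV space."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def retrieve(query_bgr, database, top_k=5):
    """Rank (name, image) pairs by histogram intersection with the query."""
    q = hsv_histogram(query_bgr)
    scores = [(name, cv2.compareHist(q, hsv_histogram(img), cv2.HISTCMP_INTERSECT))
              for name, img in database]
    return sorted(scores, key=lambda s: -s[1])[:top_k]
```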

  11. Content-based quality evaluation of color images: overview and proposals

    Science.gov (United States)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

    The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that establishes a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently used. The aim is not to set up a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. The paper focuses on two important unsolved problems: why is it so difficult to select a color space that gives better results than another, and why is it so difficult to select an image quality metric that agrees better with the judgment of the human visual system? Several methods used either in color imaging or in image quality assessment are discussed. Proposals for content-based image measures and for the development of a standard test suite are then presented; these proposals advocate an evaluation protocol based on an automated procedure, which is the ultimate goal of our work.

  12. MAHLI on Mars: lessons learned operating a geoscience camera on a landed payload robotic arm

    Science.gov (United States)

    Aileen Yingst, R.; Edgett, Kenneth S.; Kennedy, Megan R.; Krezoski, Gillian M.; McBride, Marie J.; Minitti, Michelle E.; Ravine, Michael A.; Williams, Rebecca M. E.

    2016-06-01

    The Mars Hand Lens Imager (MAHLI) is a 2-megapixel color camera with resolution as high as 13.9 µm per pixel. MAHLI has operated successfully on the Martian surface for over 1150 Martian days (sols) aboard the Mars Science Laboratory (MSL) rover, Curiosity. During that time MAHLI acquired images to support science and science-enabling activities, including rock and outcrop textural analysis; sand characterization to further the understanding of global sand properties and processes; support of other instrument observations; sample extraction site documentation; range-finding for arm and instrument placement; rover hardware and instrument monitoring and safety; terrain assessment; landscape geomorphology; and support of rover robotic arm commissioning. Operation of the instrument has demonstrated that imaging fully illuminated, dust-free targets yields the best results, with complementary information obtained from shadowed images. The light-emitting diodes (LEDs) allow satisfactory night imaging but do not improve daytime shadowed imaging. MAHLI's combination of fine-scale, science-driven resolution, RGB color, the ability to focus over a large range of distances, and relatively large field of view (FOV) has maximized the return of science and science-enabling observations given the MSL mission architecture and constraints.

  13. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    International Nuclear Information System (INIS)

    Huang, V; Kohli, K

    2015-01-01

    Purpose: A new commercially available metal artifact reduction (MAR) software for computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of metals without impacting image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise, and spatial accuracy were compared for scans with and without the MAR software. In addition, an in-house phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and of the metal insert dimension was investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar image quality, although noise was slightly higher with the MAR algorithm. The CT number was also evaluated at various locations of the in-house phantom: the baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at three different locations. The HU difference between the baseline scan and the metal scan was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising overall image quality. Future work will include assessing the dosimetric impact of the MAR algorithm.

  14. Valles Marineris, Mars: High-Resolution Digital Terrain Model on the basis of Mars-Express HRSC data

    Science.gov (United States)

    Dumke, A.; Spiegel, M.; van Gasselt, S.; Neukum, G.

    2009-04-01

    Introduction: Since December 2003, the European Space Agency's (ESA) Mars Express (MEX) orbiter has been investigating Mars. The High Resolution Stereo Camera (HRSC), one of the scientific experiments onboard MEX, is a pushbroom stereo color scanning instrument with nine line detectors, each equipped with 5176 CCD sensor elements. Five CCD lines operate with panchromatic filters and four lines with red, green, blue and infrared filters at different observation angles [1]. MEX has a highly elliptical near-polar orbit and reaches a distance of 270 km at periapsis. Ground resolution of image data predominantly varies with respect to spacecraft altitude and the chosen macro-pixel format. Usually, although not exclusively, the nadir channel provides full resolution of up to 10 m per pixel. Stereo-, photometry and color channels generally have a coarser resolution. One of the goals for MEX HRSC is to cover Mars globally in color and stereoscopically at high-resolution. So far, HRSC has covered almost half of the surface of Mars at a resolution better than 20 meters per pixel. Such data are utilized to derive high resolution digital terrain models (DTM), ortho-image mosaics and additionally higher-level 3D data products such as 3D views. Standardized high-resolution single-strip digital terrain models (using improved orientation data) have been derived at the German Aerospace Center (DLR) in Berlin-Adlershof [2]. Those datasets, i.e. high-resolution digital terrain models as well as ortho-image data, are distributed as Vicar image files (http://www-mipl.jpl.nasa.gov/external/vicar.html) via the HRSCview web-interface [3], accessible at http://hrscview.fu-berlin.de. A systematic processing workflow is described in detail in [4,5]. In consideration of the scientific interest, the processing of the Valles Marineris region will be discussed in this paper. The DTM mosaic was derived from 82 HRSC orbits at approximately -22° S to 1° N and 250° to 311° E. Methods: Apart from

  15. A fast color image enhancement algorithm based on Max Intensity Channel

    Science.gov (United States)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory, which imitates human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that tends to produce halo effects. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, which assumes that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons on images from the National Aeronautics and Space Administration and from a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.
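
    A rough sketch of the MIC-based illumination estimate, assuming OpenCV, is shown below; a plain bilateral filter stands in for the fast cross-bilateral filter of the paper, and the kernel sizes and filter parameters are illustrative.

```python
import cv2
import numpy as np

def estimate_illumination(bgr, ksize=15):
    """Max Intensity Channel (MIC) idea: per-pixel max over B, G, R, then a
    grey-scale closing followed by edge-preserving smoothing."""
    img = bgr.astype(np.float32) / 255.0
    mic = img.max(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    closed = cv2.morphologyEx(mic, cv2.MORPH_CLOSE, kernel)
    illum = cv2.bilateralFilter(closed, d=9, sigmaColor=0.1, sigmaSpace=15)
    return np.clip(illum, 1e-3, 1.0)

def enhance(bgr):
    """Divide out the estimated illumination (I = R * L) to get the reflectance."""
    illum = estimate_illumination(bgr)
    reflect = (bgr.astype(np.float32) / 255.0) / illum[..., None]
    return np.clip(reflect * 255.0, 0, 255).astype(np.uint8)
```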

  16. Using color and grayscale images to teach histology to color-deficient medical students.

    Science.gov (United States)

    Rubin, Lindsay R; Lackey, Wendy L; Kennedy, Frances A; Stephenson, Robert B

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the general population, it is likely that this reliance upon color differentiation poses a significant obstacle for several medical students beginning a course of study that includes examination of histologic slides. In the past, first-year medical students at Michigan State University who identified themselves as color deficient were encouraged to use color transparency overlays or tinted contact lenses to filter out problematic colors. Recently, however, we have offered such students a computer monitor adjusted to grayscale for in-lab work, as well as grayscale copies of color photomicrographs for examination purposes. Grayscale images emphasize the texture of tissues and the contrasts between tissues as the students learn histologic architecture. Using this approach, color-deficient students have quickly learned to compensate for their deficiency by focusing on cell and tissue structure rather than on color variation. Based upon our experience with color-deficient students, we believe that grayscale photomicrographs may also prove instructional for students with normal (trichromatic) color vision, by encouraging them to consider structural characteristics of cells and tissues that may otherwise be overshadowed by stain colors.

  17. A natural-color mapping for single-band night-time image based on FPGA

    Science.gov (United States)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    An FPGA-based natural-color mapping method for single-band night-time images transfers the color of a reference image to the single-band night-time image, producing results consistent with human visual habits that help observers identify targets. This paper introduces the processing flow of the FPGA-based natural-color mapping algorithm. First, the image is transformed by histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated on the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features of the luminance channel.
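
    A software stand-in for this colour mapping, assuming OpenCV, a registered same-size daytime reference image, and an 8-bit grey night-time band, is sketched below; CIELAB is used for the luminance/chrominance split, and a per-image mean/std match replaces the FPGA feature-matching step.

```python
import cv2
import numpy as np

def colorize_night_image(gray_u8, reference_bgr):
    """Give a single-band night-time image the colours of a registered,
    same-size daytime reference: histogram-equalise the grey band, match its
    mean/std to the reference luminance, and keep the reference chrominance."""
    ref_lab = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lum_ref = ref_lab[..., 0]
    g = cv2.equalizeHist(gray_u8).astype(np.float32)
    g = (g - g.mean()) / (g.std() + 1e-6) * lum_ref.std() + lum_ref.mean()
    out = ref_lab.copy()
    out[..., 0] = np.clip(g, 0, 255)
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_LAB2BGR)
```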

  18. A locally adaptive algorithm for shadow correction in color images

    Science.gov (United States)

    Karnaukhov, Victor; Kober, Vitaly

    2017-09-01

    The paper deals with correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much brighter than the rest of the scene. A locally-adaptive algorithm for correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics followed by correction of nonuniform illumination with human visual perception approach. The performance of the proposed algorithm is compared to that of common algorithms for correction of color images containing shadow regions.

  19. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving, and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium that would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
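
    The compression trade-off studied here is easy to reproduce in software; the sketch below, assuming Pillow and a hypothetical frame file, compresses a 24-bit colour image at a chosen JPEG quality setting and reports the achieved compression ratio.

```python
from io import BytesIO
from PIL import Image

def jpeg_compression_ratio(path, quality):
    """Compress a 24-bit colour image in memory with Pillow's JPEG encoder
    and return (compression_ratio, compressed_kilobytes)."""
    img = Image.open(path).convert("RGB")
    raw_bytes = img.width * img.height * 3          # uncompressed 24-bit size
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return raw_bytes / buf.tell(), buf.tell() / 1024.0

# e.g. sweep quality settings for one endoscopic frame:
# for q in (90, 75, 50, 25, 10):
#     print(q, jpeg_compression_ratio("frame.png", q))
```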

  20. Mechanical Aqueous Alteration Dominates Textures of Gale Crater Rocks: Mars Hand Lens Imager (MAHLI) Results

    Science.gov (United States)

    Aileen Yingst, R.; Minitti, Michelle; Edgett, Kenneth; McBride, Marie; Stack, Kathryn

    2015-04-01

    The Mars Hand Lens Imager (MAHLI) acquired sub-mm/pixel scale color images of over 70 individual rocks and outcrops during Curiosity's first year on Mars, permitting the study of textures down to the distinction between silt and very fine sand. We group imaged rock textures into classes based on their grain size, sorting, matrix characteristics, and abundance of pores. Because the recent campaign at Pahrump Hills acquired many more MAHLI images than elsewhere along the rover traverse [6], textural analysis there is more detailed and thus types observed there are sub-divided. Mudstones: These rocks contain framework grains smaller than the highest resolution MAHLI images (16 μm/pixel), and thus are interpreted to consist of grains that are silt-sized or smaller. Some rocks contain nodules, sulfate veins, and Mg-enriched erosionally-resistant ridges. The Pahrump Hills region contains mudstones of at least four different sub-textures: recessive massive, recessive parallel-laminated, resistant laminated-to-massive, and resistant cross-stratified. Recessive mudstones are slope-forming; parallel-laminated recessive mudstones display mm-scale parallel (and in some cases rhythmic) lamination that extends laterally for many meters, and are interbedded with recessive massive mudstones. Coarse cm- to mm-scale laminae appear within resistant mudstones though some portions are more massive; laminae tend to be traceable for cm to meters. Well-sorted sandstones: Rocks in this class are made of gray, fine-to-medium sand and exhibit little to no porosity. Two examples of this class show fine lineations with sub-mm spacing. Aillik, a target in the Shaler outcrop, shows abundant cross-lamination. The Pahrump Hills region contains a sub-texture of well-sorted, very fine to fine-grained cross-stratified sandstone at the dune and ripple-scale. Poorly-sorted sandstones. This class is subdivided into two sub-classes: rounded, coarse-to-very coarse sand grains of variable colors and

  1. Variational Histogram Equalization for Single Color Image Defogging

    Directory of Open Access Journals (Sweden)

    Li Zhou

    2016-01-01

    Full Text Available Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods focus on recovering an accurate scene transmission while ignoring the unpleasant distortions they introduce and their high complexity. Unlike previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of the fog-free image can be estimated in the HSI color space, once the airlight has been inferred in advance through a color attenuation prior. To reduce computation time, a general variation filter is proposed to obtain a numerical solution of the revised framework. Once the intensity component has been estimated, the saturation component is easily inferred from the physical degradation model in the saturation channel. The fog-free image can then be restored from the estimated intensity and saturation components. Finally, the proposed method is tested on several foggy images and assessed with two no-reference indexes. Experimental results reveal that our method compares favourably with three groups of relevant state-of-the-art defogging methods.

  2. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Server backend which in turn delivers the response back to the MapCache instance. Web frontend: We have implemented a web-GIS frontend based on various OpenLayers components. The basemap is a global color-hillshaded HRSC bundle-adjusted DTM mosaic with a resolution of 50 m per pixel. The new bundle-block-adjusted quadrangle mosaics of the MC-11 quadrangle, both image and DTM, are included with opacity slider options. The layer user interface has been adapted from the ol3-layerswitcher and extended with foldable and switchable groups, layer sorting (by resolution, by time, and alphabetically) and reordering (drag-and-drop). A collapsible time panel accommodates a time slider interface where the user can filter the visible data by a range of Mars or Earth dates and/or by solar longitudes. The visualisation of time-series of single images is controlled by a specific toolbar enabling the workflow of image selection (by point or bounding box), dynamic image loading, and playback of single images in a video player-like environment. During a stress-test campaign we could demonstrate that the system is capable of serving up to 10 simultaneous users on its current lightweight development hardware. It is planned to relocate the software to more powerful hardware by the time of this conference. Conclusions/Outlook: The iMars webGIS is an expert tool for the detection and visualization of surface changes. We demonstrate a technique to dynamically retrieve and display single images based on the time-series structure of the data. Together with the multi-temporal database and its MapServer/MapCache backend it provides a stable and high-performance environment for the dissemination of the various iMars products. Acknowledgements: This research has received funding from the EU's FP7 Programme under iMars 607379 and by the German Space Agency (DLR Bonn), grant 50 QM 1301 (HRSC on Mars Express).

  3. An instructional guide for leaf color analysis using digital imaging software

    Science.gov (United States)

    Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg

    2005-01-01

    Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...

  4. Color image digitization and analysis for drum inspection

    International Nuclear Information System (INIS)

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-01-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open-systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.
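
    A toy colour-analysis pass in the spirit of such a system, assuming OpenCV, is sketched below; the HSV hue/saturation limits and the minimum spot area are illustrative values, not those of the deployed inspection system.

```python
import cv2
import numpy as np

def find_rust_spots(bgr, min_area_px=50):
    """Threshold reddish-brown hues in HSV and return bounding boxes of
    candidate rust spots larger than min_area_px."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 40), (25, 255, 200))       # orange/brown hues
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area_px]
```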

  5. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    To improve the learning process from the input data, a new learning rule was suggested. In this paper, a new method is proposed to deal with the RGB color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of ...

  6. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    Science.gov (United States)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic imaging devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the captured image more realistic and colorful; one could say that the color filter makes life more colorful. A color filter transmits only light of the specific wavelength and transmittance corresponding to the filter itself, blocking the rest of the imaged light. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes color filters an increasingly important feature. Although it poses challenges, developing the color filter process is well worthwhile; we provide shorter cycle times, excellent color quality, and high, stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.

  7. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    Science.gov (United States)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.

  8. Hyperspectral imaging using a color camera and its application for pathogen detection

    Science.gov (United States)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image
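
    A rough sketch of the PLSR spectral-recovery step, assuming scikit-learn, an array of calibrated per-pixel RGB reflectances, and a co-registered array of hyperspectral reflectances for the same pixels, is shown below; the number of latent components is an illustrative choice.

```python
from sklearn.cross_decomposition import PLSRegression

def fit_spectral_recovery(rgb, spectra, n_components=3):
    """Fit a PLSR model mapping per-pixel RGB reflectance (n_pixels, 3) to
    co-registered hyperspectral reflectance (n_pixels, n_bands)."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(rgb, spectra)
    return pls

# recovered = fit_spectral_recovery(X_train, Y_train).predict(rgb_pixels)
# recovered_cube = recovered.reshape(rows, cols, n_bands)
```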

  9. Perceptual distortion analysis of color image VQ-based coding

    Science.gov (United States)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can easily be encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account the correlations between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to control color precisely. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
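
    A minimal vector-quantisation coder, using k-means as a stand-in for the trained codebooks evaluated in the study and applied here directly to RGB pixels (it could equally be run in any of the evaluated colour spaces), might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_code(img_rgb, codebook_size=64):
    """Learn a codebook with k-means and quantise every pixel to its nearest
    codevector; returns the index map (the transmitted symbols) and the
    decoded image."""
    pixels = img_rgb.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(pixels)
    indices = km.predict(pixels)
    decoded = km.cluster_centers_[indices].reshape(img_rgb.shape)
    return indices.reshape(img_rgb.shape[:2]), np.clip(decoded, 0, 255).astype(np.uint8)
```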

  10. Automatic color preference correction for color reproduction

    Science.gov (United States)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors for natural objects is one way to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass, and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from the input image, and a set of color correction parameters is selected according to that representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  11. Spatial characterization of nanotextured surfaces by visual color imaging

    DEFF Research Database (Denmark)

    Feidenhans'l, Nikolaj Agentoft; Murthy, Swathi; Madsen, Morten H.

    2016-01-01

    We present a method using an ordinary color camera to characterize nanostructures from the visual color of the structures. The method provides a macroscale overview image from which micrometer-sized regions can be analyzed independently, hereby revealing long-range spatial variations...

  12. Objective Color Classification of Ecstasy Tablets by Hyperspectral Imaging

    NARCIS (Netherlands)

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-01-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observers' perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured

  13. Modeling the Process of Color Image Recognition Using ART2 Neural Network

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2015-09-01

    Full Text Available This paper describes in detail the use of an unsupervised adaptive resonance theory (ART2) neural network for color recognition in x-ray images and images taken by nuclear magnetic resonance. To train the network, the RGB pixel values are used as learning vectors with three components, one each for red, green, and blue. Finally, the trained network is tested on image values and determines how to visualize the converted picture; as a result, the same pictures are obtained with colors assigned by the network. A generalized net is used to prepare a model that describes the process of color image recognition.

  14. "Los peregrinajes de los feminismos de color en el pensamiento de María Lugones"

    OpenAIRE

    Bidaseca,Karina

    2014-01-01

    María Lugones, a prestigious Argentine feminist philosopher, obtained her Ph.D. in philosophy and political science from the University of Wisconsin, USA. Her commitment to Women of Color in that country began in the 1960s, with the emancipatory struggles of the Black civil rights movement. This essay reads, through the lens of "pilgrimages" (peregrinajes), some of the most important concepts and categorial systems coined by Lugones, starting from two fundamental texts by the philo...

  15. Restoration of color in a remote sensing image and its quality evaluation

    Science.gov (United States)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photographs) and recommends a complete approach. It proposes that two main aspects should be considered in restoring a remote sensing image: the restoration of spatial information and the restoration of photometric information. In this approach, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed with an improved local maximum entropy algorithm. In addition, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, the three images are processed separately under psychological color vision constraints, and they are then resynthesized. Finally, three novel evaluation variables are derived from the restoration to evaluate image restoration quality in terms of spatial restoration quality and photometric restoration quality. An evaluation is provided at the end.

  16. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn

  17. Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation

    Science.gov (United States)

    Kirk, R. L.; Howington-Kraus, E.; Galuszka, D.; Redding, B.; Hare, T. M.

    HRSC on Mars Express [1] is the first camera designed specifically for stereo imaging to be used in mapping a planet other than the Earth. Nine detectors view the planet through a single lens to obtain four-band color coverage and stereo images at 3 to 5 distinct angles in a single pass over the target. The short interval between acquisition of the images ensures that changes that could interfere with stereo matching are minimized. The resolution of the nadir channel is 12.5 m at periapsis, poorer at higher points in the elliptical orbit. The stereo channels are typically operated at 2x coarser resolution and the color channels at 4x or 8x. Since the commencement of operations in January 2004, approximately 58% of Mars has been imaged at nadir resolutions better than 50 m/pixel. This coverage is expected to increase significantly during the recently approved extended mission of Mars Express, giving the HRSC dataset enormous potential for regional and even global mapping. Systematic processing of the HRSC images is carried out at the German Aerospace Center (DLR) in Berlin. Preliminary digital topographic models (DTMs) at 200 m/post resolution and orthorectified image products are produced in near-realtime for all orbits, by using the VICAR software system [2]. The tradeoff of universal coverage but limited DTM resolution makes these products optimal for many but not all research studies. Experiments on adaptive processing with the same software, for a limited number of orbits, have allowed DTMs of higher resolution (down to 50 m/post) to be produced [3]. In addition, numerous Co-Investigators on the HRSC team (including ourselves) are actively researching techniques to improve on the standard products, by such methods as bundle adjustment, alternate approaches to stereo DTM generation, and refinement of DTMs by photoclinometry (shape-from-shading) [4]. The HRSC team is conducting a systematic comparison of these alternative processing approaches by arranging for

  18. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    Science.gov (United States)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the crossed masking effect arising from the interaction between luminance and chrominance components, and the effect of the variance within the local region of the target coefficient, are investigated so that the visibility threshold of the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency are obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.

  19. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    Science.gov (United States)

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by algorithms that address the color and spatial relationships of user-selected foreground object pixels. The color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of over 99%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ . This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most.
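
    The sketch below illustrates the general idea of classifying pixels by their distance to user-selected foreground colors. It is a minimal stand-in and not the CMEIAS algorithm, which additionally exploits spatial relationships among the selected pixels; the sample colors and distance threshold are illustrative.

```python
import numpy as np

def segment_by_sample_colors(img, fg_samples, max_dist=30.0):
    """Label as foreground every pixel whose RGB color lies within
    max_dist (Euclidean distance, 0-255 scale) of any user-selected sample color.

    img        : (H, W, 3) uint8 image
    fg_samples : (K, 3) list/array of RGB colors picked from foreground objects
    """
    flat = img.reshape(-1, 3).astype(np.float32)
    samples = np.asarray(fg_samples, dtype=np.float32)
    # distance from every pixel to its nearest sample color (K is assumed small)
    d = np.min(np.linalg.norm(flat[:, None, :] - samples[None, :, :], axis=2), axis=1)
    return (d <= max_dist).reshape(img.shape[:2])

# mask = segment_by_sample_colors(micrograph, [(40, 180, 60), (35, 160, 70)])
# abundance = mask.mean()   # fraction of pixels classified as cells
```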

  20. An Improved Filtering Method for Quantum Color Image in Frequency Domain

    Science.gov (United States)

    Li, Panchi; Xiao, Hong

    2018-01-01

    In this paper we investigate the use of quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The underlying principle used for constructing the proposed quantum filters is to use the principle of the quantum Oracle to implement the filter function. Compared with the existing methods, our method is not only suitable for color images, but also can flexibly design the notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantages of the quantum frequency filtering lies in the exploitation of the efficient implementation of the quantum Fourier transform.
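
    A quantum circuit cannot be reproduced here, but the classical counterpart of the described frequency-domain filtering, the same mask applied to the Fourier transform of each color channel, can be sketched as follows. This is a hedged classical analogue for orientation only, not the quantum implementation from the paper.

```python
import numpy as np

def frequency_filter_rgb(img, keep):
    """Apply one frequency-domain mask to each RGB channel of a float image in [0, 1].

    keep : (H, W) boolean mask in the centered frequency plane
           (True = keep the coefficient, False = suppress it)
    """
    out = np.empty_like(img, dtype=float)
    for c in range(3):
        F = np.fft.fftshift(np.fft.fft2(img[..., c]))
        out[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(F * keep)))
    return np.clip(out, 0.0, 1.0)

def lowpass_mask(shape, radius):
    """Ideal circular low-pass mask (smoothing); invert it for a high-pass
    (sharpening-style) mask, or zero small neighborhoods around chosen
    frequencies to build a notch filter."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    return (x - w / 2) ** 2 + (y - h / 2) ** 2 <= radius ** 2
```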

  1. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, the colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, there have to date been few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. Regarding the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, so that most feature discriminative power is collected in one channel instead of being spread out among channels as in the other color spaces.
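
    The comparison hinges on converting one image into the candidate color spaces and extracting per-channel features. A minimal sketch using scikit-image is shown below; note that `rgb2hed` implements the Ruifrok-Johnston stain deconvolution, which may differ in detail from the stain-dependent H and E model evaluated in the paper, and mean/std are only stand-in features.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab, rgb2hed

def channel_features(rgb):
    """Per-channel mean/std features in several color spaces.

    rgb : (H, W, 3) float image in [0, 1]
    Returns a dict mapping 'space:channel_index' to (mean, std).
    """
    spaces = {
        "RGB": rgb,
        "HSV": rgb2hsv(rgb),
        "Lab": rgb2lab(rgb),
        "HED": rgb2hed(rgb),   # haematoxylin / eosin / DAB stain separation
    }
    feats = {}
    for name, img in spaces.items():
        for k in range(3):
            ch = img[..., k]
            feats[f"{name}:{k}"] = (float(ch.mean()), float(ch.std()))
    return feats
```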

  2. Mars, High-Resolution Digital Terrain Model Quadrangles on the Basis of Mars-Express HRSC Data

    Science.gov (United States)

    Dumke, A.; Spiegel, M.; van Gasselt, S.; Neu, D.; Neukum, G.

    2010-05-01

    Introduction: Since December 2003, the European Space Agency's (ESA) Mars Express (MEX) orbiter has been investigating Mars. The High Resolution Stereo Camera (HRSC), one of the scientific experiments onboard MEX, is a pushbroom stereo color scanning instrument with nine line detectors, each equipped with 5176 CCD sensor elements [1,2]. One of the goals for MEX HRSC is to cover Mars globally in color and stereoscopically at high resolution. So far, HRSC has covered half of the surface of Mars at a resolution better than 20 meters per pixel. HRSC data allow the derivation of high-resolution digital terrain models (DTMs), color-orthoimage mosaics and, additionally, higher-level 3D data products. Past work concentrated on producing regional data mosaics for areas of scientific interest in a single strip and/or bundle block adjustment and deriving DTMs [3]. The next logical step, based on substantially the same procedure, is to systematically expand the derivation of DTMs and orthoimage data to the 140 map quadrangle scheme (Q-DTM). Methods: The division of the Mars surface into 140 quadrangles is briefly described in Greeley and Batson [4] and is based upon the standard MC 30 (Mars Chart) system. The quadrangles are named by alphanumerical labels. The workflow for determining new orientation data for the derivation of digital terrain models takes place in two steps. First, for each HRSC orbit covering a quadrangle, new exterior orientation parameters are determined [5,6]. The successfully classified exterior orientation parameters become the input for the next step, in which the exterior orientation parameters are determined together in a bundle block adjustment. Only those orbit strips which have a sufficient overlap area and a certain number of tie points can be used in a common bundle block adjustment. For the automated determination of tie points, software provided by the Leibniz Universität Hannover [7] is used. Results: For the derivation of Q-DTMs and ortho-image

  3. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images is a chicken-and-egg problem, since it is not trivial to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established when constructing a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost that combines Mutual Information (MI), the SIFT descriptor, and segment-based plane fitting to robustly find correspondences for stereo image pairs that undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which in turn boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
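
    One common form of the log-chromaticity transform mentioned above is sketched below. The formulation (dividing by the green channel before taking logs) is an assumption for illustration; the paper's exact construction of the joint pdf and of the linear fit is not reproduced.

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Convert an RGB image (float, [0, 1]) to log-chromaticity coordinates.

    Dividing by the green channel before taking logs removes a global
    per-channel gain, so an exposure/illumination change between the two
    stereo views becomes (approximately) an additive offset."""
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

# left_lc  = log_chromaticity(left.astype(np.float64) / 255.0)
# right_lc = log_chromaticity(right.astype(np.float64) / 255.0)
```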

  4. P1-14: Relationship between Colorfulness Adaptation and Spatial Frequency Components in Natural Image

    Directory of Open Access Journals (Sweden)

    Shun Sakaibara

    2012-10-01

    Full Text Available We previously found an effect of colorfulness adaptation in natural images. It was observed to be stronger for natural images than for unnatural images, suggesting an influence of naturalness on the adaptation. However, which image characteristics and which levels of the visual system were involved had not been examined sufficiently. This research investigates whether the effect of colorfulness adaptation is associated with the spatial frequency components of natural images. If the adaptation were a mechanism at an early cortical level, the effect would be strong when adaptation and test images share similar spatial frequency components. In the experiment, we examined how the colorfulness impression of a test image changed after adaptation to images with different levels of saturation. We selected several types of natural image from a standard image database for test and adaptation images. We also processed them to make shuffled images, whose spatial frequency components differ from the originals, and phase-scrambled images, whose components are similar to the originals, for both adaptation and test images. Observers evaluated whether a test image appeared colorful or faded. The results show that the colorfulness perception of the test images was influenced by the saturation of the adaptation images. The effect was strongest for the combination of natural (original) adaptation and natural test images, regardless of image type. The effect for the combination of phase-scrambled images was weaker than for the original images and stronger than for the shuffled images. This suggests that not only the spatial frequency components of an image but also the recognition of images contributes to colorfulness adaptation.

  5. Color sensitivity of the multi-exposure HDR imaging process

    Science.gov (United States)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings are applied and the images are stitched, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
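
    For readers who want to reproduce the basic multi-exposure pipeline (response-curve estimation followed by radiance-map merging), a minimal OpenCV sketch is given below. File names and exposure times are placeholders, and the spectral evaluation with the integrating sphere described in the paper is not covered.

```python
import cv2
import numpy as np

# Exposure times (seconds) must match the order of the LDR captures (placeholder values).
times = np.array([1 / 400.0, 1 / 100.0, 1 / 25.0, 1 / 6.0], dtype=np.float32)
ldr = [cv2.imread(f"exp_{i}.jpg") for i in range(4)]   # 8-bit BGR frames (hypothetical files)

calibrate = cv2.createCalibrateDebevec()       # estimate the camera response curve
response = calibrate.process(ldr, times)

merge = cv2.createMergeDebevec()               # recover a floating-point radiance map
hdr = merge.process(ldr, times, response)

# The radiance map is only proportional to scene radiance; an absolute luminance
# measurement would still require the photometric calibration discussed in the paper.
preview = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("radiance_preview.png", np.clip(preview * 255, 0, 255).astype(np.uint8))
```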

  6. A Linear Criterion to sort Color Components in Images

    Directory of Open Access Journals (Sweden)

    Leonardo Barriga Rodriguez

    2017-01-01

    Full Text Available Color and its representation play a basic role in the image analysis process. Several methods benefit from a correct representation of the wavelength variations used to represent scenes with a camera. A wide variety of color spaces and representations is found in the specialized literature; each is useful in particular circumstances, and some offer redundant color information (for instance, the RGB components are highly correlated). This work deals with the task of identifying and ranking which components, from several color representations, offer the most information about the scene. The approach is based on analyzing linear dependences among the color components through a new entropy-based sorting algorithm. The proposal is tested on several outdoor/indoor scenes under different lighting conditions. Repeatability and stability are tested in order to guarantee its use in a range of image analysis applications. Finally, the results of this work have been used to enhance an external algorithm that compensates for random camera vibrations.
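
    A simplified stand-in for the proposed criterion is to score each candidate component by its Shannon entropy and sort, as sketched below; the actual algorithm also analyzes linear dependences among the components, which this sketch omits.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2ycbcr

def shannon_entropy(channel, bins=256):
    """Shannon entropy (bits) of a single image channel."""
    hist, _ = np.histogram(channel, bins=bins,
                           range=(channel.min(), channel.max() + 1e-9))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_components(rgb):
    """Rank color components from several representations by entropy.

    rgb : (H, W, 3) float image in [0, 1]
    """
    comps = {name: rgb[..., k] for k, name in enumerate("RGB")}
    hsv = rgb2hsv(rgb)
    comps.update({name: hsv[..., k] for k, name in enumerate(("H", "S", "V"))})
    ycbcr = rgb2ycbcr(rgb)
    comps.update({name: ycbcr[..., k] for k, name in enumerate(("Y", "Cb", "Cr"))})
    scores = {name: shannon_entropy(ch) for name, ch in comps.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```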

  7. Multi-clues image retrieval based on improved color invariants

    Science.gov (United States)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the use of text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to build the BOF, its retrieval precision is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly capture the geometric variance of the objects in the images, so the color information of the objects goes unused. With the development of information technology and the Internet, most retrieval objects are color images, and retrieval performance can therefore be further improved through proper use of color information. We propose an improved method by analyzing a flaw of the shadow-shading quasi-invariant: its response at object edges under varying lighting is enhanced. Color descriptors of the invariant regions are extracted and integrated into the BOF based on the local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  8. Featured Image: Revealing Hidden Objects with Color

    Science.gov (United States)

    Kohler, Susanna

    2018-02-01

    Stunning color astronomical images can often be the motivation for astronomers to continue slogging through countless data files, calculations, and simulations as we seek to understand the mysteries of the universe. But sometimes the stunning images can, themselves, be the source of scientific discovery. This is the case with the image of Lynds Dark Nebula 673, located in the Aquila constellation, that was captured with the Mayall 4-meter telescope at Kitt Peak National Observatory by a team of scientists led by Travis Rector (University of Alaska Anchorage). After creating the image with a novel color-composite imaging method that reveals faint Hα emission (visible in red in both images here), Rector and collaborators identified the presence of a dozen new Herbig-Haro objects: small cloud patches that are caused when material is energetically flung out from newly born stars. The adapted image shows three of the new objects, HH 118789, aligned with two previously known objects, HH 32 and 332, suggesting they are driven by the same source. For more beautiful images and insight into the authors' discoveries, check out the article cited below. Full view of Lynds Dark Nebula 673 [T. A. Rector (University of Alaska Anchorage) and H. Schweiker (WIYN and NOAO/AURA/NSF)]. Citation: T. A. Rector et al 2018 ApJ 852 13. doi:10.3847/1538-4357/aa9ce1

  9. Color and textural quality of packaged wild rocket measured by multispectral imaging

    DEFF Research Database (Denmark)

    Løkke, Mette Marie; Seefeldt, Helene Fast; Skov, Thomas

    2013-01-01

    Green color and texture are important attributes for the perception of freshness of wild rocket. Packaging of green leafy vegetables can postpone senescence and yellowing, but a drawback is the risk of anaerobic respiration leading to loss of tissue integrity and development of an olive-brown color. The hypothesis underlying this paper is that color and textural quality of packaged wild rocket leaves can be predicted by multispectral imaging for faster evaluation of visual quality of leafy green vegetables in scientific experiments. Multispectral imaging was correlated to sensory evaluation of packaged wild rocket quality. CIELAB values derived from the multispectral images and from a spectrophotometer changed during storage, but the data were insufficient to describe variation in sensory perceived color and texture. CIELAB values from the multispectral images allowed for a more detailed determination...

  10. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    Science.gov (United States)

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the

  11. HDR imaging and color constancy: two sides of the same coin?

    Science.gov (United States)

    McCann, John J.

    2011-01-01

    At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similar principle roles in both HDR imaging and color constancy?

  12. One Mars year: viking lander imaging observations.

    Science.gov (United States)

    Jones, K L; Arvidson, R E; Guinness, E A; Bragg, S L; Wall, S D; Carlston, C E; Pidek, D G

    1979-05-25

    Throughout the complete Mars year during which they have been on the planet, the imaging systems aboard the two Viking landers have documented a variety of surface changes. Surface condensates, consisting of both solid H2O and CO2, formed at the Viking 2 lander site during the winter. Additional observations suggest that surface erosion rates due to dust redistribution may be substantially less than those predicted on the basis of pre-Viking observations. The Viking 1 lander will continue to acquire and transmit a predetermined sequence of imaging and meteorology data as long as it is operative.

  13. Digital color image encoding and decoding using a novel chaotic random generator

    International Nuclear Information System (INIS)

    Nien, H.H.; Huang, C.K.; Changchien, S.K.; Shieh, H.W.; Chen, C.T.; Tuan, Y.Y.

    2007-01-01

    This paper proposes a novel chaotic system in which the variables are treated as encryption keys in order to achieve secure transmission of digital color images. Since the dynamic response of a chaotic system is highly sensitive to the initial values and to parameter variations, and the chaotic trajectory is unpredictable, we use elements of the variables as encryption keys and apply them to internet communication of digital color images, obtaining much higher communication security. We adopt a statistical method involving the correlation coefficient γ and FIPS PUB 140-1 to test the distribution of the distinguished elements of the variables of the continuous-time chaotic system, and accordingly select optimal encryption keys for secure communication of digital color images. At the transmitter end, we perform RGB level decomposition on the digital color images, encrypt them with the chaotic keys, and finally transmit them over the internet. The same encryption keys are used to decrypt and recover the original images at the receiver end. Even if the encrypted images are intercepted in the public channel, an intruder is not able to decrypt and recover the original images without adequate encryption keys. An empirical example shows that the chaotic system and encryption keys applied in the encryption, transmission, decryption, and recovery of digital color images achieve higher communication security and well-recovered images.
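
    The specific chaotic system of the paper is not reproduced here; the sketch below only illustrates the general principle of key-sensitive, per-channel stream encryption, using a generic logistic map as the keystream source (an assumption for illustration, not the authors' system).

```python
import numpy as np

def logistic_keystream(n, x0, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt_rgb(img, keys=(0.3141592, 0.2718281, 0.1618033)):
    """XOR each RGB plane of a uint8 image with a keystream seeded by its own
    initial value; decryption is the same operation with the same keys."""
    out = np.empty_like(img)
    h, w, _ = img.shape
    for c in range(3):
        ks = logistic_keystream(h * w, keys[c]).reshape(h, w)
        out[..., c] = img[..., c] ^ ks
    return out

# cipher = encrypt_rgb(plain); recovered = encrypt_rgb(cipher)  # same keys recover the image
```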

  14. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Full Text Available Combining the CIEDE2000 color difference formula and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conformed to subjective perception (OSCSP) Q was proposed to directly reflect the subjective visual perception. In addition, we present a general method to calibrate the correction factors of the color difference formula under real experimental conditions. Our experimental results show that the presented CIEDE2000-based metric is consistent with the human visual system in general application environments.
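
    The core of such a metric, the average per-pixel CIEDE2000 difference between a reference and a test image, can be computed directly with scikit-image, as sketched below; the paper's OSCSP Q score and its experimentally calibrated correction factors are not reproduced.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(reference_rgb, test_rgb):
    """Average CIEDE2000 color difference between two aligned RGB images
    (float arrays in [0, 1]); lower values mean better color fidelity."""
    de = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(test_rgb))
    return float(de.mean())

# score = mean_ciede2000(original / 255.0, compressed / 255.0)
```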

  15. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    Science.gov (United States)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology performs better than state-of-the-art techniques.

  16. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    Directory of Open Access Journals (Sweden)

    Serhan O Isikman

    Full Text Available We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  17. COMPARATIVE STUDY OF EDGE BASED LSB MATCHING STEGANOGRAPHY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2016-02-01

    Full Text Available Steganography is a pivotal technique used for covert transfer of information over a communication channel. This paper presents a comparative study of spatial-domain LSB matching techniques that focus on the sharper edges of color and grayscale images for data hiding: the secret message is hidden first in sharp edge regions and then in smooth regions of the image. Message embedding depends on the content of the image and the message size. The experimental results illustrate that, for a low embedding rate, the method hides the message in the sharp edges of the cover image to obtain better stego-image visual quality; for a high embedding rate, both smooth regions and edges of the cover image are used for data hiding. With this steganography method, color and textured images preserve better visual quality of the stego image. The value of the comparative study is that it helps to analyze the efficiency and performance of the method, which gives better results because it works directly on color images instead of converting them to grayscale.
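
    A minimal sketch of the edges-first embedding order is given below; it is one possible reading of the compared methods, not any specific algorithm from the study. The edge map is computed on the LSB-masked cover so that the same ordering can be recomputed at extraction time.

```python
import numpy as np
from scipy import ndimage

def embed_bits_edges_first(gray, bits):
    """Embed a bit sequence into the LSBs of a grayscale cover image,
    filling the strongest-edge pixels first and smooth regions last.

    gray : (H, W) uint8 cover image
    bits : iterable of 0/1 integers (the secret message)
    """
    img = gray.astype(np.uint8).copy()
    base = (img & 0xFE).astype(float)          # LSB-masked cover: unchanged by embedding
    gx = ndimage.sobel(base, axis=1)
    gy = ndimage.sobel(base, axis=0)
    order = np.argsort(-np.hypot(gx, gy).ravel())   # sharpest pixels first
    flat = img.ravel()
    if len(bits) > len(order):
        raise ValueError("message longer than cover capacity")
    for bit, idx in zip(bits, order):
        flat[idx] = (flat[idx] & 0xFE) | (bit & 1)
    return flat.reshape(img.shape)

# stego = embed_bits_edges_first(cover_gray, message_bits)
# Extraction recomputes the same edge-strength order and reads the LSBs back.
```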

  18. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    Science.gov (United States)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

    Although color images contain a large amount of information reflecting species characteristics, different color models capture different parts of that information. The selection of a color model is therefore the key to separating crops from the background effectively and rapidly. Taking cotton images collected under natural light as the object, we convert them into the color components of the RGB, HSL and YIQ color models and evaluate the nine resulting components with both subjective and objective methods. The subjective evaluation concludes that the gray values of the Q component in the soil, straw and plastic-film regions remain stable without large fluctuations. In the objective evaluation, the variance method, average gradient method, gray-prediction error statistics method and information entropy method are applied, and they likewise identify the Q color component as the most suitable for background segmentation.
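
    For reference, the standard NTSC RGB-to-YIQ transform and the extraction of the Q plane discussed above can be sketched as follows; the threshold in the usage note is only a placeholder, not a value from the study.

```python
import numpy as np

# NTSC RGB -> YIQ transform matrix (rows: Y, I, Q)
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def q_component(rgb):
    """Return the Q chrominance plane of an RGB image (float, values in [0, 1]),
    the component the study found most suitable for separating cotton plants
    from soil, straw and plastic film."""
    yiq = rgb.reshape(-1, 3) @ RGB_TO_YIQ.T
    return yiq[:, 2].reshape(rgb.shape[:2])

# q = q_component(cotton_img / 255.0)
# mask = q > q.mean()     # a simple global threshold as a starting point (placeholder)
```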

  19. a New Graduation Algorithm for Color Balance of Remote Sensing Image

    Science.gov (United States)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view and obtain more data and information in remote sensing research, workers often need to mosaic images together. However, the mosaicked image often shows large color differences and a visible gap line. Based on a graduation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-round Curves (TQC). A Gaussian filter is used to address image color noise and the gap line. The experiments used Greenland data compiled in 1963 from the Declassified Intelligence Photography Project (DISP), acquired by the ARGON KH-5 satellite, and imagery of the North Gulf, China, acquired by a Landsat satellite. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the mosaicked remote sensing image achieves a smoother transition.

  20. SUPERVISED AUTOMATIC HISTOGRAM CLUSTERING AND WATERSHED SEGMENTATION. APPLICATION TO MICROSCOPIC MEDICAL COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Olivier Lezoray

    2011-05-01

    Full Text Available In this paper, an approach to the segmentation of microscopic color images is addressed and applied to medical images. The approach combines a clustering method and a region-growing method. Each color plane is segmented independently, relying on a watershed-based clustering of the plane's histogram. The marginal segmentation maps intersect in a label concordance map. The latter map is simplified based on the assumption that the color planes are correlated. This produces a simplified label concordance map containing labeled and unlabeled pixels. The former are used as an image of seeds for a color watershed. This fast and robust segmentation scheme is applied to several types of medical images.

  1. Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion

    Science.gov (United States)

    Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.

    2018-04-01

    The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely treat these deficiencies and boost the security strength through novel integration of the random fractional Fourier transforms, phase retrieval algorithms, as well as chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.

  2. The imaging performance of the SRC on Mars Express

    Science.gov (United States)

    Oberst, J.; Schwarz, G.; Behnke, T.; Hoffmann, H.; Matz, K.-D.; Flohrer, J.; Hirsch, H.; Roatsch, T.; Scholten, F.; Hauber, E.; Brinkmann, B.; Jaumann, R.; Williams, D.; Kirk, R.; Duxbury, T.; Leu, C.; Neukum, G.

    2008-01-01

    The Mars Express spacecraft carries the pushbroom-scanning High Resolution Stereo Camera (HRSC) and its added imaging subsystem, the Super Resolution Channel (SRC). The SRC is equipped with its own optical system and a 1024 × 1024 framing sensor. SRC produces snapshots with 2.3 m ground pixel size from the nominal spacecraft pericenter height of 250 km, which are typically embedded in the central part of the large HRSC scenes. The salient features of the SRC are its light-weight optics, a reliable CCD detector, and high-speed read-out electronics. The quality and effective visibility of details in the SRC images unfortunately fall short of what was expected. In cases where thermal balance cannot be reached, artifacts such as blurring and "ghost features" are observed in the images. In addition, images show large numbers of blemish pixels and are plagued by electronic noise. As a consequence, we have developed various image-improving algorithms, which are discussed in this paper. While results are encouraging, further studies of image restoration by dedicated processing appear worthwhile. The SRC has obtained more than 6940 images at the time of writing (1 September 2007), which often show fascinating details in surface morphology. SRC images are highly useful for a variety of applications in planetary geology, for studies of the Mars atmosphere, and for astrometric observations of the Martian satellites. This paper gives a full account of the design philosophy, technical concept, calibration, operation, integration with HRSC, and performance, as well as the science accomplishments of the SRC.

  3. Color image analysis of contaminants and bacteria transport in porous media

    Science.gov (United States)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

    Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems have been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar sheet of laser beam, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame accurate high resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe to observe these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems since microscopic particle-contaminant- bacterium interactions are the key to understanding and optimization of these processes.

  4. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect that has to be considered when a watermark algorithm is designed. In our proposed scheme, before being embedded into the image, the watermark signal is encoded using a convolutional encoder, which performs forward error correction and thereby achieves better robustness. The embedding process is then carried out in the discrete cosine transform (DCT) domain of the image, using an image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequence. The quality of the watermarked image is measured using the well-known indices: Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained with the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.

  5. A kind of color image segmentation algorithm based on super-pixel and PCNN

    Science.gov (United States)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The Pulse Coupled Neural Network (PCNN) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of the PCNN many unconnected neurons pulse at the same time, so it is necessary to identify the different regions for further processing. The existing PCNN segmentation algorithm based on region growing works on grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual pixel differences on the segmentation. Therefore, building on super-pixels, this paper improves the original region-growing PCNN algorithm. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growing then stops or continues by comparing the average of each color channel over all the pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective and reasonably accurate.
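
    The PCNN itself is not reproduced here; the sketch below only covers the preparatory steps named in the abstract: SLIC superpixels, their grayscale means (used for seed seeking) and a per-channel mean comparison as the growing criterion. The tolerance value and segment count are assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray

def superpixel_means(img, n_segments=400):
    """Compute SLIC superpixels, their grayscale means (for seed seeking) and
    their per-channel RGB means (for the growing criterion).

    img : (H, W, 3) float RGB image in [0, 1]
    """
    labels = slic(img, n_segments=n_segments, compactness=10)
    gray = rgb2gray(img)
    ids = np.unique(labels)
    gray_mean = np.array([gray[labels == i].mean() for i in ids])
    rgb_mean = np.array([img[labels == i].mean(axis=0) for i in ids])
    return labels, gray_mean, rgb_mean

def similar(mean_a, mean_b, tol=0.05):
    """Growing test: merge two regions only if every RGB channel mean differs
    by less than tol (placeholder threshold)."""
    return bool(np.all(np.abs(mean_a - mean_b) < tol))
```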

  6. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At the moment, color sorting is one of the promising methods of mineral raw material enrichment. The method is based on registering color differences between images of the analyzed objects. A well-known problem of the color sorting method is the delimitation of close color tints when sorting low-contrast minerals. It can be related to a wrong choice of color model and incomplete image processing in the machine vision system implementing the color sorting algorithm. Another problem is the need to reconfigure the image processing parameters when the type of analyzed mineral changes, because the optical properties of mineral samples vary from one deposit to another. Searching for suitable image-processing parameter values is therefore a non-trivial task, and this task does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria of mineral sample separation. It is assumed that the reconfiguration of image-processing parameters should be done by machine learning, but in practice it is carried out by adjusting the operating parameters until they are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration of the analyzed mineral ore using the color sorting method. This paper presents the results of research aimed at addressing these shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis of low-contrast minerals using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed, which automatically determines the criteria of mineral sample separation based on an analysis of representative mineral samples. Experimental studies of the proposed algorithm

  7. Toward optimal color image quality of television display

    Science.gov (United States)

    MacDonald, Lindsay W.; Endrikhovski, Sergej N.; Bech, Soren; Jensen, Kaj

    1999-12-01

    A general framework and first experimental results are presented for the `OPTimal IMage Appearance' (OPTIMA) project, which aims to develop a computational model for achieving optimal color appearance of natural images on adaptive CRT television displays. To achieve this goal we considered the perceptual constraints determining quality of displayed images and how they could be quantified. The practical value of the notion of optimal image appearance was translated from the high level of the perceptual constraints into a method for setting the display's parameters at the physical level. In general, the whole framework of quality determination includes: (1) evaluation of perceived quality; (2) evaluation of the individual perceptual attributes; and (3) correlation between the physical measurements, psychometric parameters and the subjective responses. We performed a series of psychophysical experiments, with observers viewing a series of color images on a high-end consumer television display, to investigate the relationships between Overall Image Quality and four quality-related attributes: Brightness Rendering, Chromatic Rendering, Visibility of Details and Overall Naturalness. The results of the experiments presented in this paper suggest that these attributes are highly inter-correlated.

  8. Fuzzy Logic-Based Filter for Removing Additive and Impulsive Noise from Color Images

    Science.gov (United States)

    Zhu, Yuhong; Li, Hongyang; Jiang, Huageng

    2017-12-01

    This paper presents an efficient filter method based on fuzzy logic for adaptively removing additive and impulsive noise from color images. The proposed filter comprises two parts: noise detection and noise removal filtering. In the detection part, the fuzzy peer group concept is applied to determine what type of noise affects each pixel of the corrupted image. In the filtering part, the impulse noise is removed by a vector median filter in the CIELAB color space and an optimal fuzzy filter is introduced to reduce the Gaussian noise; together they remove mixed Gaussian-impulse noise from color images. Experimental results on several color images prove the efficacy of the proposed fuzzy filter.
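
    The vector-median component used for impulse noise can be sketched as follows. This is a plain RGB implementation for illustration only: it omits the fuzzy peer-group detection and the CIELAB processing described in the paper, and the explicit loops make it suitable only for small images.

```python
import numpy as np

def vector_median_filter(img, radius=1):
    """Vector median filter for RGB images (3x3 window by default): each pixel
    is replaced by the window color minimizing the summed Euclidean distance
    to all other colors in the window, which suppresses impulse noise without
    introducing false colors."""
    h, w, _ = img.shape
    pad = np.pad(img.astype(float),
                 ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.empty((h, w, 3), dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1].reshape(-1, 3)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[y, x] = win[np.argmin(d)]
    return out.astype(img.dtype)

# denoised = vector_median_filter(noisy_rgb)
```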

  9. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    Science.gov (United States)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.

  10. Preparing Colorful Astronomical Images II

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  11. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research is to develop computer software for automatic fault diagnosis based on images acquired from an infrared thermography camera, using a color segmentation approach. The technique detects hot spots in plant equipment. The image acquired from the camera is first converted to the RGB (Red, Green, Blue) model and then to the CMYK (Cyan, Magenta, Yellow, Key for Black) model. Assuming that yellow in the image represents a hot spot in the equipment, the CMYK image is then analyzed using a color segmentation model to estimate the fault. The software was implemented in the Borland Delphi 7.0 programming language and its performance was tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a performance of 80% on the 10 input images. (author)
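
    The RGB-to-CMYK step and a simple yellow-channel threshold can be sketched as follows; the threshold and the decision rule are placeholders, not the values used by the described software.

```python
import numpy as np

def yellow_channel(rgb):
    """Yellow plane of a simple RGB -> CMYK conversion (all values in [0, 1])."""
    rgb = rgb.astype(np.float64)
    k = 1.0 - rgb.max(axis=2)                 # K = 1 - max(R, G, B)
    denom = np.clip(1.0 - k, 1e-6, None)
    return (1.0 - rgb[..., 2] - k) / denom    # Y = (1 - B - K) / (1 - K)

def hot_spot_mask(rgb, threshold=0.6):
    """Flag pixels whose yellow content exceeds a threshold, as a stand-in
    for the color-segmentation step used for automatic fault detection."""
    return yellow_channel(rgb) > threshold

# mask = hot_spot_mask(thermo_img / 255.0)
# fault_suspected = mask.mean() > 0.01        # e.g. more than 1% of pixels are hot
```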

  12. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    Science.gov (United States)

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to this high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then converted into the final color image using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.

  13. Seepage phenomena on Mars at subzero temperature

    Science.gov (United States)

    Kereszturi, Akos; Möhlmann, Diedrich; Berczi, Szaniszlo; Ganti, Tibor; Horvath, Andras; Kuti, Adrienn; Pocs, Tamas; Sik, Andras; Szathmary, Eors

    In the southern hemisphere of Mars, seasonal slope structures emanating from Dark Dune Spots are visible in MGS MOC and MRO HiRISE images. Based on their analysis, two groups of streaks could be identified: diffuse and fan-shaped ones forming in an earlier phase of local spring, probably by CO2 gas jets, and confined streaks forming only on steep slopes during a later seasonal phase. The dark color of the streaks may arise from the dark color of the dune grains where the surface frost above them has disappeared, or be caused by the phase change of water ice to liquid-like water, or it may even be influenced by salts dissolved in the undercooled interfacial water. The second group's morphology (meandering style, ponds at their ends), morphometry, and related theoretical modelling suggest they may form by undercooled water that remains in a liquid phase in a thin layer around solid grains. We analyzed sequences of images, temperature and topographic data of Russel (54S 12E), Richardson (72S 180E) and an unnamed crater (68S 2E) during southern spring. The dark streaks there show slow motion, with an average speed of a meter per day, when the maximal daytime temperature is between 190 and 220 K. Based on thermophysical considerations, a thin layer of interfacial water is inevitable on mineral surfaces under the present conditions of Mars. With 10 precipitable micrometers of atmospheric water vapor, a liquid phase can be present down to about 190 K. Under such conditions dark streaks may form by the movement of grains lubricated by interfacial water. This possibility has various consequences for chemical, mechanical and possibly astrobiological processes on Mars. Acknowledgment: This work was supported by the ESA ECS-project No. 98004 and the Pro Renovanda Cultura Hungariae Foundation.

  14. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    Science.gov (United States)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.

  15. 'McMurdo' Panorama from Spirit's 'Winter Haven' (False Color)

    Science.gov (United States)

    2006-01-01

    This 360-degree view, called the 'McMurdo' panorama, comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Spirit. From April through October 2006, Spirit has stayed on a small hill known as 'Low Ridge.' There, the rover's solar panels are tilted toward the sun to maintain enough solar power for Spirit to keep making scientific observations throughout the winter on southern Mars. This view of the surroundings from Spirit's 'Winter Haven' is presented in exaggerated color to enhance color differences among rocks, soils and sand. Oct. 26, 2006, marks Spirit's 1,000th sol of what was planned as a 90-sol mission. (A sol is a Martian day, which lasts 24 hours, 39 minutes, 35 seconds). The rover has lived through the most challenging part of its second Martian winter. Its solar power levels are rising again. Spring in the southern hemisphere of Mars will begin in early 2007. Before that, the rover team hopes to start driving Spirit again toward scientifically interesting places in the 'Inner Basin' and 'Columbia Hills' inside Gusev crater. The McMurdo panorama is providing team members with key pieces of scientific and topographic information for choosing where to continue Spirit's exploration adventure. The Pancam began shooting component images of this panorama during Spirit's sol 814 (April 18, 2006) and completed the part shown here on sol 932 (Aug. 17, 2006). The panorama was acquired using all 13 of the Pancam's color filters, using lossless compression for the red and blue stereo filters, and only modest levels of compression on the remaining filters. The overall panorama consists of 1,449 Pancam images and represents a raw data volume of nearly 500 megabytes. It is thus the largest, highest-fidelity view of Mars acquired from either rover. Additional photo coverage of the parts of the rover deck not shown here was completed on sol 980 (Oct. 5 , 2006). The team is completing the processing and mosaicking of those final pieces of the panorama

  16. Imaging tristimulus colorimeter for the evaluation of color in printed textiles

    Science.gov (United States)

    Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.

    1999-03-01

    The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g. the ΔECMC color difference formula. Image based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (under perceptibility limits) per-region-based ΔECMC values can be measured on real textile samples.
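    The POCS idea above can be sketched with two convex sets: an affine set encoding a (linearized) camera measurement and a box constraint on admissible tristimulus values. The 3x3 matrix and the measurement vector below are hypothetical placeholders, not the calibrated spectral model used in the paper.

```python
import numpy as np

# Hypothetical 3x3 matrix relating true tristimulus values x to camera readings m = A @ x;
# in practice A would be derived from the illuminant, filter and sensor spectra.
A = np.array([[0.90, 0.20, 0.10],
              [0.10, 0.80, 0.20],
              [0.05, 0.10, 0.90]])
m = np.array([0.55, 0.40, 0.30])           # measured (imperfect) component values

def project_affine(x, A, m):
    """Project x onto the affine set {x : A x = m}."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - m)

def project_box(x, lo=0.0, hi=1.0):
    """Project x onto the box constraint lo <= x <= hi."""
    return np.clip(x, lo, hi)

x = np.zeros(3)                            # start from any point
for _ in range(50):                        # alternate projections (POCS iteration)
    x = project_box(project_affine(x, A, m))
print("estimated tristimulus values:", x)
```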

  17. Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2017-10-01

    Full Text Available A novel efficient method for content-based image retrieval (CBIR) is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints) within the image. A dissimilarity measure between images is then calculated based on the geometric distance between the topological feature spaces (i.e., manifolds) formed by the sets of local descriptors generated from each image of the database. In this work, we propose to extract and use the local extrema pixels as our feature points. Then, the so-called local extrema-based descriptor (LED) is generated for each keypoint by integrating all color, spatial as well as gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud, and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed on several color texture databases including Vistex, STex, color Brodatz, USPtex and Outex TC-00013 using the proposed approach provide very efficient and competitive results compared to the state-of-the-art methods.
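    A reduced sketch of two ingredients mentioned above is given below: detection of local-extrema keypoints and an affine-invariant Riemannian distance between covariance descriptors. It simplifies the paper's manifold-of-descriptors comparison to a single covariance matrix per image, and the feature vectors are random stand-ins.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.linalg import sqrtm, logm

def local_extrema(gray, size=5):
    """Boolean masks of local maxima and minima, used here as the keypoints."""
    mx = gray == maximum_filter(gray, size=size)
    mn = gray == minimum_filter(gray, size=size)
    return mx, mn

def riemannian_distance(c1, c2):
    """Affine-invariant distance between two SPD covariance descriptors."""
    s = np.real(np.linalg.inv(sqrtm(c1)))
    return np.linalg.norm(np.real(logm(s @ c2 @ s)), 'fro')

rng = np.random.default_rng(1)
gray = rng.random((32, 32))
mx, mn = local_extrema(gray)
print("keypoints found:", int(mx.sum() + mn.sum()))

# toy covariance descriptors built from (colour + spatial + gradient) feature vectors
feats_a = rng.normal(size=(200, 6))
feats_b = rng.normal(size=(180, 6)) * 1.3
c_a = np.cov(feats_a, rowvar=False) + 1e-6 * np.eye(6)
c_b = np.cov(feats_b, rowvar=False) + 1e-6 * np.eye(6)
print("dissimilarity:", riemannian_distance(c_a, c_b))
```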

  18. Consistency and standardization of color in medical imaging: a consensus report.

    Science.gov (United States)

    Badano, Aldo; Revie, Craig; Casertano, Andrew; Cheng, Wei-Chung; Green, Phil; Kimpe, Tom; Krupinski, Elizabeth; Sisson, Christye; Skrøvseth, Stein; Treanor, Darren; Boynton, Paul; Clunie, David; Flynn, Michael J; Heki, Tatsuo; Hewitt, Stephen; Homma, Hiroyuki; Masia, Andy; Matsui, Takashi; Nagy, Balázs; Nishibori, Masahiro; Penczek, John; Schopf, Thomas; Yagi, Yukako; Yokoi, Hideto

    2015-02-01

    This article summarizes the consensus reached at the Summit on Color in Medical Imaging held at the Food and Drug Administration (FDA) on May 8-9, 2013, co-sponsored by the FDA and ICC (International Color Consortium). The purpose of the meeting was to gather information on how color is currently handled by medical imaging systems to identify areas where there is a need for improvement, to define objective requirements, and to facilitate consensus development of best practices. Participants were asked to identify areas of concern and unmet needs. This summary documents the topics that were discussed at the meeting and recommendations that were made by the participants. Key areas identified where improvements in color would provide immediate tangible benefits were those of digital microscopy, telemedicine, medical photography (particularly ophthalmic and dental photography), and display calibration. Work in these and other related areas has been started within several professional groups, including the creation of the ICC Medical Imaging Working Group.

  19. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    Science.gov (United States)

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper recommends a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is an important technique for hiding information (audio, video, color images, gray-scale images) in digital objects, and it has become common with the developing technology of the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
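    A generic block-DCT embedding step is sketched below to illustrate how a watermark bit can be hidden in a mid-frequency coefficient; the chosen coefficient position and embedding strength are assumptions for illustration, not the rule used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, strength=12.0):
    """Embed one watermark bit in the sign of a mid-frequency DCT coefficient."""
    coeffs = dctn(block.astype(float), norm='ortho')
    magnitude = abs(coeffs[3, 4]) + strength        # (3, 4) is an arbitrary mid-band slot
    coeffs[3, 4] = magnitude if bit else -magnitude
    return idctn(coeffs, norm='ortho')

def extract_bit(block):
    coeffs = dctn(block.astype(float), norm='ortho')
    return int(coeffs[3, 4] > 0)

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
marked = embed_bit(block, bit=1)
print("recovered bit:", extract_bit(marked))
```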

  20. A pilot study of three dimensional color CT images of brain diseases to improve informed consent

    International Nuclear Information System (INIS)

    Tanizaki, Yoshio; Akiyama, Takenori; Hiraga, Kenji; Akaji, Kazunori

    2005-01-01

    We have described brain diseases to patients and their families using monochrome CT images. Patients may have difficulty giving consent based on this conventional explanation because their understanding of brain diseases relies on three-dimensional, color mental images, whereas standard CT images are two-dimensional and gray scale. We have been trying to use three-dimensional color CT images to improve the typical patient's comprehension of brain diseases. We also try to simulate surgery using these images. Multi-slice CT accumulates precise isotropic voxel data within half a minute. These two-dimensional, monochrome data are converted to three-dimensional color CT images on a 3D workstation. Three-dimensional color CT images of each brain structure (e.g. scalp, skull, brain, ventricles and lesions) are created separately. Selected structures are then fused together for different purposes. These images can be rotated around any axis. Because methods to generate three-dimensional color images have not been established, we neurosurgeons must create these images ourselves. In particular, when an operation is required, the surgeon should create the images. In this paper, we demonstrate how three-dimensional color CT images can improve informed consent. (author)

  1. Community tools for cartographic and photogrammetric processing of Mars Express HRSC images

    Science.gov (United States)

    Kirk, Randolph L.; Howington-Kraus, Elpitha; Edmundson, Kenneth L.; Redding, Bonnie L.; Galuszka, Donna M.; Hare, Trent M.; Gwinner, K.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.

    2017-01-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~ 77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e. g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995) which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was

  2. Colorization and automated segmentation of human T2 MR brain images for characterization of soft tissues.

    Directory of Open Access Journals (Sweden)

    Muhammad Attique

    Full Text Available Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, in bringing out additional diagnostic tissue information contained in the colorized image processing approach as described.
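    The clustering step can be approximated with plain intensity k-means into three classes, as sketched below; this is a stand-in for the paper's single-phase clustering with auto centroid selection, and the toy intensity populations are synthetic.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Plain k-means on voxel intensities; returns labels 0..k-1 sorted by centroid."""
    centroids = np.percentile(values, np.linspace(10, 90, k))   # simple initialisation
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    order = np.argsort(centroids)
    remap = np.zeros(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centroids[order]

# toy "T2 slice": three intensity populations standing in for CSF, GM and WM
rng = np.random.default_rng(3)
slice_ = np.concatenate([rng.normal(40, 5, 500),
                         rng.normal(110, 8, 800),
                         rng.normal(180, 6, 700)])
labels, centroids = kmeans_1d(slice_)
print("class centroids:", np.round(centroids, 1))
```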

  3. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  4. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    Science.gov (United States)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote monitoring system for color imaging were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital cameras both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte background, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system throughout the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for the simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
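    A common way to implement such a chart-based calibration is to fit an affine correction matrix from the measured patch values to the known reference values by least squares, as sketched below; the patch values shown are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical chart data: rows are patches, columns are R, G, B in [0, 1].
measured  = np.array([[0.42, 0.30, 0.24],   # values read from the camera image
                      [0.70, 0.62, 0.50],
                      [0.25, 0.40, 0.55],
                      [0.80, 0.75, 0.70],
                      [0.15, 0.18, 0.20],
                      [0.55, 0.35, 0.30]])
reference = np.array([[0.40, 0.28, 0.22],   # known chart values under standard light
                      [0.68, 0.60, 0.48],
                      [0.22, 0.38, 0.56],
                      [0.82, 0.78, 0.72],
                      [0.12, 0.15, 0.18],
                      [0.52, 0.32, 0.28]])

# Affine correction: a constant term absorbs offsets caused by ambient light.
X = np.hstack([measured, np.ones((measured.shape[0], 1))])
M, *_ = np.linalg.lstsq(X, reference, rcond=None)     # 4x3 correction matrix

def calibrate(rgb):
    """Apply the fitted correction to a camera RGB triplet."""
    return np.clip(np.append(rgb, 1.0) @ M, 0.0, 1.0)

print(calibrate(np.array([0.5, 0.4, 0.3])))
```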

  5. Limb clouds and dust on Mars from images obtained by the Visual Monitoring Camera (VMC) onboard Mars Express

    Science.gov (United States)

    Sánchez-Lavega, A.; Chen-Chen, H.; Ordoñez-Etxeberria, I.; Hueso, R.; del Río-Gaztelurrutia, T.; Garro, A.; Cardesín-Moinelo, A.; Titov, D.; Wood, S.

    2018-01-01

    The Visual Monitoring Camera (VMC) onboard the Mars Express (MEx) spacecraft is a simple camera aimed to monitor the release of the Beagle-2 lander on Mars Express and later used for public outreach. Here, we employ VMC as a scientific instrument to study and characterize high altitude aerosols events (dust and condensates) observed at the Martian limb. More than 21,000 images taken between 2007 and 2016 have been examined to detect and characterize elevated layers of dust in the limb, dust storms and clouds. We report a total of 18 events for which we give their main properties (areographic location, maximum altitude, limb projected size, Martian solar longitude and local time of occurrence). The top altitudes of these phenomena ranged from 40 to 85 km and their horizontal extent at the limb ranged from 120 to 2000 km. They mostly occurred at Equatorial and Tropical latitudes (between ∼30°N and 30°S) at morning and afternoon local times in the southern fall and northern winter seasons. None of them are related to the orographic clouds that typically form around volcanoes. Three of these events have been studied in detail using simultaneous images taken by the MARCI instrument onboard Mars Reconnaissance Orbiter (MRO) and studying the properties of the atmosphere using the predictions from the Mars Climate Database (MCD) General Circulation Model. This has allowed us to determine the three-dimensional structure and nature of these events, with one of them being a regional dust storm and the two others water ice clouds. Analyses based on MCD and/or MARCI images for the other cases studied indicate that the rest of the events correspond most probably to water ice clouds.

  6. Color-coded MR imaging phase velocity mapping with the Pixar image processor

    International Nuclear Information System (INIS)

    Singleton, H.R.; Cranney, G.B.; Pohost, G.M.

    1989-01-01

    The authors have developed a graphic interaction technique in which a mouse and cursor are used to assign colors to phase-sensitive MR images of the heart. Two colors are used, one for flow in the positive direction, another for flow in the negative direction. A lookup table is generated interactively by manipulating lines representing ramps superimposed on an intensity histogram. Intensity is made to vary with flow magnitude in each color's direction. Coded series of the ascending and descending aorta, and of two- and four-chamber views of the heart, have been generated. In conjunction with movie display, flow dynamics, especially changes in direction, are readily apparent
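    The two-color lookup table described above can be sketched as follows: one hue per flow direction, with intensity scaled by flow magnitude. The colors, table size and velocity values are illustrative assumptions, and the interactive ramp manipulation of the original system is not reproduced.

```python
import numpy as np

def flow_lookup_table(levels=256, pos_color=(1.0, 0.3, 0.1), neg_color=(0.1, 0.4, 1.0)):
    """RGB lookup table: one hue per flow direction, intensity proportional to |velocity|.
    Index 0 maps to the most negative velocity; the middle index maps to zero flow."""
    lut = np.zeros((levels, 3))
    half = levels // 2
    for i in range(levels):
        if i >= half:                       # positive flow direction
            mag = (i - half) / (levels - 1 - half)
            lut[i] = np.array(pos_color) * mag
        else:                               # negative flow direction
            mag = (half - i) / half
            lut[i] = np.array(neg_color) * mag
    return lut

# map a signed phase-velocity image (cm/s) through the table
velocity = np.array([[-80.0, -10.0, 0.0], [5.0, 40.0, 120.0]])
vmax = np.abs(velocity).max()
idx = np.round((velocity / vmax + 1.0) / 2.0 * 255).astype(int)
rgb = flow_lookup_table()[idx]             # colour-coded image, shape (2, 3, 3)
print(rgb[0, 0], rgb[1, 2])
```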

  7. Color management systems: methods and technologies for increased image quality

    Science.gov (United States)

    Caretti, Maria

    1997-02-01

    All the steps in the imaging chain -- from handling the originals in prepress to outputting them on any device -- have to be well calibrated and adjusted to each other in order to reproduce color images in a desktop environment as accurately as possible with respect to the original. Today most of the steps in prepress production are digital, and it is therefore realistic to believe that color reproduction can be well controlled, thanks not least to the development in recent years of fast, cost-effective scanners, digital sources and digital proofing devices. It is reasonable to expect that well-defined tools and methods to control this imaging flow will lead to large cost and time savings as well as increased overall image quality. Until now, there has been a lack of good, reliable, easy-to-use systems (e.g. hardware, software, documentation, training and support) accessible to the large group of users of graphic arts production systems. This paper provides an overview of the existing solutions for managing color in a digital prepress environment. Their benefits and limitations are discussed, as well as how they affect the production workflow and organization. The difference between a color-controlled environment and one that is not is explained.

  8. The Athena Science Payload for the 2003 Mars Exploration Rovers

    Science.gov (United States)

    Squyres, S. W.; Arvidson, R. E.; Bell, J. F., III; Carr, M.; Christensen, P.; DesMarais, D.; Economou, T.; Gorevan, S.; Haskin, L.; Herkenhoff, K.

    2001-01-01

    The Athena Mars rover payload is a suite of scientific instruments and tools for geologic exploration of the martian surface. It is designed to: (1) Provide color stereo imaging of martian surface environments, and remotely-sensed point discrimination of mineralogical composition. (2) Determine the elemental and mineralogical composition of martian surface materials, including soils, rock surfaces, and rock interiors. (3) Determine the fine-scale textural properties of these materials. Two identical copies of the Athena payload will be flown in 2003 on the two Mars Exploration Rovers. The payload is at a high state of maturity, and first copies of several of the instruments have already been built and tested for flight.

  9. Color Views of Soil Scooped on Sol 9

    Science.gov (United States)

    2008-01-01

    These three color views show the Robotic Arm scoop from NASA's Phoenix Mars Lander. The image shows a handful of Martian soil dug from the digging site informally called 'Knave of Hearts,' from the trench informally called 'Dodo,' on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). 'Dodo' is the same site as the earlier test trench dug on the seventh Martian day of the mission, or Sol 7 (June 1, 2008). The Robotic Arm Camera took the three color views at different focus positions. Scientists can better study soil structure and estimate how much soil was collected by taking multiple images at different foci. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Endosonographic and color doppler flow imaging alterations observed within irradiated rectal cancer

    International Nuclear Information System (INIS)

    Alexander, Archie A.; Palazzo, Juan P.; Ahmad, Neelofur R.; Liu, J.-B.; Forsberg, Flemming; Marks, John

    1996-01-01

    Purpose: To correlate the endosonographic and color Doppler flow imaging alterations observed in irradiated rectal cancers with the pathologic features of radiation response, and to evaluate the potential impact of altered blood flow on the integrity of the surgical anastomosis. Methods and Materials: Endosonography with color and pulsed wave Doppler was performed on 20 rectal cancer masses before and after high dose preoperative radiation (XRT). Pre- and post-XRT observations included comparing alterations in tumor size, sonographic echotexture, color Doppler flow, and pulsatility indices. Comparisons were made with pathologic findings in the irradiated specimens and with the incidence of anastomotic failure. Results: Compared to pre-XRT observations, irradiated rectal cancers decreased in size and became either mixed in echogenicity with less apparent color Doppler flow (16 of 20) or unchanged in color Doppler flow and echotexture (4 of 20). Those with less flow (16 of 20) were imaged later (mean = 90.2 ± 12.1 days) than those without change in color Doppler flow (mean = 21.7 ± 2.7 days). Pathologically, the group of four without change in color Doppler signal had features of acute inflammation which were not observed in 16 of 20 imaged later. Based on pulsatility index measurements, both high and low resistance vessels were detected and confirmed by immunohistochemical staining, and features of postradiation obliterative vasculitis were observed. Only one primary anastomosis in 14 patients with decreased flow failed. Conclusions: The sonographic and color Doppler flow imaging alterations observed within irradiated rectal cancer correlated with changes of postradiation obliterative vasculitis. The apparent diminished local blood flow within high and low resistance vessels post-XRT did not result in an increased incidence of anastomotic failures.

  11. Science Applications of a Multispectral Microscopic Imager for the Astrobiological Exploration of Mars

    Science.gov (United States)

    Farmer, Jack D.; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.

    2014-01-01

    Abstract Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Key Words: Mars—Microscopic imager—Multispectral imaging

  12. Science applications of a multispectral microscopic imager for the astrobiological exploration of Mars

    Science.gov (United States)

    Nunez, Jorge; Farmer, Jack; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.

    2014-01-01

    Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars.

  13. Phylloedes tumor of breast: findings at mammography, sonography and color Doppler imaging

    International Nuclear Information System (INIS)

    Park, Kun Choon; Ahn, Sei Hyun; Kim, Young Hwan; Choi, Hye Yong; Baek, Seung Yon; Yoon, Jeong Hyun

    1994-01-01

    The phylloides tumor of the breast is rare. The purposes of this study were to find the characteristic findings at mammography, sonography, and color Doppler imaging and to evaluate the usefulness of color Doppler study as an additional modality in the diagnosis of phylloides tumor and in the differentiation between benign and malignant varieties. Eight cases, pathologically proven as phylloides tumors, were retrospectively studied. The findings at histologic examination suggested benign tumors in five, malignant in two, and borderline in one. We analyzed the mammograms of all eight patients and the sonograms and color Doppler images of four patients. Phylloides tumors were seen as dense masses with lobulated margins on mammograms. On sonography, they showed relatively well-defined masses with an inhomogeneous internal echo pattern and central echogenic areas. They were characterized by the presence of arterial and venous flows in the center and periphery of the lesion on color Doppler imaging and spectral analysis. We conclude that mammographic, sonographic and even color Doppler findings are not predictive of the benign or malignant nature of the phylloides tumor. However, mammography and sonography with color Doppler interrogation are helpful in the diagnosis of phylloides tumor.

  14. #TheDress: Categorical perception of an ambiguous color image.

    Science.gov (United States)

    Lafer-Sousa, Rosa; Conway, Bevil R

    2017-10-01

    We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region.

  15. #TheDress: Categorical perception of an ambiguous color image

    Science.gov (United States)

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2017-01-01

    We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region. PMID:29090319

  16. Color evaluation of computer-generated color rainbow holography

    International Nuclear Information System (INIS)

    Shi, Yile; Wang, Hui; Wu, Qiong

    2013-01-01

    A color evaluation approach for computer-generated color rainbow holography (CGCRH) is presented. Firstly, the relationship between color quantities of a computer display and a color computer-generated holography (CCGH) colorimetric system is discussed based on color matching theory. An isochromatic transfer relationship of color quantity and amplitude of object light field is proposed. Secondly, the color reproduction mechanism and factors leading to the color difference between the color object and the holographic image that is reconstructed by CGCRH are analyzed in detail. A quantitative color calculation method for the holographic image reconstructed by CGCRH is given. Finally, general color samples are selected as numerical calculation test targets and the color differences between holographic images and test targets are calculated based on our proposed method. (paper)

  17. A Fast, Background-Independent Retrieval Strategy for Color Image Databases

    National Research Council Canada - National Science Library

    Das, M; Draper, B. A; Lim, W. J; Manmatha, R; Riseman, E. M

    1996-01-01

    We describe an interactive, multi-phase color-based image retrieval system which is capable of identifying query objects specified by the user in an image in the presence of significant, interfering backgrounds...

  18. Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking

    Science.gov (United States)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS. The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970's. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. This Viking Orbiter image is one of the best Viking pictures of the Cydonia area, where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  19. Use of image analysis to assess color response on plants caused by herbicide application

    DEFF Research Database (Denmark)

    Asif, Ali; Streibig, Jens Carl; Duus, Joachim

    2013-01-01

    In herbicide-selectivity experiments, response can be measured by visual inspection, stand counts, plant mortality, and biomass. Some response types are relative to nontreated control. We developed a nondestructive method by analyzing digital color images to quantify color changes in leaves caused by herbicides. The range of color components of green and nongreen parts of the plants and soil in Hue, Saturation, and Brightness (HSB) color space were used for segmentation. The canopy color changes of barley, winter wheat, red fescue, and brome fescue caused by doses of a glyphosate and diflufenican mixture ... for the green and nongreen parts of the plants and soil were different. The relative potencies were not significantly different from one, indicating that visual and image analysis estimations were about the same. The comparison results suggest that image analysis can be used to assess color changes of plants ...
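    Segmentation of green and nongreen parts in HSB (HSV) space can be sketched with simple channel thresholds, as below; the hue, saturation and brightness limits are illustrative assumptions and would need tuning to the camera and lighting used in the experiments.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def green_mask(rgb, hue_range=(0.17, 0.45), min_sat=0.25, min_val=0.15):
    """Flag 'green' canopy pixels in HSV/HSB space; thresholds are illustrative."""
    hsv = rgb_to_hsv(rgb.astype(float) / 255.0)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_sat) & (v >= min_val)

# toy image: a green "leaf" column next to a brown "soil" column
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, 0] = (60, 140, 50)      # leaf-like green
img[:, 1] = (120, 90, 60)      # soil-like brown
mask = green_mask(img)
print("fraction of green pixels:", mask.mean())
```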

  20. One-Shot Color Astronomical Imaging In Less Time, For Less Money!

    CERN Document Server

    Kennedy, L A

    2012-01-01

    Anyone who has seen recent pictures of the many wondrous objects in space has surely been amazed by the stunning color images. Trying to capture images like these through your own telescope has always seemed too time-consuming, expensive, and complicated. However, with improvements in affordable, easy-to-use CCD imaging technology, you can now capture amazing images yourself. With today's improved "one-shot" color imagers, high-quality images can be taken in a fraction of the time and at a fraction of the cost, right from your own backyard. This book will show you how to harness the power of today's computerized telescopes and entry-level imagers to capture spectacular images that you can share with family and friends. It covers such topics as - evaluating your existing equipment, choosing the right imager, finding targets to image, telescope alignment, focusing and framing the image, exposure times, aligning and stacking multiple frames, image calibration, and enhancement techniques! - how to expand the numb...

  1. Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT

    Science.gov (United States)

    Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep

    2014-08-01

    The security of image data is currently a major issue. We therefore propose a novel technique for securing color image data with a public-key (asymmetric) cryptosystem. In this technique, we secure color image data using the RSA (Rivest-Shamir-Adleman) cryptosystem together with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, whereas this approach provides security through both the keys and the correct arrangement of the RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on standard examples critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also presented to demonstrate the robustness of the cryptosystem.

  2. Cryptanalysis and Improvement of the Robust and Blind Watermarking Scheme for Dual Color Image

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2015-01-01

    Full Text Available With more color images being widely used on the Internet, research on embedding a color watermark image into a color host image has been receiving increasing attention. Recently, Su et al. proposed a robust and blind watermarking scheme for dual color images, in which the main innovation is the use of two-level DCT. However, it is demonstrated in this paper that the original scheme in Su’s study is not secure and can be attacked by our proposed method. In addition, some errors in the original scheme are pointed out, and an improvement measure is presented to enhance the security of the original watermarking scheme. The proposed method has been confirmed by both theoretical analysis and experimental results.

  3. Building Virtual Mars

    Science.gov (United States)

    Abercrombie, S. P.; Menzies, A.; Goddard, C.

    2017-12-01

    Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.

  4. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    Science.gov (United States)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
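    For orientation, the sketch below applies a generic global luminance-compression operator of the L/(1+L) type to an HDR image; it is not the paper's combined tone-mapping and cone-response model, and the key value and luminance weights are conventional assumptions.

```python
import numpy as np

def simple_tonemap(hdr_rgb, key=0.18, eps=1e-6):
    """Global luminance compression followed by per-pixel colour re-scaling.
    A generic operator for illustration only."""
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))         # estimate of the scene "key"
    scaled = key * lum / log_avg
    lum_d = scaled / (1.0 + scaled)                      # displayable luminance in [0, 1)
    ratio = np.where(lum > eps, lum_d / (lum + eps), 0.0)
    return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)

# toy HDR data spanning several orders of magnitude
rng = np.random.default_rng(4)
hdr = rng.uniform(0.01, 50.0, size=(4, 4, 3))
ldr = simple_tonemap(hdr)
print(ldr.min(), ldr.max())
```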

  5. Microscope on Mars

    Science.gov (United States)

    2004-01-01

    This image taken at Meridiani Planum, Mars by the panoramic camera on the Mars Exploration Rover Opportunity shows the rover's microscopic imager (circular device in center), located on its instrument deployment device, or 'arm.' The image was acquired on the ninth martian day or sol of the rover's mission.

  6. Generating color terrain images in an emergency response system

    International Nuclear Information System (INIS)

    Belles, R.D.

    1985-08-01

    The Atmospheric Release Advisory Capability (ARAC) provides real-time assessments of the consequences resulting from an atmospheric release of radioactive material. In support of this operation, a system has been created which integrates numerical models, data acquisition systems, data analysis techniques, and professional staff. Of particular importance is the rapid generation of graphical images of the terrain surface in the vicinity of the accident site. A terrain data base and an associated acquisition system have been developed that provide the required terrain data. This data is then used as input to a collection of graphics programs which create and display realistic color images of the terrain. The graphics system currently has the capability of generating color shaded relief images from both overhead and perspective viewpoints within minutes. These images serve to quickly familiarize ARAC assessors with the terrain near the release location, and thus permit them to make better informed decisions in modeling the behavior of the released material. 7 refs., 8 figs
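    A basic building block of such color shaded-relief images is a Lambertian hillshade computed from the terrain grid, as sketched below; the sun azimuth, altitude and cell size are illustrative assumptions, and the ARAC color-compositing and perspective-view steps are not shown.

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic Lambertian shaded relief from a DEM grid."""
    az = np.radians(360.0 - azimuth_deg + 90.0)          # convert to math convention
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# toy terrain: a single Gaussian hill
y, x = np.mgrid[0:64, 0:64]
dem = 500.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
relief = hillshade(dem)
print(relief.shape, relief.min(), relief.max())
```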

  7. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU). iMAR-Algo2 and iMAR-Algo3 reconstructions significantly decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1. All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analyses. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  8. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    Science.gov (United States)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as the recent discovery of boulder movement), track inter-year seasonal changes and look for occurrences of fresh craters. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, of which more than 65% is useful for stereo mapping. NASA began imaging the surface of Mars initially from flybys in the 1960s, and then from orbiters, with the Viking Orbiters providing image resolution better than 100 m in the late 1970s. The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORIs) will be used as a georeference basis, so that all higher-resolution ORIs, from CTX (6-20 m grid) and HiRISE (1-3 m grids), will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR, on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, many of the entire NASA and ESA orbital images will be co-registered and the updated georeferencing

  9. Imaging the Extended Hot Hydrogen Exosphere at Mars to Determine the Water Escape Rate

    Science.gov (United States)

    Bhattacharyya, Dolon

    2017-08-01

    ACS SBC imaging of the extended hydrogen exosphere of Mars is proposed to identify the hot hydrogen population present in the exosphere of Mars. Determining the characteristics of this population and the underlying processes responsible for its production are critical towards constraining the escape flux of H from Mars, which in turn is directly related to the water escape history of Mars. Since the hot atoms appear mainly at high altitudes, these observations will be scheduled when Mars is far from Earth allowing us to image the hot hydrogen atoms at high altitudes where they dominate the population. The altitude coverage by HST will extend beyond 30,000 km or 8.8 Martian radii in this case, which makes it perfect for this study as orbiting spacecraft remain at low altitudes (MAVEN apoapse is 6000 km) and cannot separate hot atoms from the thermal population at those altitudes. The observations will also be carried out when Mars is near aphelion, the atmospheric temperature is low, and the thermal population has a small scale height, allowing the clear characterization of the hot hydrogen layer. Another advantage of conducting this study in this cycle is that the solar activity is near its minimum, allowing us to discriminate between changes in the hot hydrogen population from processes taking place within the atmosphere of Mars and changes due to external drivers like the solar wind, producing this non-thermal population. This proposal is part of the HST UV initiative.

  10. Visibility enhancement of color images using Type-II fuzzy membership function

    Science.gov (United States)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions decrease the visibility and hidden information of digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over/under enhancement issues. Fuzzy-based enhancement techniques suffer from over/under saturated pixels problems. In this paper, a novel Type-II fuzzy-based image enhancement technique has been proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms others regarding visible edge ratio, color gradients and number of saturated pixels.

  11. Luminance contours can gate afterimage colors and "real" colors.

    Science.gov (United States)

    Anstis, Stuart; Vergeer, Mark; Van Lier, Rob

    2012-09-06

    It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.

  12. Modeling human color categorization: Color discrimination and color memory

    NARCIS (Netherlands)

    Heskes, T.; van den Broek, Egon; Lucas, P.; Hendriks, Maria A.; Vuurpijl, L.G.; Puts, M.J.H.; Wiegerinck, W.

    2003-01-01

    Color matching in Content-Based Image Retrieval is done using a color space and measuring distances between colors. Such an approach yields non-intuitive results for the user. We introduce color categories (or focal colors), determine that they are valid, and use them in two experiments. The

  13. Luminance contours can gate afterimage colors and 'real' colors

    NARCIS (Netherlands)

    Anstis, S.; Vergeer, M.L.T.; Lier, R.J. van

    2012-01-01

    It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The

  14. Quantum color image watermarking based on Arnold transformation and LSB steganography

    Science.gov (United States)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling of Arnold transformations and steganography of least significant bit (LSB). Both carrier image and watermark images are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for carrier and watermark are assumed to be 2n×2n and 2n‑1×2n‑1, respectively. At first, the watermark is scrambled into a disordered form through image preprocessing technique of exchanging the image pixel position and altering the color information based on Arnold transforms, simultaneously. Then, the scrambled watermark with 2n‑1×2n‑1 image size and 24-qubit grayscale is further expanded to an image with size 2n×2n and 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by steganography of LSB scheme, and a key image with 2n×2n size and 3-qubit information is generated at the meantime, which only can use the key image to retrieve the original watermark. The extraction of watermark is the reverse process of embedding, which is achieved by applying a sequence of operations in the reverse order. Simulation-based experimental results involving different carrier and watermark images (i.e. conventional or non-quantum) are simulated based on the classical computer’s MATLAB 2014b software, which illustrates that the present method has a good performance in terms of three items: visual quality, robustness and steganography capacity.
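    The two classical ingredients of the scheme, Arnold scrambling and LSB embedding, can be sketched on ordinary (non-quantum) arrays as below; the NCQI quantum representation, the color-channel handling and the key-image generation of the paper are not reproduced, and the image sizes are toy values.

```python
import numpy as np

def arnold_scramble(img, iterations=5):
    """Arnold cat map on a square NxN image: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        scrambled[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=5):
    """Inverse Arnold map: (u, v) -> (2u - v, v - u) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        restored = np.empty_like(out)
        us, vs = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        restored[(2 * us - vs) % n, (vs - us) % n] = out[us, vs]
        out = restored
    return out

def embed_lsb(carrier, watermark_bits):
    """Replace the least significant bit of each carrier pixel with a watermark bit."""
    return (carrier & 0xFE) | (watermark_bits & 0x01)

rng = np.random.default_rng(5)
carrier = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
stego = embed_lsb(carrier, arnold_scramble(mark))
recovered = arnold_unscramble(stego & 0x01)     # extract LSBs, then invert the scrambling
print("watermark recovered:", np.array_equal(recovered, mark))
```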

  15. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
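    The weighting-factor conversion from camera RGB to a luminance-proportional gray scale can be sketched as below; the weights and calibration data are hypothetical placeholders, whereas in the study the factors are derived from the camera's spectral sensitivity and luminance measurements.

```python
import numpy as np

def rgb_to_lcd_luminance(rgb, weights=(0.2, 0.7, 0.1)):
    """Combine camera R, G, B signals with per-channel weighting factors into a
    gray-scale signal proportional to display luminance (weights are placeholders)."""
    return rgb.astype(float) @ np.asarray(weights, dtype=float)

def fit_weights(rgb_patches, measured_luminance):
    """Least-squares estimate of the weighting factors from calibration patches."""
    w, *_ = np.linalg.lstsq(rgb_patches.astype(float), measured_luminance, rcond=None)
    return w

# toy calibration: 5 gray patches captured by the camera and measured with a photometer
rgb_patches = np.array([[10, 12, 11], [60, 63, 60], [120, 118, 121],
                        [180, 182, 179], [240, 238, 241]], dtype=float)
measured = np.array([1.2, 9.5, 35.0, 80.0, 150.0])    # cd/m^2 (hypothetical)
w = fit_weights(rgb_patches, measured)
print("fitted weighting factors:", np.round(w, 4))
print("gray-scale value of a test pixel:", rgb_to_lcd_luminance(np.array([100, 110, 105]), w))
```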

  16. 100 New Impact Crater Sites Found on Mars

    Science.gov (United States)

    Kennedy, M. R.; Malin, M. C.

    2009-12-01

    Recent observations constrain the formation of 100 new impact sites on Mars over the past decade; 19 of these were found using the Mars Global Surveyor Mars Orbiter Camera (MOC), and the other 81 have been identified since 2006 using the Mars Reconnaissance Orbiter Context Camera (CTX). Every 6 meter/pixel CTX image is examined upon receipt and, where they overlap images of 0.3-240 m/pixel scale acquired by the same or other Mars-orbiting spacecraft, we look for features that may have changed. New impact sites are initially identified by the presence of a new dark spot or cluster of dark spots in a CTX image. Such spots may be new impact craters, or result from the effect of impact blasts on the dusty surface. In some (generally rare) cases, the crater is sufficiently large to be resolved in the CTX image. In most cases, however, the crater(s) cannot be seen. These are tentatively designated as “candidate” new impact sites, and the CTX team then creates an opportunity for the MRO spacecraft to point its cameras off-nadir and requests that the High Resolution Imaging Science Experiment (HiRISE) team obtain an image of ~0.3 m/pixel to confirm whether a crater or crater cluster is present. It is clear even from cursory examination that the CTX observations are areographically biased to dusty, higher albedo areas on Mars. All but 3 of the 100 new impact sites occur on surfaces with Lambert albedo values in excess of 23.5%. Our initial study of MOC images greatly benefited from the initial global observations made in one month in 1999, creating a baseline date from which we could start counting new craters. The global coverage by MRO Mars Color Imager is more than a factor of 4 poorer in resolution than the MOC Wide Angle camera and does not offer the opportunity for global analysis. Instead, we must rely on partial global coverage and global coverage that has taken years to accumulate; thus we can only treat impact rates statistically. We subdivide the total data

  17. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    Full Text Available The Internet has affected our everyday life drastically. Expansive volumes of information are exchanged over the Internet constantly, which raises numerous security concerns. Issues such as content identification, document and image security, audience measurement, ownership and copyright can be addressed by digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, further increasing the robustness of the technique to attacks. The method uses entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition to embed the watermark in a color image. Many experiments are performed using well-known signal-processing attacks such as histogram equalization, added noise and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal-processing attacks.
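    One ingredient named above, entropy, is commonly used in such schemes to pick texture-rich blocks that can hide a watermark less visibly. The sketch below illustrates that block-selection idea only; the block size and selection ratio are our assumptions, and the actual embedding in the paper additionally uses DWT, chirp z-transform, QR and SVD steps not shown here.

        # Illustrative block-entropy selection; parameters are assumptions.
        import numpy as np

        def block_entropy(block, bins=256):
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = hist[hist > 0] / block.size
            return -np.sum(p * np.log2(p))

        def select_embedding_blocks(channel, block=8, keep=0.25):
            """Return the offsets of the highest-entropy blocks of one color channel."""
            h, w = channel.shape
            scored = []
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    scored.append((block_entropy(channel[r:r + block, c:c + block]), (r, c)))
            scored.sort(reverse=True)
            return [pos for _, pos in scored[:max(1, int(keep * len(scored)))]]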

  18. Acquisition and visualization techniques for narrow spectral color imaging.

    Science.gov (United States)

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on nonunderwater scenes recorded under controlled conditions. To this end three multilayer narrow bandpass filters were employed, which transmit at 440, 456, and 470 nm bluish wavelengths, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, using which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  19. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    Science.gov (United States)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of tens of centimetres) and subsequent terrain correction of imagery from orbiting spacecraft. This has made it possible to overlay imagery from different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs and, in the case of HiRISE, these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen scientist validation of such data-mining results. An example of the iMars website will be shown along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  20. The utilization of human color categorization for content-based image retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; Kisters, Peter M.F.; Pappas, Thrasyvoulos N.; Vuurpijl, Louis G.

    2004-01-01

    We present the concept of intelligent Content-Based Image Retrieval (iCBIR), which incorporates knowledge concerning human cognition in system development. The present research focuses on the utilization of color categories (or focal colors) for CBIR purposes, which are in particular considered to be useful

  1. Multi-Frequency Encoding for Rapid Color Flow and Quadroplex Imaging

    DEFF Research Database (Denmark)

    Oddershede, Niels; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    Ultrasonic color flow maps are made by estimating the velocities line by line over the region of interest. For each velocity estimate, multiple repetitions are needed. This sets a limit on the frame rate, which becomes increasingly severe when imaging deeper lying structures or when simultaneously...... acquiring spectrogram data for triplex imaging. This paper proposes a method for decreasing the data acquisition time by simultaneously sampling multiple lines at different spatial positions for the color flow map using narrow band signals with disjoint spectral support. The signals are separated...... in the receiver by filters matched to the emitted waveforms and the autocorrelation estimator is applied. Alternatively, one spectral band can be used for creating a color flow map, while data for a number of spectrograms are acquired simultaneously. Using three disjoint spectral bands, this will result...

  2. Plane wave fast color flow mode imaging

    DEFF Research Database (Denmark)

    Bolic, Ibrahim; Udesen, Jesper; Gran, Fredrik

    2006-01-01

    A new Plane wave fast color flow imaging method (PWM) has been investigated, and performance evaluation of the PWM based on experimental measurements has been made. The results show that it is possible to obtain a CFM image using only 8 echo-pulse emissions for beam to flow angles between 45...... degrees and 75 degrees. Compared to the conventional ultrasound imaging the frame rate is similar to 30 - 60 times higher. The bias, B-est of the velocity profile estimate, based on 8 pulse-echo emissions, is between 3.3% and 6.1% for beam to flow angles between 45 degrees and 75 degrees, and the standard...

  3. Modeling human color categorization: Color discrimination and color memory

    OpenAIRE

    Heskes, T.; van den Broek, Egon; Lucas, P.; Hendriks, Maria A.; Vuurpijl, L.G.; Puts, M.J.H.; Wiegerinck, W.

    2003-01-01

    Color matching in Content-Based Image Retrieval is done using a color space and measuring distances between colors. Such an approach yields non-intuitive results for the user. We introduce color categories (or focal colors), determine that they are valid, and use them in two experiments. The experiments conducted prove the difference between color categorization by the cognitive processes color discrimination and color memory. In addition, they yield a Color Look-Up Table, which can improve c...

  4. A simple methodology for obtaining X-ray color images in scanning electron microscopy

    International Nuclear Information System (INIS)

    Veiga, M.M. da; Pietroluongo, L.R.V.

    1985-01-01

    A simple methodology for obtaining X-ray images of at least three elements in a single photograph is described. The fluorescent X-ray image is obtained by scanning electron microscopy with an energy-dispersive analysis system. Changes of the detector analysis channels, colored cellophane foils and color films are used sequentially. (M.C.K.) [pt

  5. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    Science.gov (United States)

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.

  6. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  7. Diagnostic agreement between panoramic radiographs and color doppler images of carotid atheroma

    Directory of Open Access Journals (Sweden)

    Claudia Maria Romano-Sousa

    2009-02-01

    Full Text Available The aim of this study was to investigate the agreement between diagnoses of calcified atheroma seen on panoramic radiographs and color Doppler images. Our interest stems from the fact that panoramic images can show the presence of atheroma regardless of the level of obstruction detected by color Doppler images. Panoramic and color Doppler images of 16 patients obtained from the archives of the Health Department of the city of Valença, RJ, Brazil, were analyzed in this study. Both sides of each patient were observed on the images, for a total of 32 analyzed cervical regions. The level of agreement between diagnoses was analyzed using the Kappa statistic. There was a high level of agreement, with a Kappa value of 0.78. In conclusion, panoramic radiographs can help detect calcifications in the cervical region of patients susceptible to vascular diseases predisposing to myocardial infarction and cerebrovascular accidents. If properly trained and informed, dentists can refer their patients to a physician for a cardiovascular evaluation in order to receive proper and timely medical treatment.

  8. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    Science.gov (United States)

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  9. Improving scale invariant feature transform with local color contrastive descriptor for image classification

    Science.gov (United States)

    Guo, Sheng; Huang, Weilin; Qiao, Yu

    2017-01-01

    Image representation and classification are two fundamental tasks toward visual understanding. Shape and texture provide two key features for visual representation and have been widely exploited in a number of successful local descriptors, e.g., scale invariant feature transform (SIFT), local binary pattern descriptor, and histogram of oriented gradient. Unlike these gradient-based descriptors, this paper presents a simple yet efficient local descriptor, named local color contrastive descriptor (LCCD), which captures the contrastive aspects among local regions or color channels for image representation. LCCD is partly inspired by findings in neuroscience that color contrast plays an important role in visual perception and that strong linkages exist between color and shape. We leverage f-divergence as a robust measure to estimate the contrastive features between different spatial locations and multiple channels. Our descriptor enriches local image representation with both color and contrast information. Because LCCD does not exploit any gradient information, it does not yield strong performance on its own, but we verified experimentally that LCCD strongly complements SIFT. Extensive experimental results on image classification show that our descriptor, when combined with SIFT, improves its performance substantially on three challenging benchmarks: the MIT Indoor-67 database, SUN397, and PASCAL VOC 2007.

  10. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    Science.gov (United States)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has made it possible to overlay observations from different epochs back in time to the mid-1970s, to examine time-varying changes, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976 using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars Radar profiles from Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family

  11. Computer-Generated Abstract Paintings Oriented by the Color Composition of Images

    Directory of Open Access Journals (Sweden)

    Mao Li

    2017-06-01

    Full Text Available Designers and artists often require reference images at authoring time. The emergence of computer technology has provided new conditions and possibilities for artistic creation and research. It has also expanded the forms of artistic expression and attracted many artists, designers and computer experts to explore different artistic directions and collaborate with one another. In this paper, we present an efficient k-means-based method to segment the colors of an original picture, analyze the composition ratio of the color information, and calculate the area occupied by each individual color. This information is transformed into regular geometries that reconstruct the colors of the picture and generate abstract images. Furthermore, we designed an application system using the proposed method and generated many works; some artists and designers have used it as an auxiliary tool for art and design creation. Experimental results on the datasets demonstrate the effectiveness of our method and can provide inspiration for creative work.
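    A minimal sketch of the k-means color-composition step described above is given below, assuming scikit-learn is available; the number of clusters and the area accounting are illustrative choices, not the authors' settings.

        # Cluster pixel colors and report each cluster's mean color and area ratio.
        import numpy as np
        from sklearn.cluster import KMeans

        def color_composition(rgb_image, k=5, seed=0):
            """rgb_image: HxWx3 uint8 array. Returns (k x 3 centers, k area ratios)."""
            pixels = rgb_image.reshape(-1, 3).astype(float)
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
            ratios = np.bincount(km.labels_, minlength=k) / pixels.shape[0]
            return km.cluster_centers_, ratios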

  12. Multi-Frequency Encoding for Fast Color Flow or Quadroplex Imaging

    DEFF Research Database (Denmark)

    Oddershede, Niels; Gran, Fredrik; Jensen, Jørgen Arendt

    2008-01-01

    Ultrasonic color flow maps are made by estimating the velocities line by line over the region of interest. For each velocity estimate, multiple repetitions are needed. This sets a limit on the frame rate, which becomes increasingly severe when imaging deeper lying structures or when simultaneously...... acquiring spectrogram data for triplex imaging. This paper proposes a method for decreasing the data acquisition time by simultaneously sampling multiple lines for color flow maps, using narrow band signals with approximately disjoint spectral support. The signals are separated in the receiver by filters....... A mean standard deviation across the flow profile of 3.1, 2.5, and 2.1% of the peak velocity was found for bands at 5 MHz, 7 MHz, and 9 MHz, respectively. Alternatively, the method can be used for simultaneously sampling data for a color flow map and for multiple spectrograms using different spectral...

  13. Color segmentation in the HSI color space using the K-means algorithm

    Science.gov (United States)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue color component makes its segmentation difficult. For example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance of the hue
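    One common way to sidestep the 2π wrap-around when clustering hue, sketched below, is to embed hue on the unit circle before running K-means; this is a generic workaround, not necessarily the scheme used in the paper above.

        # Cluster circular hue values via their (cos, sin) embedding.
        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_hue(hue_radians, k=4, seed=0):
            """hue_radians: 1-D array of hue angles in radians. Returns cluster labels."""
            xy = np.column_stack([np.cos(hue_radians), np.sin(hue_radians)])
            return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(xy)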

  14. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  15. Adaptive Secret Sharing for Color Images

    Directory of Open Access Journals (Sweden)

    Jia-Hong Li

    2011-10-01

    Full Text Available A secret sharing model can secure a secret over multiple noise-like shadows so that it remains recoverable despite multiple shadow failures. Even if some of the shadows are compromised, the secret will not be revealed as long as the number of compromised shadows is smaller than a pre-determined threshold. Moreover, there are several further concerns: malicious tampering with shadows must be detectable; the shadows must be concealed in a camouflage image with adequate quality to reduce suspicion and possible attack; and color image properties must be considered. In addition to these concerns, in this paper, an adaptable mechanism is further designed to balance the hiding quantity and the quality of the camouflage images depending on different applications. This is an important and interesting aspect that has never been discussed in previous research.

  16. Self-referenced axial chromatic dispersion measurement in multiphoton microscopy through 2-color THG imaging.

    Science.gov (United States)

    Du, Yu; Zhuang, Ziwei; He, Jiexing; Liu, Hongji; Qiu, Ping; Wang, Ke

    2018-05-16

    With tunable excitation light, multiphoton microscopy (MPM) is widely used for imaging biological structures at subcellular resolution. Axial chromatic dispersion, present in virtually every transmissive optical system including the multiphoton microscope, leads to focal (and the resultant image) plane separation. Here we demonstrate experimentally a technique to measure the axial chromatic dispersion in a multiphoton microscope, using simultaneous 2-color third-harmonic generation (THG) imaging excited by a 2-color soliton source with tunable wavelength separation. Our technique is self-referenced, eliminating the potential measurement error that arises when 1-color tunable excitation light is used, which necessitates reciprocating motion of the mechanical translation stage. Using this technique, we measure the axial chromatic dispersion with 2 different objective lenses in a multiphoton microscope. Further measurement in a biological sample also indicates that this axial chromatic dispersion, in combination with 2-color imaging, may open up the opportunity for simultaneous imaging of two different axial planes.

  17. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  18. Community Tools for Cartographic and Photogrammetric Processing of Mars Express HRSC Images

    Science.gov (United States)

    Kirk, R. L.; Howington-Kraus, E.; Edmundson, K.; Redding, B.; Galuszka, D.; Hare, T.; Gwinner, K.

    2017-07-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e. g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995) which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was

  19. COMMUNITY TOOLS FOR CARTOGRAPHIC AND PHOTOGRAMMETRIC PROCESSING OF MARS EXPRESS HRSC IMAGES

    Directory of Open Access Journals (Sweden)

    R. L. Kirk

    2017-07-01

    Full Text Available The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e. g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995) which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result

  20. Color impact in visual attention deployment considering emotional images

    Science.gov (United States)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on visual attention deployment. An eye-tracking campaign was conducted in which twenty people watched half of the database pictures in full color and the other half in grey levels. The eye fixations on color and black-and-white images were highly correlated, raising the question of how such cues should be integrated in the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grey eye fixations for passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, where the color components influence the deployment of visual attention.

  1. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on the luminance information, in which color information is not sufficiently considered. Color is in fact one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.

  2. Superpixel segmentation and pigment identification of colored relics based on visible spectral image

    Science.gov (United States)

    Li, Junfeng; Wan, Xiaoxia

    2018-01-01

    To enrich the contents of digital archives and to guide the copying and restoration of colored relics, non-invasive methods for extracting painting boundaries and identifying pigment composition are proposed in this study based on visible spectral images of colored relics. The superpixel concept is applied for the first time to the oversegmentation of visible spectral images and is used on the visible spectral images of colored relics to extract their painting boundaries. Since different pigments are characterized by their own spectra and the same kind of pigment has a similar geometric profile in its spectrum, an automatic identification method is established by comparing the proximity between the geometric profile of the unknown spectrum from each superpixel and the known spectra from a deliberately prepared database. The methods are validated using visible spectral images of the ancient wall paintings in the Mogao Grottoes. The visible spectral images are captured by a multispectral imaging system consisting of two broadband filters and an RGB camera with high spatial resolution.

  3. Automated color classification of urine dipstick image in urine examination

    Science.gov (United States)

    Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.

    2018-03-01

    Urine examination using a urine dipstick has long been used to determine a person's health status. Its low cost and convenience are among the reasons the urine dipstick is still used for this purpose. In practice, the dipstick is generally read manually by visually comparing it with a reference color chart, which leads to differences in perception when reading the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image; a scanner is a suitable solution because the light it produces is consistent. A method is therefore required to automate the matching between the dipstick colors and the test reference colors, which has so far been done manually. The proposed method combines Euclidean distance and Otsu thresholding with RGB color feature extraction to match the colors on the urine dipstick with the standard reference colors of the urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification against the standard reference colors is influenced by the scanner resolution: the higher the resolution, the higher the accuracy.
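    A sketch of the nearest-reference-color rule described above is given below: the mean RGB of each reagent pad is compared with a reference table by Euclidean distance. The reference entries here are made-up placeholders, not values from the paper.

        # Nearest-reference-color classification of one dipstick pad (placeholder chart).
        import numpy as np

        REFERENCE = {                      # hypothetical reference-chart entries
            "negative": (250, 240, 180),
            "trace":    (230, 220, 140),
            "positive": (160, 120, 150),
        }

        def classify_pad(pad_rgb):
            """pad_rgb: HxWx3 crop of one reagent pad from the scanned dipstick."""
            mean_rgb = pad_rgb.reshape(-1, 3).mean(axis=0)
            dists = {name: np.linalg.norm(mean_rgb - np.asarray(ref, dtype=float))
                     for name, ref in REFERENCE.items()}
            return min(dists, key=dists.get)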

  4. Image Retrieval based on Integration between Color and Geometric Moment Features

    International Nuclear Information System (INIS)

    Saad, M.H.; Saleh, H.I.; Konbor, H.; Ashour, M.

    2012-01-01

    Content-based image retrieval (CBIR) is the retrieval of images based on visual features such as colour, texture and shape. Current approaches to CBIR differ in terms of which image features are extracted; recent work deals with combinations of distances or scores from different and usually independent representations in an attempt to induce high-level semantics from the low-level descriptors of the images. Content-based image retrieval has many application areas, such as education, commerce, the military, searching, biomedicine and Web image classification. This paper proposes a new image retrieval system that uses color and geometric moment features to form the feature vectors. The Bhattacharyya distance and histogram intersection are used to perform feature matching. This framework integrates the color histogram, which represents the global feature, and geometric moments as a local descriptor to enhance the retrieval results. The proposed technique is suitable for precisely retrieving images even under deformations such as geometric distortion and noise. It is tested on a standard dataset, and the results show that combining our approach as a local image descriptor with other global descriptors outperforms other approaches.
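    The two matching measures named above can be sketched as follows on normalised color histograms; the bin count is an arbitrary choice and the geometric-moment part of the feature vector is omitted.

        # Color-histogram matching with histogram intersection and Bhattacharyya distance.
        import numpy as np

        def color_histogram(rgb_image, bins=8):
            """Normalised joint RGB histogram of an HxWx3 uint8 image, flattened to 1-D."""
            hist, _ = np.histogramdd(rgb_image.reshape(-1, 3).astype(float),
                                     bins=(bins, bins, bins), range=((0, 256),) * 3)
            return (hist / hist.sum()).ravel()

        def histogram_intersection(h1, h2):
            return np.minimum(h1, h2).sum()       # 1.0 means identical histograms

        def bhattacharyya_distance(h1, h2):
            bc = np.sum(np.sqrt(h1 * h2))         # Bhattacharyya coefficient
            return -np.log(max(bc, 1e-12))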

  5. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.

  6. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    Science.gov (United States)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We obtain the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image, and reproduce images using the obtained melanin and shading concentrations together with the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentation using both the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that the visibility became lower as the blood volume increased. However, the facial color images show that a specific blood volume reduces the visibility of the actual pigmentation.
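    A rough sketch of the ICA-based separation step mentioned above is shown below: independent component analysis is run on the optical-density (log) representation of the RGB pixels to obtain two pigment-like component maps. This is a generic illustration of the idea, not the authors' exact pipeline, which also separates a shading component.

        # Rough sketch: ICA on optical density to separate two pigment-like components.
        import numpy as np
        from sklearn.decomposition import FastICA

        def chromophore_maps(rgb_uint8, eps=1e-6):
            """rgb_uint8: HxWx3 facial image. Returns an HxWx2 array of component maps."""
            h, w, _ = rgb_uint8.shape
            od = -np.log(rgb_uint8.reshape(-1, 3).astype(float) / 255.0 + eps)  # optical density
            comps = FastICA(n_components=2, random_state=0).fit_transform(od)   # N x 2
            return comps.reshape(h, w, 2)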

  7. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    Science.gov (United States)

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally distinct colors, enabling the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables cancer cells of low metastatic potential to convert to high metastatic potential, can be imaged in vivo with color coding. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  8. Comparing orbiter and rover image-based mapping of an ancient sedimentary environment, Aeolis Palus, Gale crater, Mars

    Science.gov (United States)

    Stack, Kathryn M.; Edwards, Christopher; Grotzinger, J. P.; Gupta, S.; Sumner, D.; Edgar, Lauren; Fraeman, A.; Jacob, S.; LeDeit, L.; Lewis, K.W.; Rice, M.S.; Rubin, D.; Calef, F.; Edgett, K.; Williams, R.M.E.; Williford, K.H.

    2016-01-01

    This study provides the first systematic comparison of orbital facies maps with detailed ground-based geology observations from the Mars Science Laboratory (MSL) Curiosity rover to examine the validity of geologic interpretations derived from orbital image data. Orbital facies maps were constructed for the Darwin, Cooperstown, and Kimberley waypoints visited by the Curiosity rover using High Resolution Imaging Science Experiment (HiRISE) images. These maps, which represent the most detailed orbital analysis of these areas to date, were compared with rover image-based geologic maps and stratigraphic columns derived from Curiosity’s Mast Camera (Mastcam) and Mars Hand Lens Imager (MAHLI). Results show that bedrock outcrops can generally be distinguished from unconsolidated surficial deposits in high-resolution orbital images and that orbital facies mapping can be used to recognize geologic contacts between well-exposed bedrock units. However, process-based interpretations derived from orbital image mapping are difficult to infer without known regional context or observable paleogeomorphic indicators, and layer-cake models of stratigraphy derived from orbital maps oversimplify depositional relationships as revealed from a rover perspective. This study also shows that fine-scale orbital image-based mapping of current and future Mars landing sites is essential for optimizing the efficiency and science return of rover surface operations.

  9. Deployment of a Prototype Plant GFP Imager at the Arthur Clarke Mars Greenhouse of the Haughton Mars Project

    Directory of Open Access Journals (Sweden)

    Robert J. Ferl

    2008-04-01

    Full Text Available The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly into the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of the green fluorescent protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. Results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  10. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    Science.gov (United States)

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

    Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region growing strategy is designed for producing homogenous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure for locating real boundaries. Before merging, we define a combined color-texture histogram feature for superpixels description and, meanwhile, a novel objectness feature is proposed to supervise the region merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is exploited to make stop criteria for merging process. Experimental results on the popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared to other well-known segmentation algorithms.

  12. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, so for visual interpretation it is necessary to obtain color information from another sensor. Cameras can provide texture, color, and other information about the corresponding object. Assigning the colors of corresponding image pixels to the points yields a color point cloud, which is conducive to the visualization, classification and modeling of the point cloud. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), and the principles and processes for generating a color point cloud differ between systems. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, which captures as much image information around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equations of the correspondence between points in panoramic images and laser point clouds. The fusion of the panoramic image and the laser point cloud is based on the collinearity of three points: the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. The experimental results show that the proposed algorithm and formulae are correct
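    A minimal sketch of that collinearity idea for an equirectangular panorama is given below: a laser point expressed in the panoramic camera frame is mapped to a pixel via its azimuth and elevation, and that pixel's color is attached to the point. The equirectangular model and the axis conventions are assumptions, not the paper's calibration.

        # Colorize laser points from an equirectangular panorama (assumed camera model).
        import numpy as np

        def colorize_points(points_cam, pano_rgb):
            """points_cam: Nx3 points in the panoramic camera frame; pano_rgb: HxWx3 image."""
            h, w = pano_rgb.shape[:2]
            x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
            r = np.linalg.norm(points_cam, axis=1)
            azimuth = np.arctan2(y, x)                        # [-pi, pi]
            elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))  # [-pi/2, pi/2]
            u = ((azimuth / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
            v = ((0.5 - elevation / np.pi) * (h - 1)).astype(int)
            colors = pano_rgb[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)]
            return np.hstack([points_cam, colors.astype(float)])  # N x 6: XYZ + RGB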

  13. Fast color flow mode imaging using plane wave excitation and temporal encoding

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Jensen, Jørgen Arendt

    2005-01-01

    In conventional ultrasound color flow mode imaging, a large number (~500) of pulses have to be emitted in order to form a complete velocity map. This lowers the frame-rate and temporal resolution. A method for color flow imaging in which a few (~10) pulses have to be emitted to form a complete ve...... deviation of 0.84% and a relative bias of 5.74%. Finally the method is tested on the common carotid artery of a healthy 33-year-old male....

  14. Mass balance of Mars' residual south polar cap from CTX images and other data

    Science.gov (United States)

    Thomas, P. C.; Calvin, W.; Cantor, B.; Haberle, R.; James, P. B.; Lee, S. W.

    2016-04-01

    Erosion of pits in the residual south polar cap (RSPC) of Mars, concurrent with deposition and fluctuating cap boundaries, raises questions about the mass balance and long-term stability of the cap. Determining a mass balance by measurement of a net gain or loss of atmospheric CO2 by direct pressure measurements (Haberle, R.M. et al. [2014]. Secular climate change on Mars: An update using one Mars year of MSL pressure data. American Geophysical Union (Fall). Abstract 3947), although perhaps the most direct method, has so far given ambiguous results. Estimating volume changes from imaging data faces challenges, and has previously been attempted only in isolated areas of the cap. In this study we use 6 m/pixel Context Camera (CTX) data from Mars year 31 to map all the morphologic units of the RSPC, expand the measurement record of pit erosion rates, and use high-resolution images to place limits on vertical changes in the surface of the residual cap. We find the mass balance in Mars years 9-31 to be -6 to +4 km3 per Mars year, or roughly -0.039% to +0.026% of the mean atmospheric CO2 mass per Mars year. The indeterminate sign results chiefly from uncertainty in the amounts of deposition or erosion on the upper surfaces of deposits (as opposed to scarp retreat). Erosion and net deposition in this period appear to be controlled by summertime planetary-scale dust events, the largest occurring in MY 9 and another, smaller one in MY 28. The rates of erosion and the deposition observed since MY 9 appear to be consistent with the types of deposits and erosional behavior found in most of the residual cap. However, small areas (100 Mars years) of depositional and/or erosional conditions different from those occurring in the period since MY 9, although these environmental differences could be subtle.

  15. Color image encryption based on Coupled Nonlinear Chaotic Map

    International Nuclear Information System (INIS)

    Mazloom, Sahar; Eftekhari-Moghadam, Amir Masud

    2009-01-01

    Image encryption differs from text encryption because of inherent features of images, such as bulk data capacity and high correlation among pixels, which are generally difficult to handle with conventional methods. The desirable cryptographic properties of chaotic maps, such as sensitivity to initial conditions and random-like behavior, have attracted the attention of cryptographers developing new encryption algorithms. Recent research on image encryption algorithms has therefore been increasingly based on chaotic systems, although the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. This paper proposes a Coupled Nonlinear Chaotic Map, called CNCM, and a novel chaos-based image encryption algorithm that uses CNCM to encrypt color images. The chaotic cryptography technique used in this paper is symmetric-key cryptography with a stream cipher structure. To increase the security of the proposed algorithm, a 240-bit-long secret key is used to generate the initial conditions and parameters of the chaotic map through algebraic transformations of the key. These transformations, as well as the nonlinearity and coupling structure of the CNCM, enhance the security of the cryptosystem. To obtain higher security and higher complexity, the paper incorporates the image size and color components into the cryptosystem, thereby significantly increasing the resistance to known/chosen-plaintext attacks. The results of several experiments, statistical analyses and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission.
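    To illustrate the stream-cipher structure described above, the sketch below XORs the image bytes with a keystream generated by a single logistic map. It is a generic, deliberately simplified analogue: it is not the coupled map (CNCM) or the 240-bit key schedule of the paper, and it should not be treated as a secure cipher.

        # Generic chaos-based stream-cipher sketch (single logistic map, not CNCM).
        import numpy as np

        def logistic_keystream(n_bytes, x0=0.3141592, r=3.9999):
            x, out = x0, np.empty(n_bytes, dtype=np.uint8)
            for i in range(n_bytes):
                x = r * x * (1.0 - x)          # logistic map iteration
                out[i] = int(x * 256) & 0xFF   # quantize the state to one byte
            return out

        def encrypt_color_image(img_uint8, x0=0.3141592):
            """XOR every byte of an HxWx3 uint8 image with the chaotic keystream."""
            flat = img_uint8.reshape(-1)
            ks = logistic_keystream(flat.size, x0=x0)
            return (flat ^ ks).reshape(img_uint8.shape)   # decryption is the same XOR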

  16. Color image segmentation using perceptual spaces through applets ...

    African Journals Online (AJOL)

    Color image segmentation using perceptual spaces through applets for determining and preventing diseases in chili peppers. JL González-Pérez, MC Espino-Gudiño, J Gudiño-Bazaldúa, JL Rojas-Rentería, V Rodríguez-Hernández, VM Castaño ...

  17. Preferred memory color difference between the deuteranomalous and normal color vision

    Science.gov (United States)

    Baek, YeSeul; Kwak, Youngshin; Woo, Sungjoo; Park, Chongwook

    2015-01-01

    The goal of this study is to evaluate the difference in the preferred hues of familiar objects between color-deficient and normal observers. Thirteen test color images were chosen, covering fruit colors, natural scenes and human faces; they contained red, yellow, green, blue, purple and skin colors. Two color-deficient (deuteranomalous) observers and two normal observers participated in this experiment. They controlled the YCC hue of the objects in the images to obtain the most preferred and the most natural image. The selected images were analyzed using the CIELAB values of each pixel. Data analysis showed that, in the case of naturalness, both groups selected similar hues for most of the images, while, in the case of preference, the color-deficient observers preferred more reddish or more greenish images. Since deuteranomalous observers have relatively weak perception in the red and green regions, they may prefer more reddish or greenish colors. The color difference between the natural hue and the preferred hue is larger for the deuteranomalous observers than for the normal observers.

  18. Rover's Wheel Churns Up Bright Martian Soil (False Color)

    Science.gov (United States)

    2009-01-01

    NASA's Mars Exploration Rover Spirit acquired this mosaic on the mission's 1,202nd Martian day, or sol (May 21, 2007), while investigating the area east of the elevated plateau known as 'Home Plate' in the 'Columbia Hills.' The mosaic shows an area of disturbed soil, nicknamed 'Gertrude Weise' by scientists, made by Spirit's stuck right front wheel. The trench exposed a patch of nearly pure silica, with the composition of opal. It could have come from either a hot-spring environment or an environment called a fumarole, in which acidic, volcanic steam rises through cracks. Either way, its formation involved water, and on Earth, both of these types of settings teem with microbial life. The image is presented here in false color that is used to bring out subtle differences in color.

  19. Tomographic Particle Image Velocimetry using Smartphones and Colored Shadows

    KAUST Repository

    Aguirre-Pablo, Andres A.; Alarfaj, Meshal K.; Li, Erqiang; Hernandez Sanchez, Jose Federico; Thoroddsen, Sigurdur T

    2017-01-01

    We demonstrate the viability of using four low-cost smartphone cameras to perform Tomographic PIV. We use colored shadows to imprint two or three different time-steps on the same image. The back-lighting is accomplished with three sets

  20. Color Image Authentication and Recovery via Adaptive Encoding

    Directory of Open Access Journals (Sweden)

    Chun-Hung Chen

    2014-01-01

    We describe an authentication and recovery scheme for color image protection based on adaptive encoding. The image blocks are categorized based on their contents, and different encoding schemes are applied according to their types. Such adaptive encoding results in better image quality and more robust image authentication. The approximations of the luminance and chromatic channels are carefully calculated, and, to reduce the data size, differential coding is used to encode the channels with variable size according to the characteristics of the block. The recovery data, which represents the approximation and the detail of the image, is embedded for data protection. The necessary data is well protected by using error-correcting coding and duplication. The experimental results demonstrate that our technique is able to identify and localize image tampering, while preserving high quality for both watermarked and recovered images.

  1. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT); we call the result the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  2. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    Directory of Open Access Journals (Sweden)

    Jordi Palacín

    2012-06-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The proposed methodology has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach, which has red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
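
    The following sketch illustrates the underlying idea of a linear color model in RGB space: fit a line to example red-peach pixels and classify image pixels by their distance to that line. The sample values and the distance threshold are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    def line_from_samples(samples):
        """Fit a linear color model (a line in RGB space) to example pixels:
        returns a point on the line (the mean) and a unit direction
        (the first principal axis of the samples)."""
        mean = samples.mean(axis=0)
        _, _, vt = np.linalg.svd(samples - mean)
        return mean, vt[0]

    def distance_to_line(pixels, point, direction):
        """Euclidean distance from each RGB pixel to the line."""
        d = pixels - point
        along = d @ direction
        return np.linalg.norm(d - np.outer(along, direction), axis=1)

    # Illustrative red-peach training pixels and an image to segment.
    peach_samples = np.array([[180, 40, 50], [200, 60, 70], [160, 30, 45]], float)
    point, direction = line_from_samples(peach_samples)

    image = np.random.randint(0, 256, (100, 100, 3)).astype(float)
    dist = distance_to_line(image.reshape(-1, 3), point, direction)
    mask = (dist < 40.0).reshape(100, 100)   # threshold is illustrative
    ```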

  3. Water on Mars: Evidence from MER Mission Results

    Science.gov (United States)

    Landis, Geoffrey A.

    2004-01-01

    The Viking and the Mars Exploration Rover missions observed that the surface of Mars is encrusted by a thinly cemented layer, or "duricrust". Elemental analyses at five sites on Mars show that these soils have sulfur and chlorine contents consistent with the presence of sulfates and halides as mineral cements. The soil is highly enriched in the salt-forming elements compared with rock. Analysis of the soil cementation indicates some features which may be evidence of liquid water. At both MER sites, duricrust textures revealed by the Microscopic Imager show features including the presence of fine sand-sized grains, some of which may be aggregates of fine silt and clay, surrounded by a pervasive light-colored material that is associated with microtubular structures and networks of microfractures. Stereo views of undisturbed duricrust surfaces reveal rugged microrelief of 2-3 mm and minimal loose material. Comparisons of microscopic images of duricrust soils obtained before and after placement of the Mössbauer spectrometer indicate differing degrees of compaction and cementation. Two models of a transient water hypothesis are offered: a "top down" hypothesis that emphasizes the surface deposition of frost, melting and downward migration of liquid water, and a "bottom up" alternative that proposes the presence of interstitial ice/brine, with the upward capillary migration of liquid water. The viability of both of these models ultimately hinges on the availability of seasonally transient liquid water for brief periods.

  4. Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process

    Science.gov (United States)

    Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi

    2006-02-01

    The Retinex theory was first proposed by Land and deals with the separation of irradiance from reflectance in an observed image. The separation problem is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm with the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, as to its extension to color images, we present two approaches to treat color channels: the independent approach, which treats each color channel separately, and the collective approach, which treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to a high-quality chroma key in which, before combining a foreground frame and a background frame into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
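
    A minimal log-domain sketch of the separation idea is given below, with the record's nonlinear diffusion replaced by simple Gaussian smoothing purely to keep the example short (an assumption, not the paper's method); the "collective" variant estimates one shared irradiance from the mean of the color channels.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def separate_collective(rgb, sigma=15.0, eps=1.0):
        """Toy Retinex-style separation in the log domain.
        The nonlinear diffusion of the record is replaced here by Gaussian
        smoothing; the 'collective' approach estimates a single irradiance
        shared by all color channels."""
        log_img = np.log(rgb.astype(float) + eps)                # per-channel log image
        log_irr = gaussian_filter(log_img.mean(axis=2), sigma)   # shared log-irradiance
        log_ref = log_img - log_irr[..., None]                   # per-channel log-reflectance
        return np.exp(log_irr), np.exp(log_ref)

    # The 'independent' alternative would smooth each channel's log image
    # separately instead of smoothing the channel mean.
    ```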

  5. Frost on Mars

    Science.gov (United States)

    2008-01-01

    This image shows bluish-white frost seen on the Martian surface near NASA's Phoenix Mars Lander. The image was taken by the lander's Surface Stereo Imager on the 131st Martian day, or sol, of the mission (Oct. 7, 2008). Frost is expected to continue to appear in images as fall, then winter approach Mars' northern plains. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  6. Evaluation of Color Settings in Aerial Images with the Use of Eye-Tracking User Study

    Science.gov (United States)

    Mirijovsky, J.; Popelka, S.

    2016-06-01

    The main aim of the presented paper is to find the most realistic and preferred color settings for four different types of surfaces in aerial images. This is achieved through a user study with eye-movement recording. Aerial images taken by an unmanned aerial system were used as stimuli. From each image, a squared crop area containing one of the studied types of surfaces (asphalt, concrete, water, soil, and grass) was selected. For each type of surface, the real value of reflectance was found with the use of the precise spectroradiometer ASD HandHeld 2, which measures reflectance. The device was used at the same time as the aerial images were captured, so lighting conditions and the state of vegetation were equal. The spectral resolution of the ASD device is better than 3.0 nm. For defining the RGB values of a selected type of surface, the spectral reflectance values recorded by the device were merged into wider groups. Finally, we get three groups corresponding to the RGB color system. Captured images were edited with the graphic editor Photoshop CS6. Contrast, clarity, and brightness were edited for all surface types in the images. Finally, we get a set of 12 images of the same area with different color settings. These images were put into a grid and used as stimuli for the eye-tracking experiment. Eye-tracking is one of the methods of usability studies and is considered relatively objective. The eye-tracker SMI RED 250 with a sampling frequency of 250 Hz was used in the study. As respondents, a group of 24 students of Geoinformatics and Geography was used. Their task was to select which image in the grid had the best color settings. The next task was to select which color settings they preferred. Respondents' answers were evaluated and the most realistic and most preferable color settings were found. The advantage of the eye-tracking evaluation was that the process of selecting the answers was also analyzed. Areas of Interest were marked around each image in the

  7. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images, and they contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information due to inconspicuous targets in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of the scene information will be affected seriously. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  8. Colored Contact Lens Dangers

    Medline Plus

    Sep. 26, 2013. It started as an impulsive ...

  9. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  10. Mars Orbiter Camera Views the 'Face on Mars' - Comparison with Viking

    Science.gov (United States)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS. The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970's. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. In this comparison, the best Viking image has been enlarged to 3.3 times its original resolution, and the MOC image has been decreased by a similar 3.3 times, creating images of roughly the same size. In addition, the MOC images have been geometrically transformed to a more overhead projection (different from the Mercator map projection of PIA01440 & 1441) for ease of comparison with the Viking image. The left image is a portion of Viking Orbiter 1 frame 070A13, the middle image is a portion of the MOC frame shown normally, and the right image is the same MOC frame but with the brightness inverted to simulate the approximate lighting conditions of the Viking image. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was

  11. 3D palmprint and hand imaging system based on full-field composite color sinusoidal fringe projection technique.

    Science.gov (United States)

    Zhang, Zonghua; Huang, Shujun; Xu, Yongjia; Chen, Chao; Zhao, Yan; Gao, Nan; Xiao, Yanjun

    2013-09-01

    Palmprint and hand shape, as two kinds of important biometric characteristics, have been widely studied and applied to human identity recognition. The existing research is based mainly on 2D images, which lose the third-dimension (depth) information. The biological features extracted from 2D images are distorted by pressure and rolling, so the subsequent feature matching and recognition are inaccurate. This paper presents a method to acquire accurate 3D shapes of the palmprint and hand by projecting full-field composite color sinusoidal fringe patterns, together with the corresponding color texture information. A 3D imaging system is designed to capture and process the full-field composite color fringe patterns on the hand surface. Composite color fringe patterns having the optimum three fringe numbers are generated by software and projected onto the surface of the human hand by a digital light processing projector. From another viewpoint, a color CCD camera captures the deformed fringe patterns and saves them for postprocessing. After compensating for the cross talk and chromatic aberration between color channels, three fringe patterns are extracted from the three color channels of a captured composite color image. Wrapped phase information can be calculated from the sinusoidal fringe patterns with high precision. At the same time, the absolute phase of each pixel is determined by the optimum three-fringe selection method. After building up the relationship between the absolute phase map and 3D shape data, the 3D palmprint and hand are obtained. Color texture information can be directly captured or demodulated from the captured composite fringe pattern images. Experimental results show that the proposed method and system can yield accurate 3D shape and color texture information of the palmprint and hand shape.
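
    Assuming the three color channels carry fringe patterns shifted by -2*pi/3, 0 and +2*pi/3 (a common convention, not stated explicitly in the record), the wrapped phase follows from the standard three-step formula, as sketched below; the optimum three-fringe-number unwrapping step is omitted.

    ```python
    import numpy as np

    def wrapped_phase(i1, i2, i3):
        """Wrapped phase from three sinusoidal fringe patterns with phase
        shifts of -2*pi/3, 0, +2*pi/3 (standard three-step formula)."""
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

    def phase_from_composite(rgb):
        """Treat the R, G and B channels of a captured composite fringe
        image as the three phase-shifted patterns (after the cross-talk
        and chromatic-aberration compensation described in the record)."""
        r, g, b = [rgb[..., k].astype(float) for k in range(3)]
        return wrapped_phase(r, g, b)
    ```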

  12. Color Image Secret Watermarking Erase and Write Algorithm Based on SIFT

    Science.gov (United States)

    Qu, Jubao

    This work uses the adaptive characteristics of SIFT image features to implement write and erase operations for hidden watermarks in color images. The experimental results show that this algorithm has better imperceptibility and, at the same time, is robust against geometric attacks and common signal processing.

  13. Simultaneous determination of color additives tartrazine and allura red in food products by digital image analysis.

    Science.gov (United States)

    Vidal, Maider; Garcia-Arrona, Rosa; Bordagaray, Ane; Ostra, Miren; Albizu, Gorka

    2018-07-01

    A method based on digital images is described to quantify the colorants tartrazine (E102, yellow) and allura red (E129) in food samples. HPLC is the habitual reference method used for colorant separation and quantification, but it is expensive and time-consuming, and it uses solvents that are sometimes toxic. With a flatbed scanner, which can be found in most laboratories, images of mixtures of colorants can be taken in microtitration plates. Only 400 µL of sample are necessary and up to 92 samples can be measured together in the same image acquisition. A simple-to-obtain color fingerprint is obtained by converting the original RGB image into other color spaces, and individual PLS models are built for each colorant. In this study, root mean square errors of 3.3 and 3.0 for tartrazine and 1.1 and 1.2 for allura red have been obtained for cross-validation and external validation, respectively. Results for repeatability and reproducibility are under 12%. These results are slightly worse than, but comparable to, the ones obtained by HPLC. The applicability of both methodologies to real food samples has proven to give the same result, even in the presence of a high concentration of an interfering species, provided that this interference is included in the image analysis calibration model. Considering the colorant content found in most samples, this should not be a problem and, in consequence, the method could be extended to different food products. LODs of 1.8 mg/L and 0.6 mg/L for tartrazine and allura red, respectively, have been obtained by image analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
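
    A minimal sketch of the calibration idea is shown below: a simple color fingerprint is computed for each well image and regressed against known concentrations with PLS. The fingerprint function and the data are illustrative placeholders, not the paper's actual color-space conversions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def color_fingerprint(rgb_patch):
        """Simple stand-in for the paper's fingerprint: mean R, G, B of the
        well image plus a few derived values (illustrative only)."""
        means = rgb_patch.reshape(-1, 3).mean(axis=0)
        mx, mn = means.max(), means.min()
        return np.concatenate([means, [mx, mn, mx - mn]])

    # X: fingerprints of calibration wells, y: known tartrazine concentrations.
    rng = np.random.default_rng(0)
    X = np.array([color_fingerprint(rng.integers(0, 256, (20, 20, 3))) for _ in range(30)])
    y = rng.uniform(0, 50, 30)            # illustrative concentrations, mg/L

    pls = PLSRegression(n_components=3)
    pls.fit(X, y)
    y_pred = pls.predict(X)               # in practice, predict held-out wells
    ```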

  14. Phenomenological marine snow model for optical underwater image simulation: Applications to color restoration

    OpenAIRE

    Boffety , Matthieu; Galland , Frédéric

    2012-01-01

    Optical imaging plays an important role in oceanic science and engineering. However, the design of optical systems and image processing techniques for the subsea environment is a challenging task due to water turbidity. Marine snow is notably a major source of image degradation, as it creates bright white spots that may strongly impact the performance of image processing methods. In this context, it is necessary to have a tool to foresee the behavior of these methods in mar...

  15. Online prediction of organoleptic data for snack food using color images

    Science.gov (United States)

    Yu, Honglu; MacGregor, John F.

    2004-11-01

    In this paper, a study of the prediction of organoleptic properties of snack food in real time using RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive. By taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four types of organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
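
    A sketch of the texture-feature extraction described above, assuming a standard 2-D discrete wavelet decomposition (the wavelet family and the number of levels are illustrative choices, not the paper's):

    ```python
    import numpy as np
    import pywt

    def wavelet_texture_features(channel, wavelet="db2", levels=3):
        """Texture features for one color channel: the mean absolute value
        of each detail sub-band of a 2-D wavelet decomposition, as described
        in the record."""
        coeffs = pywt.wavedec2(channel.astype(float), wavelet, level=levels)
        feats = []
        for detail in coeffs[1:]:                 # skip the approximation coefficients
            for sub in detail:                    # horizontal, vertical, diagonal
                feats.append(np.abs(sub).mean())
        return np.array(feats)

    def rgb_features(image):
        """Concatenate the per-channel texture features of an RGB image;
        these would then be regressed against the responses with PLS."""
        return np.concatenate([wavelet_texture_features(image[..., k]) for k in range(3)])

    feats = rgb_features(np.random.rand(64, 64, 3))
    ```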

  16. A Fast, Background-Independent Retrieval Strategy for Color Image Databases

    National Research Council Canada - National Science Library

    Das, M; Draper, B. A; Lim, W. J; Manmatha, R; Riseman, E. M

    1996-01-01

    .... The method is fast and has low storage overhead. Good retrieval results are obtained with multi-colored query objects even when they occur in arbitrary sizes, rotations and locations in the database images...

  17. Extending Whole Slide Imaging: Color Darkfield Internal Reflection Illumination (DIRI) for Biological Applications.

    Directory of Open Access Journals (Sweden)

    Yoshihiro Kawano

    Whole slide imaging (WSI) is a useful tool for multi-modal imaging, and in our work, we have often combined WSI with darkfield microscopy. However, traditional darkfield microscopy cannot use a single condenser to support high- and low-numerical-aperture objectives, which limits the modality of WSI. To overcome this limitation, we previously developed a darkfield internal reflection illumination (DIRI) microscope using white light-emitting diodes (LEDs). Although the developed DIRI is useful for biological applications, substantial problems remain to be resolved. In this study, we propose a novel illumination technique called color DIRI. The use of three-color LEDs dramatically improves the capability of the system, such that color DIRI (1) enables optimization of the illumination color; (2) can be combined with an oil objective lens; (3) can produce fluorescence excitation illumination; (4) can adjust the wavelength of light to avoid cell damage or reactions; and (5) can be used as a photostimulator. These results clearly illustrate that the proposed color DIRI can significantly extend WSI modalities for biological applications.

  18. An area efficient readout architecture for photon counting color imaging

    International Nuclear Information System (INIS)

    Lundgren, Jan; O'Nils, Mattias; Oelmann, Bengt; Norlin, Boerje; Abdalla, Suliman

    2007-01-01

    The introduction of several energy levels, namely color imaging, in photon-counting X-ray image sensors involves a trade-off between circuit complexity and spatial resolution. In this paper, we propose a pixel architecture that has full resolution for the intensity and uses sub-sampling for the energy spectrum. The results show that this sub-sampling pixel architecture produces images with an image quality which is, on average, 2.4 dB (PSNR) higher than that of a single energy range architecture, and with half the circuit complexity of a full sampling architecture.

  19. A Non-blind Color Image Watermarking Scheme Resistant Against Geometric Attacks

    Directory of Open Access Journals (Sweden)

    A. Ghafoor

    2012-12-01

    A non-blind color image watermarking scheme using principal component analysis, the discrete wavelet transform and singular value decomposition is proposed. The color components are decorrelated using principal component analysis. The watermark is embedded into the singular values of the discrete-wavelet-transformed sub-band associated with the principal component containing most of the color information. The scheme was tested against various attacks (including histogram equalization, rotation, Gaussian noise, scaling, cropping, Y-shearing, X-shearing, median filtering, affine transformation, translation, salt & pepper, and sharpening) to check robustness. The results of the proposed scheme are compared with state-of-the-art existing color watermarking schemes using the normalized correlation coefficient and peak signal-to-noise ratio. The simulation results show that the proposed scheme is robust and imperceptible.
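
    The sketch below shows the core embedding step under simplifying assumptions: the PCA decorrelation is omitted and the watermark is added to the singular values of the LL sub-band of a one-level Haar DWT of a single component. The strength alpha, the wavelet and the sub-band choice are illustrative, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def embed_watermark(channel, watermark, alpha=0.05):
        """Embed a watermark sequence into the singular values of the LL
        sub-band of a one-level DWT of one (decorrelated) color component.
        Non-blind extraction would need the original singular values."""
        LL, (LH, HL, HH) = pywt.dwt2(channel.astype(float), "haar")
        U, S, Vt = np.linalg.svd(LL, full_matrices=False)
        Sw = S + alpha * watermark[: S.size]     # additive embedding in singular values
        LLw = (U * Sw) @ Vt                      # rebuild the modified sub-band
        return pywt.idwt2((LLw, (LH, HL, HH)), "haar")

    host = np.random.rand(128, 128)
    wm = np.random.rand(64)                      # watermark sequence (illustrative)
    marked = embed_watermark(host, wm)
    ```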

  20. Preliminary study on the correlation between color measurement of dyed polyester and its image files

    Science.gov (United States)

    Park, Y. K.; Park, Y. C.

    2017-10-01

    As the internet becomes more popular, buyers send image files to manufacturers instead of sending swatches. However, this method may cause problems because the buyer's and the manufacturer's monitors differ, and there is also a dependence on the light source. In order to overcome these problems, we investigated the relationship between the color measurement values of dyed fabrics and the RGB values of image files. The RGB values of the image files tended to decrease with increasing dye concentration for all three colors. Correlation between the RGB values and the a*, b* values was observed at low concentrations, but there was little correlation at high concentrations. In the case of the yellow color, there is no correlation between the L*a*b* values obtained from the dyed fabric and the RGB values obtained from the image file.

  1. Digital data storage of core image using high resolution full color core scanner; Kokaizodo full color scanner wo mochiita core image no digital ka

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, W; Ujo, S; Osato, K; Takasugi, S [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper reports on the digitization of core images using a new type of core scanner system. This system consists of a core scanner unit (equipped with a CCD camera), a personal computer and ancillary devices. It is a modification of the old system, with the measurable core length extended to 100 cm per 3 scans and the resolution enhanced to 5100 pixels/m (1024 pixels/m in the old system). The camera was changed to a color model, and the A/D conversion was improved to 24-bit full color. A detail-reproduction test on digital images from this core scanner found that objects can be identified down to roughly the size of the pixels constituting the image when the best contrast is obtained between the objects and the background, and an evaluation of the visibility of concave and convex features on the core surface showed that reproducibility is not very good for large concave and convex features. 2 refs., 6 figs.

  2. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    Science.gov (United States)

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  3. Color Image Enhancement Using Multiscale Retinex Based on Particle Swarm Optimization Method

    Science.gov (United States)

    Matin, F.; Jeong, Y.; Kim, K.; Park, K.

    2018-01-01

    This paper introduces a novel method for image enhancement using multiscale retinex and particle swarm optimization. Multiscale retinex is a widely used image enhancement technique that depends heavily on parameters such as the Gaussian scales, gain and offset. To achieve the desired effect, these parameters need to be tuned manually for each image. To handle this, a retinex algorithm based on PSO is used. The PSO method adjusts the parameters of multiscale retinex with chromaticity preservation (MSRCP) and attains better results compared with other existing methods. The experimental results indicate that the proposed algorithm is efficient and not only provides true color fidelity in low-light conditions but also avoids color distortion.
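
    A compact sketch of MSRCP with fixed parameters is given below. The Gaussian scales and the gain/offset stretch hard-coded here are exactly the quantities the record tunes with particle swarm optimization, so the values shown are placeholders only.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def msrcp(rgb, sigmas=(15, 80, 250), eps=1.0):
        """Multiscale retinex with chromaticity preservation (sketch)."""
        img = rgb.astype(float) + eps
        intensity = img.mean(axis=2)
        # Multiscale retinex on the intensity channel only.
        msr = np.zeros_like(intensity)
        for s in sigmas:
            msr += np.log(intensity) - np.log(gaussian_filter(intensity, s))
        msr /= len(sigmas)
        # Simple linear stretch (gain/offset) of the retinex output.
        enhanced = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12) * 255.0
        # Preserve chromaticity: scale each RGB channel by the intensity ratio.
        ratio = enhanced / intensity
        return np.clip(img * ratio[..., None], 0, 255).astype(np.uint8)
    ```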

  4. Medical Image Segmentation using the HSI color space and Fuzzy Mathematical Morphology

    Science.gov (United States)

    Gasparri, J. P.; Bouchet, A.; Abras, G.; Ballarin, V.; Pastore, J. I.

    2011-12-01

    Diabetic retinopathy is the most common cause of blindness among the active population in developed countries. An early ophthalmologic examination followed by proper treatment can prevent blindness. The purpose of this work is to develop an automated method for segmenting the vasculature in retinal images in order to assist the expert in following the evolution of a specific treatment or in the diagnosis of a potential pathology. Since the HSI space has the ability to separate the intensity from the intrinsic color information, its use is recommended for the digital processing of images affected by lighting changes, which are characteristic of the images under study. By applying color filters, the tone of the blood vessels is artificially changed to better distinguish them from the background. This technique, combined with the application of fuzzy mathematical morphology tools such as the Top-Hat transformation, creates images of the retina in which the vascular branches are markedly enhanced over the original. These images facilitate the visualization of the blood vessels by the specialist.

  5. Medical Image Segmentation using the HSI color space and Fuzzy Mathematical Morphology

    International Nuclear Information System (INIS)

    Gasparri, J P; Bouchet, A; Abras, G; Ballarin, V; Pastore, J I

    2011-01-01

    Diabetic retinopathy is the most common cause of blindness among the active population in developed countries. An early ophthalmologic examination followed by proper treatment can prevent blindness. The purpose of this work is to develop an automated method for segmenting the vasculature in retinal images in order to assist the expert in following the evolution of a specific treatment or in the diagnosis of a potential pathology. Since the HSI space has the ability to separate the intensity from the intrinsic color information, its use is recommended for the digital processing of images affected by lighting changes, which are characteristic of the images under study. By applying color filters, the tone of the blood vessels is artificially changed to better distinguish them from the background. This technique, combined with the application of fuzzy mathematical morphology tools such as the Top-Hat transformation, creates images of the retina in which the vascular branches are markedly enhanced over the original. These images facilitate the visualization of the blood vessels by the specialist.

  6. Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates

    Science.gov (United States)

    Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.

    2014-03-01

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. Six representative non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates, each grown with pure STEC colonies. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for the linear, quadratic and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model with principal component analysis and k-nearest neighbors for the test set was up to 92% (99% with the original hyperspectral images). Thus, the results of the study suggested that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
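
    The estimation step can be sketched as an ordinary polynomial regression from RGB triplets to full spectra; the data below are random placeholders standing in for the measured RGB values and reflectance spectra.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Training data (illustrative shapes): RGB values of pixels and the
    # corresponding measured reflectance spectra (e.g. 473 narrow bands).
    n_pixels, n_bands = 500, 473
    rng = np.random.default_rng(1)
    rgb = rng.random((n_pixels, 3))
    spectra = rng.random((n_pixels, n_bands))   # stand-in for hyperspectral data

    # Quadratic polynomial regression from RGB to the full spectrum,
    # one of the polynomial orders compared in the record.
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(rgb, spectra)

    estimated = model.predict(rgb)              # estimated hyperspectral pixels
    ```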

  7. Development of an adaptive bilateral filter for evaluating color image difference

    Science.gov (United States)

    Wang, Zhaohui; Hardeberg, Jon Yngve

    2012-04-01

    Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach, which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., spatial domain and intensity domain. We propose a method to decide the parameters, which are designed to be adaptive to the corresponding viewing conditions, and the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. The Pearson's correlation values between the model-predicted image difference and the observed difference were employed to evaluate the performance, and compare it with that of spatial CIELAB and image appearance model.
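
    For reference, a brute-force bilateral filter for a single channel is sketched below; the record's contribution, choosing the spatial and range sigmas adaptively from the viewing conditions and the image content, is not reproduced here.

    ```python
    import numpy as np

    def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
        """Brute-force bilateral filter for one channel with values in [0, 1].
        sigma_s controls the spatial Gaussian, sigma_r the intensity (range)
        Gaussian; both are fixed here rather than chosen adaptively."""
        h, w = img.shape
        pad = np.pad(img, radius, mode="edge")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        out = np.zeros_like(img)
        for i in range(h):
            for j in range(w):
                window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rangew = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r**2))
                weights = spatial * rangew
                out[i, j] = (weights * window).sum() / weights.sum()
        return out

    smoothed = bilateral_filter(np.random.rand(32, 32))
    ```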

  8. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    Science.gov (United States)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bits from the secret information. Following the transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
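
    The record describes a quantum-circuit scheme and does not give the exact Gray-code mapping rule; the classical sketch below shows one plausible reading of the embedding of a 4-bit segment into an RGB pixel (second LSB of B plus Gray-coded LSBs of R, G, B), purely for illustration.

    ```python
    import numpy as np

    def gray_code(n):
        """Reflected Gray code of an n-bit integer."""
        return n ^ (n >> 1)

    def embed_segment(pixel_rgb, bits4):
        """Embed one 4-bit secret segment into one RGB pixel (classical
        sketch, one possible reading of the record): bit 0 goes into the
        second LSB of B, and the other three bits are Gray-coded before
        being written to the LSBs of R, G, B."""
        r, g, b = (int(v) for v in pixel_rgb)
        b = (b & ~0b10) | (bits4[0] << 1)            # second LSB of blue
        coded = gray_code(bits4[1] * 4 + bits4[2] * 2 + bits4[3])
        r = (r & ~1) | ((coded >> 2) & 1)
        g = (g & ~1) | ((coded >> 1) & 1)
        b = (b & ~1) | (coded & 1)
        return np.array([r, g, b], dtype=np.uint8)

    print(embed_segment(np.array([200, 130, 97], dtype=np.uint8), [1, 0, 1, 1]))
    ```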

  9. Pixel Color Clustering of Multi-Temporally Acquired Digital Photographs of a Rice Canopy by Luminosity-Normalization and Pseudo-Red-Green-Blue Color Imaging

    Directory of Open Access Journals (Sweden)

    Ryoichi Doi

    2014-01-01

    Red-green-blue (RGB) channels of RGB digital photographs were loaded with luminosity-adjusted R, G, and completely white grayscale images, respectively (RGwhtB method), or with R, G, and R + G (RGB yellow) grayscale images, respectively (RGrgbyB method), to adjust the brightness of the entire area of multi-temporally acquired color digital photographs of a rice canopy. From the RGwhtB or RGrgbyB pseudocolor image, cyan, magenta, CMYK yellow, black, L*, a*, and b* grayscale images were prepared. Using these grayscale images and the R, G, and RGB yellow grayscale images, the luminosity-adjusted pixels of the canopy photographs were statistically clustered. With the RGrgbyB and the RGwhtB methods, seven and five major color clusters were obtained, respectively. The RGrgbyB method showed clear differences among three rice growth stages, and the vegetative stage was further divided into two substages. The RGwhtB method could not clearly discriminate between the second vegetative and midseason stages. The relative advantages of the RGrgbyB method were attributed to the R, G, B, magenta, yellow, L*, and a* grayscale images, which contained richer information to show the colorimetric differences among objects than those of the RGwhtB method. The comparison of rice canopy colors at different time points was enabled by the pseudocolor imaging method.

  10. Saliency of color image derivatives: a comparison between computational models and human perception

    NARCIS (Netherlands)

    Vazquez, E.; Gevers, T.; Lucassen, M.; van de Weijer, J.; Baldrich, R.

    2010-01-01

    In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on a more complex task of salient object detection in real-world images.

  11. The Colour and Stereo Surface Imaging System (CaSSIS) for the ExoMars Trace Gas Orbiter

    Science.gov (United States)

    Thomas, N.; Cremonese, G.; Ziethe, R.; Gerber, M.; Brändli, M.; Bruno, G.; Erismann, M.; Gambicorti, L.; Gerber, T.; Ghose, K.; Gruber, M.; Gubler, P.; Mischler, H.; Jost, J.; Piazza, D.; Pommerol, A.; Rieder, M.; Roloff, V.; Servonet, A.; Trottmann, W.; Uthaicharoenpong, T.; Zimmermann, C.; Vernani, D.; Johnson, M.; Pelò, E.; Weigel, T.; Viertl, J.; De Roux, N.; Lochmatter, P.; Sutter, G.; Casciello, A.; Hausner, T.; Ficai Veltroni, I.; Da Deppo, V.; Orleanski, P.; Nowosielski, W.; Zawistowski, T.; Szalai, S.; Sodor, B.; Tulyakov, S.; Troznai, G.; Banaskiewicz, M.; Bridges, J.C.; Byrne, S.; Debei, S.; El-Maarry, M. R.; Hauber, E.; Hansen, C.J.; Ivanov, A.; Keszthelyil, L.; Kirk, Randolph L.; Kuzmin, R.; Mangold, N.; Marinangeli, L.; Markiewicz, W. J.; Massironi, M.; McEwen, A.S.; Okubo, Chris H.; Tornabene, L.L.; Wajer, P.; Wray, J.J.

    2017-01-01

    The Colour and Stereo Surface Imaging System (CaSSIS) is the main imaging system onboard the European Space Agency’s ExoMars Trace Gas Orbiter (TGO) which was launched on 14 March 2016. CaSSIS is intended to acquire moderately high resolution (4.6 m/pixel) targeted images of Mars at a rate of 10–20 images per day from a roughly circular orbit 400 km above the surface. Each image can be acquired in up to four colours and stereo capability is foreseen by the use of a novel rotation mechanism. A typical product from one image acquisition will be a 9.5 km × ~45 km swath in full colour and stereo in one over-flight of the target, thereby reducing atmospheric influences inherent in stereo and colour products from previous high resolution imagers. This paper describes the instrument including several novel technical solutions required to achieve the scientific requirements.

  12. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. On-line measurement of crystalline color by color-image processing system; Gazo shori system wo mochiita kessho no online iro sokutei

    Energy Technology Data Exchange (ETDEWEB)

    Okayasu, S.; Katayama, M.; Shinohara, T. [Ajinomoto Co. Inc., Tokyo (Japan)

    1996-01-20

    Aiming at stable operation and rationalization of the factory plant, color-image processing was introduced into an on-line system to measure the crystalline color of L-Lysine in its refining process, since the practical spectrophotometric measurement previously used was performed manually. In this paper, a formula for calculating the spectrophotometric transmittance is derived theoretically by analyzing the relation between the Lambert-Beer law of light transmission and the Kubelka-Munk function of light scattering, using color image data. The parameters of the formula were determined from actual measurements, and the resulting formula, accurate to within ±3%, enables estimation of the spectrophotometric transmittance. The system was tested on a commercial plant, and some issues are discussed. 8 refs., 8 figs., 3 tabs.

  14. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    Science.gov (United States)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also

  15. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems including dual color affordable focal planes, on-focal plane array biologically inspired image and signal processing techniques and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor Midwave IR/Longwave IR radiometric response with on-focal plane 'smart' neuromorphic analog image processing. The readout and processing integrated circuit very large scale integration chip which was developed under this effort will be hybridized to a dual color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  16. Color appearance for photorealistic image synthesis

    Science.gov (United States)

    Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio

    2000-12-01

    Photorealistic image synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction, which allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, by applying ray tracing in a two-pass solution we can also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of the displayable colors. Recently some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.

  17. The Athena Mars Rover Science Payload

    Science.gov (United States)

    Squyres, S. W.; Arvidson, R.; Bell, J. F., III; Carr, M.; Christensen, P.; DesMarais, D.; Economou, T.; Gorevan, S.; Klingelhoefer, G.; Haskin, L.

    1998-01-01

    The Mars Surveyor missions that will be launched in April of 2001 will include a highly capable rover that is a successor to the Mars Pathfinder mission's Sojourner rover. The design goals for this rover are a total traverse distance of at least 10 km and a total lifetime of at least one Earth year. The rover's job will be to explore a site in Mars' ancient terrain, searching for materials likely to preserve a record of ancient martian water, climate, and possibly biology. The rover will collect rock and soil samples, and will store them for return to Earth by a subsequent Mars Surveyor mission in 2005. The Athena Mars rover science payload is the suite of scientific instruments and sample collection tools that will be used to perform this job. The specific science objectives that NASA has identified for the '01 rover payload are to: (1) Provide color stereo imaging of martian surface environments, and remotely-sensed point discrimination of mineralogical composition. (2) Determine the elemental and mineralogical composition of martian surface materials. (3) Determine the fine-scale textural properties of these materials. (4) Collect and store samples. The Athena payload has been designed to meet these objectives. The focus of the design is on field operations: making sure the rover can locate, characterize, and collect scientifically important samples in a dusty, dirty, real-world environment. The topography, morphology, and mineralogy of the scene around the rover will be revealed by Pancam/Mini-TES, an integrated imager and IR spectrometer. Pancam views the surface around the rover in stereo and color. It uses two high-resolution cameras that are identical in most respects to the rover's navigation cameras. The detectors are low-power, low-mass active pixel sensors with on-chip 12-bit analog-to-digital conversion. Filters provide 8-12 color spectral bandpasses over the spectral region from 0.4 to 1.1 microns. Narrow-angle optics provide an angular resolution of 0

  18. A novel hybrid color image encryption algorithm using two complex chaotic systems

    Science.gov (United States)

    Wang, Leyuan; Song, Hongjun; Liu, Ping

    2016-02-01

    Based on the complex Chen and complex Lorenz systems, a novel color image encryption algorithm is proposed. The larger chaotic ranges and more complex behaviors of complex chaotic systems, compared with real chaotic systems, can additionally enhance the security and enlarge the key space of color image encryption. The encryption algorithm comprises three processes. In the permutation process, the pixels of the plain image are scrambled via two-dimensional and one-dimensional permutation processes among the RGB channels individually. In the diffusion process, the exclusive-or (XOR for short) operation is employed to conceal pixel information. Finally, mixing of the RGB channels is used to achieve a multilevel encryption. The security analysis and experimental simulations demonstrate that the key space of the proposed algorithm is large enough to resist brute-force attacks and that it has excellent encryption performance.

  19. A combination chaotic system and application in color image encryption

    Science.gov (United States)

    Parvaz, R.; Zarebnia, M.

    2018-05-01

    In this paper, by using the Logistic, Sine and Tent systems we define a combination chaotic system. Some properties of the chaotic system are studied using figures and numerical results. A color image encryption algorithm is introduced based on the new chaotic system. This encryption algorithm can also be used for grayscale or binary images. The experimental results of the encryption algorithm show that the encryption algorithm is secure and practical.

  20. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    OpenAIRE

    Tominaga Shoji; Plataniotis Konstantinos N.; Trémeau Alain

    2008-01-01

    The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain, this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the mos...

  1. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Display (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss
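
    A minimal sketch of the kind of metric proposed above: PSNR computed on CIE L*a*b* values rather than raw RGB, using scikit-image for the color conversion. The choice of 100 (the L* range) as the peak value is an assumption; the paper's exact normalization and its treatment of luminance reduction and color distortion are not reproduced.

```python
import numpy as np
from skimage.color import rgb2lab

def psnr_lab(reference_rgb, test_rgb, peak=100.0):
    """PSNR computed on CIE L*a*b* values instead of raw RGB.
    `peak` is taken as the L* range (an assumption; the paper's exact
    normalization may differ)."""
    ref = rgb2lab(reference_rgb)   # accepts uint8 or float RGB in [0, 1]
    tst = rgb2lab(test_rgb)
    mse = np.mean((ref - tst) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
```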

  2. An effective image classification method with the fusion of invariant feature and a new color descriptor

    Science.gov (United States)

    Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina

    2017-02-01

    Pyramid Histogram of Words (PHOW) combines Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to the extracted features. However, PHOW descriptors are extracted from various color spaces without capturing color information individually; that is, they discard color information, an important characteristic of any image that is motivated by human vision. This article concatenates the PHOW Multi-Scale Dense Scale Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms other existing, state-of-the-art methods.
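
    A minimal sketch of the fusion idea, assuming a joint RGB histogram as the colour descriptor and simple concatenation with a precomputed PHOW/MSDSIFT vector (here a random placeholder). The bin count, normalization, and fusion weighting used in the article may differ.

```python
import numpy as np

def rgb_histogram(image, bins=8):
    """Joint RGB histogram, L1-normalized (the binning is an illustrative choice)."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)

# Fuse with any other descriptor (e.g. a PHOW/dense-SIFT histogram) by
# simple concatenation before feeding a classifier:
image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
phow_descriptor = np.random.rand(300)          # placeholder for the real PHOW vector
fused = np.concatenate([phow_descriptor, rgb_histogram(image)])
```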

  3. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: an overview and a request for scientific inputs.

    Science.gov (United States)

    Muller, Jan-Peter; Gwinner, Klaus; van Gasselt, Stephan; Ivanov, Anton; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Yershov, Vladimir; Sidirpoulos, Panagiotis; Kim, Jungrack

    2014-05-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 7 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s, to examine time-varying changes (such as the recent discovery of boulder movement [Orloff et al., 2011] or the sublimation of sub-surface ice revealed by meteoritic impact [Byrne et al., 2009]), and to examine geophysical phenomena, such as surface roughness on different length scales. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images) with 87% coverage with images ≤25m and more than 65% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. NASA began imaging the surface of Mars initially from flybys in the 1960s, with the first orbiter images ≤100m in the late 1970s from the Viking Orbiter. The most recent orbiter to begin imaging, in November 2006, is the NASA MRO, which has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images, especially MGS, MO, VO and HiRISE, their accuracy of georeferencing is often worse than the quality of Mars reference data from HRSC. This reduces their value for analysing

  4. Using color management in color document processing

    Science.gov (United States)

    Nehab, Smadar

    1995-04-01

    Color Management Systems have been used for several years in Desktop Publishing (DTP) environments. While this development hasn't matured yet, we are already experiencing the next generation of the color imaging revolution: Device Independent Color for the small office/home office (SOHO) environment. Though there are still open technical issues with device independent color matching, they are not the focal point of this paper. This paper discusses two new and crucial aspects in using color management in color document processing: the management of color objects and their associated color rendering methods; and a proposal for a precedence order and handshaking protocol among the various software components involved in color document processing. As color peripherals become affordable to the SOHO market, color management also becomes a prerequisite for common document authoring applications such as word processors. The first color management solutions were oriented towards DTP environments whose requirements were largely different. For example, DTP documents are image-centric, as opposed to SOHO documents that are text- and chart-centric. To achieve optimal reproduction on low-cost SOHO peripherals, it is critical that different color rendering methods are used for the different document object types. The first challenge in using color management in color document processing is the association of rendering methods with object types. As a result of an evolutionary process, color matching solutions are now available as application software, as driver embedded software and as operating system extensions. Consequently, document processing faces a new challenge: the correct selection of the color matching solution while avoiding duplicate color corrections.

  5. 'McMurdo' Panorama from Spirit's 'Winter Haven' (Color Stereo)

    Science.gov (United States)

    2006-01-01

    [figures removed for brevity, see original site: left-eye and right-eye views of a stereo pair for PIA01905] This 360-degree view, called the 'McMurdo' panorama, comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Spirit. From April through October 2006, Spirit has stayed on a small hill known as 'Low Ridge.' There, the rover's solar panels are tilted toward the sun to maintain enough solar power for Spirit to keep making scientific observations throughout the winter on southern Mars. This view of the surroundings from Spirit's 'Winter Haven' is presented as a stereo anaglyph to show the scene three-dimensionally when viewed through red-blue glasses (with the red lens on the left). Oct. 26, 2006, marks Spirit's 1,000th sol of what was planned as a 90-sol mission. (A sol is a Martian day, which lasts 24 hours, 39 minutes, 35 seconds). The rover has lived through the most challenging part of its second Martian winter. Its solar power levels are rising again. Spring in the southern hemisphere of Mars will begin in early 2007. Before that, the rover team hopes to start driving Spirit again toward scientifically interesting places in the 'Inner Basin' and 'Columbia Hills' inside Gusev crater. The McMurdo panorama is providing team members with key pieces of scientific and topographic information for choosing where to continue Spirit's exploration adventure. The Pancam began shooting component images of this panorama during Spirit's sol 814 (April 18, 2006) and completed the part shown here on sol 932 (Aug. 17, 2006). The panorama was acquired using all 13 of the Pancam's color filters, using lossless compression for the red and blue stereo filters, and only modest levels of compression on the remaining filters. The overall panorama consists of 1,449 Pancam images and represents a raw data volume of nearly 500 megabytes. It is thus the largest, highest-fidelity view of Mars

  6. Uniform color space analysis of LACIE image products

    Science.gov (United States)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  7. Multi-color imaging of magnetic Co/Pt heterostructures

    Directory of Open Access Journals (Sweden)

    Felix Willems

    2017-01-01

    We present an element specific and spatially resolved view of magnetic domains in Co/Pt heterostructures in the extreme ultraviolet spectral range. Resonant small-angle scattering and coherent imaging with Fourier-transform holography reveal nanoscale magnetic domain networks via magnetic dichroism of Co at the M2,3 edges as well as via strong dichroic signals at the O2,3 and N6,7 edges of Pt. We demonstrate for the first time simultaneous, two-color coherent imaging at a free-electron laser facility paving the way for a direct real space access to ultrafast magnetization dynamics in complex multicomponent material systems.

  8. Obtention of tumor volumes in PET images stacks using techniques of colored image segmentation; Obtencao de volumes tumorais em pilhas de imagens PET usando tecnicas de segmentacao de imagens coloridas

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose W.; Lopes Filho, Ferdinand J., E-mail: jose.wilson@recife.ifpe.edu.br [Instituto Federal de Educacao e Tecnologia de Pernambuco (IFPE) Recife, PE (Brazil); Vieira, Igor F., E-mail: igoradiologia@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Lima, Fernando R.A.; Cordeiro, Landerson P., E-mail: leoxofisico@gmail.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-NE), Recife, PE (Brazil)

    2014-07-01

    This work demonstrated, step by step, how to segment color images of the chest of an adult in order to separate the tumor volume without significantly changing the values of the R (red), G (green) and B (blue) components of the pixel colors. To obtain information that allows building a color map, the colors present in the images need to be segmented and classified into appropriate intervals. The segmentation technique used consists of selecting a small rectangle with color samples in a given region and then erasing the other regions of the image with a specific color called 'rubber'. The tumor region was segmented in one of the available images and the procedure is presented in tutorial format. All necessary computational tools have been implemented in DIP (Digital Image Processing), software developed by the authors. The results obtained, in addition to permitting the construction of a color map of the distribution of activity concentration in PET images, will also be useful in future work to insert tumors into voxel phantoms in order to perform dosimetric assessments.

  9. Illuminant color estimation based on pigmentation separation from human skin color

    Science.gov (United States)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2015-03-01

    Humans have a visual ability called "color constancy" that maintains the perceived colors of the same object across various light sources. An effective color constancy algorithm that uses human facial color in a digital color image was previously proposed; however, that method produces erroneous estimates because of differences among individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis, a method that separates skin color into melanin, hemoglobin and shading components. We use a stationary property of Japanese facial color that is calculated from the melanin and hemoglobin components. As a result, the proposed method can use the subject's facial color in the image without depending on individual differences among Japanese facial colors.

  10. Colors in Mind: A Novel Paradigm to Investigate Pure Color Imagery

    OpenAIRE

    Wantz, Andrea Laura; Borst, Grégoire; Mast, Fred; Lobmaier, Janek

    2015-01-01

    Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., “Is a sunflower darker yellow than a lemon”?). Although this approach is widely used in patient studies, differences in the ability to perform such color comparisons might simply reflect participants’ general knowledge of object colors rather than their ability to generate accurate visual mental images of the colors of ...

  11. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    Science.gov (United States)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

    In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme by exploiting quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, the quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has larger key space and higher security since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and unpredictability than those based on low-dimensional hyper-chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  12. Categorization and Searching of Color Images Using Mean Shift Algorithm

    Directory of Open Access Journals (Sweden)

    Prakash PANDEY

    2009-07-01

    Nowadays, image searching is still a challenging problem in content-based image retrieval (CBIR) systems. Most CBIR systems operate on all images without pre-sorting them, so the search result contains many unrelated images. The aim of this research is to propose a new object-based indexing system based on extracting salient region representatives from the image, categorizing the image into different types, and searching for images that are similar to a given query image. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique, and dominant objects are obtained by performing region grouping of segmented thumbnails. The category for an image is generated automatically by analyzing the image for the presence of a dominant object. The images in the database are clustered based on region feature similarity using Euclidean distance. Placing an image into a category can help the user navigate retrieval results more effectively. Extensive experimental results illustrate excellent performance.
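
    A minimal sketch of the colour-feature step, assuming scikit-learn's MeanShift on a random subsample of pixel colours; the subsampling, bandwidth estimation, and the subsequent region grouping and categorization are practical placeholders, not the paper's full pipeline.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def dominant_colors(image, sample_size=2000, quantile=0.1, seed=0):
    """Cluster pixel colors with mean shift and return the cluster centers
    (the 'dominant' colors used as region representatives)."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    sample = pixels[rng.choice(len(pixels),
                               size=min(sample_size, len(pixels)),
                               replace=False)]
    bandwidth = estimate_bandwidth(sample, quantile=quantile)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    ms.fit(sample)
    return ms.cluster_centers_            # one RGB triplet per dominant color
```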

  13. The Viking mission search for life on Mars

    Science.gov (United States)

    Klein, H. P.; Lederberg, J.; Rich, A.; Horowitz, N. H.; Oyama, V. I.; Levin, G. V.

    1976-01-01

    The scientific payload on the Viking Mars landers is described. Shortly after landing, two facsimile cameras capable of stereoscopic imaging will scan the landing site area in black and white, color, and infrared to reveal gross evidence of past or present living systems. A wide range mass spectrometer will record a complete mass spectrum for soil samples from mass 12 to mass 200 every 10.3 sec. Three experiments based on different assumptions on the nature of life on Mars, if it exists, will be carried out by the bio-lab. A pyrolytic release experiment is designed to measure photosynthetic or dark fixation of carbon dioxide or carbon monoxide into organic compounds. A labelled release experiment will test for metabolic activity during incubation of a surface sample moistened with a solution of radioactively labelled simple organic compounds. A gas exchange experiment will detect changes in the gaseous medium surrounding a soil sample as the result of metabolic activity. The hardware, function, and terrestrial test results of the bio-lab experiments are discussed.

  14. Image Simulation and Assessment of the Colour and Spatial Capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    Science.gov (United States)

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicholas; Caudill, C. M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-02-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions of the CaSSIS imager were used to resample spectral libraries, modelled spectra and to construct spectrally ( i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest and for ongoing scientific investigations on Mars. Coordinated datasets from Mars Reconnaissance Orbiter (MRO) are ideal, and specifically used for simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Imager (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology used herein employs a Gram-Schmidt spectral sharpening algorithm to combine the ~18-36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18-36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three "fully"-simulated image cubes of thirty unique locations on Mars ( i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test both the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) iron- and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that Ca
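
    The study injects high-resolution spatial detail from CTX into the lower-resolution CRISM-derived colours using Gram-Schmidt spectral sharpening. As a much simpler illustration of the same pan-sharpening idea, the sketch below applies a Brovey-style ratio sharpening, which is explicitly not the Gram-Schmidt algorithm used in the paper; band order, resampling, and radiometric handling are left to the caller.

```python
import numpy as np

def brovey_sharpen(ms_upsampled, pan, eps=1e-6):
    """Brovey-style sharpening: scale each upsampled multispectral band by the
    ratio of the high-resolution panchromatic image to the mean of the bands.
    ms_upsampled: (rows, cols, bands) colour cube already resampled to the
                  panchromatic pixel grid; pan: (rows, cols) high-resolution image.
    This is a simpler stand-in for the Gram-Schmidt sharpening used in the paper."""
    ms = ms_upsampled.astype(float)
    intensity = ms.mean(axis=2) + eps            # simulated low-resolution "pan"
    ratio = pan.astype(float) / intensity        # high-frequency detail to inject
    return ms * ratio[:, :, np.newaxis]
```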

  15. A study of glasses-type color CGH using a color filter considering reduction of blurring

    Science.gov (United States)

    Iwami, Saki; Sakamoto, Yuji

    2009-02-01

    We have developed a glasses-type color computer generated hologram (CGH) by using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. We can configure it very simply without a complex optical system, and the configuration yields a small and lightweight system suitable for glasses. When the cell is small enough, the colors are mixed and reconstructed color images are observed. In addition, the color expression of the reconstructed images also improves. However, using small cells blurs the reconstructed images for the following reasons: (1) interference between cells because of the correlation among the cells, and (2) reduction of resolution caused by the size of the cell hologram. We are investigating how to make a hologram that reconstructs high-resolution color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) the appropriate cell size for an eye system, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.

  16. Nearest patch matching for color image segmentation supporting neural network classification in pulmonary tuberculosis identification

    Science.gov (United States)

    Rulaningtyas, Riries; Suksmono, Andriyan B.; Mengko, Tati L. R.; Saptawati, Putri

    2016-03-01

    Pulmonary tuberculosis is a deadly infectious disease which occurs in many countries in Asia and Africa. In Indonesia, many people with tuberculosis are examined in community health centers. Examination for pulmonary tuberculosis is done through sputum smears with Ziehl-Neelsen staining using a conventional light microscope. Ziehl-Neelsen staining makes tuberculosis (TB) bacteria appear red and the sputum background appear blue. The first examination detects the presence of TB bacteria from their color, and then from the morphology of the bacteria themselves. Ziehl-Neelsen staining of sputum smears yields complex color images, so clinicians have difficulty examining slides manually: it is time consuming and requires extensive training to detect TB bacteria accurately, and clinicians face a heavy workload examining many sputum smear slides from patients. To assist clinicians when reading sputum smear slides, this research built a computer-aided diagnosis system with color image segmentation, feature extraction, and classification steps. K-means clustering with a patch technique was used to segment the digital sputum smear images, separating the TB bacteria from the background; this segmentation method gave a good accuracy of 97.68%. Feature extraction based on the geometrical shape of the TB bacteria was then applied. In the last step, a back-propagation neural network was used to classify TB bacteria and non-TB bacteria images in the sputum slides. The classification results of the back-propagation neural network are: learning time (42.69±0.02) seconds, number of epochs 5000, learning error rate 15%, learning accuracy (98.58±0.01)%, and test accuracy (96.54±0.02)%.
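
    A minimal sketch of the colour-clustering step, assuming scikit-learn's KMeans on raw RGB pixel values; the patch technique, the choice of k, and the mapping from clusters to the red bacilli class are not reproduced from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_color_segmentation(image, n_clusters=3, seed=0):
    """Cluster pixels by RGB color and return a label image plus cluster centers.
    With Ziehl-Neelsen staining, one cluster tends to capture the red bacilli
    and the others the blue background (cluster ids must be inspected)."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(pixels)
    return labels.reshape(image.shape[:2]), km.cluster_centers_
```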

  17. Color Image of Snow White Trenches and Scraping

    Science.gov (United States)

    2008-01-01

    This image was acquired by NASA's Phoenix Mars Lander's Surface Stereo Imager on the 31st Martian day of the mission, or Sol 31 (June 26, 2008), after the May 25, 2008 landing. This image shows the trenches informally called 'Snow White 1' (left), 'Snow White 2' (right), and within the Snow White 2 trench, the smaller scraping area called 'Snow White 3.' The Snow White 3 scraped area is about 5 centimeters (2 inches) deep. The dug and scraped areas are within the digging site called 'Wonderland.' The Snow White trenches and scraping prove that scientists can take surface soil samples, subsurface soil samples, and icy samples all from one unit. Scientists want to test samples to determine if some ice in the soil may have been liquid in the past during warmer climate cycles. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver

  18. Semantic Songket Image Search with Cultural Computing of Symbolic Meaning Extraction and Analytical Aggregation of Color and Shape Features

    Directory of Open Access Journals (Sweden)

    Desi Amirullah

    2015-06-01

    The term "Songket" comes from the Malay word "Sungkit", which means "to hook" or "to gouge". The names and variations of the motifs were derived from plants and animals, which served as sources of inspiration for the many songket patterns. Each songket pattern has a philosophy in the form of a rhyme that refers to the nature of the pattern's source, and that philosophy reflects the beliefs and values of Malay culture. In this research, we propose a system to facilitate an understanding of songket and its philosophy as a way to conserve songket culture. The system collects information on songket motif variation images using feature extraction methods. For each songket motif variation image, we extract the philosophy of its rhyme into impressions, extract color features using a 3D-Color Vector Quantization (3D-CVQ) histogram, and extract shape features using Hu moment invariants. We then built an image search based on impressions, and an impression search based on images, using search techniques based on color, shape and aggregation (a combination of color and shape). The experiments using an impression as query gave: (1) based on color, an average of 7.3 correct results with a total score of 41.9; (2) based on shape, an average of 3 correct results with a total score of 16.4; (3) based on aggregation, an average of 3 correct results with a total score of 17.4. Using an image as query: (1) based on color, an average precision of 95%; (2) based on shape, an average precision of 43.3%; (3) based on aggregation, an average precision of 73.3%. From our experiments, it can be concluded that the best search system, for both impression queries and image queries, is the one based on color. Keywords: image search, philosophy, impression, songket, cultural computing, feature extraction, analytical aggregation.
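
    A minimal sketch of the shape-feature step using OpenCV's Hu moment invariants on a binarized motif image; the Otsu thresholding and the log scaling are common practical choices and are assumptions here rather than details taken from the paper.

```python
import cv2
import numpy as np

def hu_shape_features(gray_image):
    """Hu moment invariants of a binarized motif image.
    Expects an 8-bit grayscale image; the Otsu threshold and the log scaling
    are practical choices, not necessarily those of the paper."""
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    moments = cv2.moments(binary)
    hu = cv2.HuMoments(moments).flatten()
    # Log scaling keeps the seven invariants in a comparable numeric range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```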

  19. Six-color intravital two-photon imaging of brain tumors and their dynamic microenvironment

    Directory of Open Access Journals (Sweden)

    Clément eRicard

    2014-02-01

    The majority of intravital studies on brain tumor in living animal so far rely on dual color imaging. We describe here a multiphoton imaging protocol to dynamically characterize the interactions between six cellular components in a living mouse. We applied this methodology to a clinically relevant glioblastoma multiforme (GBM) model designed in reporter mice with targeted cell populations labeled by fluorescent proteins of different colors. This model permitted us to make non-invasive longitudinal and multi-scale observations of cell-to-cell interactions. We provide examples of such 5D (x,y,z,t,color) images acquired on a daily basis from volumes of interest, covering most of the mouse parietal cortex at subcellular resolution. Spectral deconvolution allowed us to accurately separate each cell population as well as some components of the extracellular matrix. The technique represents a powerful tool for investigating how tumor progression is influenced by the interactions of tumor cells with host cells and the extracellular matrix micro-environment. It will be especially valuable for evaluating neuro-oncological drug efficacy and target specificity. The imaging protocol provided here can be easily translated to other mouse models of neuropathologies, and should also be of fundamental interest for investigations in other areas of systems biology.

  20. Six-color intravital two-photon imaging of brain tumors and their dynamic microenvironment.

    Science.gov (United States)

    Ricard, Clément; Debarbieux, Franck Christian

    2014-01-01

    The majority of intravital studies on brain tumor in living animal so far rely on dual color imaging. We describe here a multiphoton imaging protocol to dynamically characterize the interactions between six cellular components in a living mouse. We applied this methodology to a clinically relevant glioblastoma multiforme (GBM) model designed in reporter mice with targeted cell populations labeled by fluorescent proteins of different colors. This model permitted us to make non-invasive longitudinal and multi-scale observations of cell-to-cell interactions. We provide examples of such 5D (x,y,z,t,color) images acquired on a daily basis from volumes of interest, covering most of the mouse parietal cortex at subcellular resolution. Spectral deconvolution allowed us to accurately separate each cell population as well as some components of the extracellular matrix. The technique represents a powerful tool for investigating how tumor progression is influenced by the interactions of tumor cells with host cells and the extracellular matrix micro-environment. It will be especially valuable for evaluating neuro-oncological drug efficacy and target specificity. The imaging protocol provided here can be easily translated to other mouse models of neuropathologies, and should also be of fundamental interest for investigations in other areas of systems biology.

  1. Color vision test

    Science.gov (United States)

    ... (present from birth) color vision problems include: Achromatopsia -- complete color blindness, seeing only shades of gray; Deuteranopia -- difficulty telling ... Alternative names: Vision test - color; Ishihara color vision test.

  2. Demosaicing and Superresolution for Color Filter Array via Residual Image Reconstruction and Sparse Representation

    OpenAIRE

    Sun, Guangling

    2012-01-01

    A framework of demosaicing and superresolution for color filter array (CFA) via residual image reconstruction and sparse representation is presented. Given the intermediate image produced by certain demosaicing and interpolation technique, a residual image between the final reconstruction image and the intermediate image is reconstructed using sparse representation. The final reconstruction image has richer edges and details than that of the intermediate image. Specifically, a generic dictionar...

  3. Tomographic Particle Image Velocimetry using Smartphones and Colored Shadows

    KAUST Repository

    Aguirre-Pablo, Andres A.

    2017-06-12

    We demonstrate the viability of using four low-cost smartphone cameras to perform Tomographic PIV. We use colored shadows to imprint two or three different time-steps on the same image. The back-lighting is accomplished with three sets of differently-colored pulsed LEDs. Each set of Red, Green & Blue LEDs is shone on a diffuser screen facing each of the cameras. We thereby record the RGB-colored shadows of opaque suspended particles, rather than the conventionally used scattered light. We subsequently separate the RGB color channels, to represent the separate times, with preprocessing to minimize noise and cross-talk. We use commercially available Tomo-PIV software for the calibration, 3-D particle reconstruction and particle-field correlations, to obtain all three velocity components in a volume. Acceleration estimations can be done thanks to the triple pulse illumination. Our test flow is a vortex ring produced by forcing flow through a circular orifice, using a flexible membrane, which is driven by a pressurized air pulse. Our system is compared to a commercial stereoscopic PIV system for error estimations. We believe this proof of concept experiment will make this technique available for education, industry and scientists for a fraction of the hardware cost needed for traditional Tomo-PIV.

  4. Development of the science instrument CLUPI: the close-up imager on board the ExoMars rover

    Science.gov (United States)

    Josset, J.-L.; Beauvivre, S.; Cessa, V.; Martin, P.

    2017-11-01

    First mission of the Aurora Exploration Programme of ESA, ExoMars will demonstrate key flight and in situ enabling technologies, and will pursue fundamental scientific investigations. Planned for launch in 2013, ExoMars will send a robotic rover to the surface of Mars. The Close-UP Imager (CLUPI) instrument is part of the Pasteur Payload of the rover fixed on the robotic arm. It is a robotic replacement of one of the most useful instruments of the field geologist: the hand lens. Imaging of surfaces of rocks, soils and wind drift deposits at high resolution is crucial for the understanding of the geological context of any site where the Pasteur rover may be active on Mars. At the resolution provided by CLUPI (approx. 15 micrometer/pixel), rocks show a plethora of surface and internal structures, to name just a few: crystals in igneous rocks, sedimentary structures such as bedding, fracture mineralization, secondary minerals, details of the surface morphology, sedimentary bedding, sediment components, surface marks in sediments, soil particles. It is conceivable that even textures resulting from ancient biological activity can be visualized, such as fine lamination due to microbial mats (stromatolites) and textures resulting from colonies of filamentous microbes, potentially present in sediments and in palaeocavities in any rock type. CLUPI is a complete imaging system, consisting of an APS (Active Pixel Sensor) camera with 27° FOV optics. The sensor is sensitive to light between 400 and 900 nm with 12-bit digitization. The fixed focus optics provides well focused images of 4 cm x 2.4 cm rock area at a distance of about 10 cm. This challenging camera system, less than 200g, is an independent scientific instrument linked to the rover on board computer via a SpaceWire interface. After the science goals and specifications presentation, the development of this complex high performance miniaturized imaging system will be described.

  5. Mobile Robot Localization by Remote Viewing of a Colored Cylinder

    Science.gov (United States)

    Volpe, R.; Litwin, T.; Matthies, L.

    1995-01-01

    A system was developed for the Mars Pathfinder rover in which the rover checks its position by viewing the angle back to a colored cylinder with different colors for different angles. The rover determines distance by the apparent size of the cylinder.

  6. Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras

    OpenAIRE

    Mukaigawa, Yasuhiro; Genda, Daisuke; Yamane, Ryo; Shakunaga, Takeshi

    2003-01-01

    A color blending method for generating a high quality image of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. As each voxel is observed as different colors from different cameras, voxel color needs to be assigned appropriately from several colors. We present a color blending method, which calculates voxel color from a linear combination of the colors observed by multiple cameras. The weightings in the...

  7. Preferred and acceptable color gamut for reproducing natural image content

    NARCIS (Netherlands)

    Sekulovski, D.; de Volder, R.J.; Heynderickx, I.E.J.

    2009-01-01

    The preferred and maximally acceptable chroma for natural images of mainly one hue is determined using both a tuning and a paired-comparison task. The results clearly show the need for wide-gamut displays, but also the limited acceptance of over-saturated colors. Preference in chroma is dominated by

  8. Mars Pathfinder

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    First of NASA's Discovery missions. Launched in December 1996 and arrived at Mars on 4 July 1997. Mainly intended as a technology demonstration mission. Used airbags to cushion the landing on Mars. The Carl Sagan Memorial station returned images of an ancient flood plain in Ares Vallis. The 10 kg Sojourner rover used an x-ray spectrometer to study the composition of rocks and travelled about 100 ...

  9. Rotation invariants from Gaussian-Hermite moments of color images

    Czech Academy of Sciences Publication Activity Database

    Yang, B.; Suk, Tomáš; Flusser, Jan; Shi, Z.; Chen, X.

    2018-01-01

    Roč. 143, č. 1 (2018), s. 282-291 ISSN 0165-1684 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Color images * Object recognition * Rotation invariants * Gaussian–Hermite moments * Joint invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.110, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/suk-0479748.pdf

  10. Human visual system-based color image steganography using the contourlet transform

    Science.gov (United States)

    Abdul, W.; Carré, P.; Gaborit, P.

    2010-01-01

    We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system (HVS), and it is very important for steganographic schemes to be undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the steganographic scheme with respect to the color perception of the HVS is done using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
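
    A minimal sketch of the imperceptibility check mentioned above, computing SSIM on the RGB images and the mean CIEDE2000 difference in CIELAB with scikit-image; 8-bit input images and a recent scikit-image version (the channel_axis argument) are assumed, and the paper's exact evaluation protocol may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.color import rgb2lab, deltaE_ciede2000

def imperceptibility_report(cover_rgb, stego_rgb):
    """SSIM on the RGB images and mean CIEDE2000 difference in CIELAB,
    the two measures cited above (acceptance thresholds are left to the user)."""
    ssim = structural_similarity(cover_rgb, stego_rgb,
                                 channel_axis=-1, data_range=255)
    de2000 = deltaE_ciede2000(rgb2lab(cover_rgb), rgb2lab(stego_rgb))
    return ssim, float(np.mean(de2000))
```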

  11. Image simulation and assessment of the colour and spatial capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    Science.gov (United States)

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicolas; Caudill, Christy M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-01-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions of the CaSSIS imager were used to resample spectral libraries, modelled spectra and to construct spectrally (i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest and for ongoing scientific investigations on Mars. Coordinated datasets from Mars Reconnaissance Orbiter (MRO) are ideal, and specifically used for simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Imager (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology used herein employs a Gram-Schmidt spectral sharpening algorithm to combine the ∼18–36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18–36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three “fully”-simulated image cubes of thirty unique locations on Mars (i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test both the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) iron- and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that

  12. Characterization of Fluorescent Proteins for Three- and Four-Color Live-Cell Imaging in S. cerevisiae.

    Science.gov (United States)

    Higuchi-Sanabria, Ryo; Garcia, Enrique J; Tomoiaga, Delia; Munteanu, Emilia L; Feinstein, Paul; Pon, Liza A

    2016-01-01

    Saccharomyces cerevisiae are widely used for imaging fluorescently tagged protein fusions. Fluorescent proteins can easily be inserted into yeast genes at their chromosomal locus, by homologous recombination, for expression of tagged proteins at endogenous levels. This is especially useful for incorporation of multiple fluorescent protein fusions into a single strain, which can be challenging in organisms where genetic manipulation is more complex. However, the availability of optimal fluorescent protein combinations for 3-color imaging is limited. Here, we have characterized a combination of fluorescent proteins, mTFP1/mCitrine/mCherry for multicolor live cell imaging in S. cerevisiae. This combination can be used with conventional blue dyes, such as DAPI, for potential four-color live cell imaging.

  13. Multi-color imaging of fluorescent nanodiamonds in living HeLa cells using direct electron-beam excitation.

    Science.gov (United States)

    Nawa, Yasunori; Inami, Wataru; Lin, Sheng; Kawata, Yoshimasa; Terakawa, Susumu; Fang, Chia-Yi; Chang, Huan-Cheng

    2014-03-17

    Multi-color, high spatial resolution imaging of fluorescent nanodiamonds (FNDs) in living HeLa cells has been performed with a direct electron-beam excitation-assisted fluorescence (D-EXA) microscope. In this technique, fluorescent materials are directly excited with a focused electron beam and the resulting cathodoluminescence (CL) is detected with nanoscale resolution. Green- and red-light-emitting FNDs were employed for two-color imaging, which were observed simultaneously in the cells with high spatial resolution. This technique could be applied generally for multi-color immunostaining to reveal various cell functions. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Segmenting texts from outdoor images taken by mobile phones using color features

    Science.gov (United States)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing texts from images taken by mobile phones with low resolution has wide applications. It has been shown that a good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment texts from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) the initial process including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel, and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute the component similarities by dynamically adjusting the weights of the RGB channels, and merge groups hierarchically; and (iii) block selection, where we use the run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of the false alarm rates.
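
    A minimal sketch of step (i), binarizing each RGB channel independently; an Otsu threshold per channel is assumed here, since the record does not state the exact binarization rule, and polarity handling is left to the caller.

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize_per_channel(rgb_image):
    """Binarize each RGB channel independently with an Otsu threshold,
    returning three boolean masks (dark text becomes True here; the polarity
    may need to be flipped depending on the scene)."""
    masks = []
    for c in range(3):
        channel = rgb_image[:, :, c]
        t = threshold_otsu(channel)
        masks.append(channel < t)
    return np.stack(masks, axis=-1)
```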

  15. Color feature extraction of HER2 Score 2+ overexpression on breast cancer using Image Processing

    Directory of Open Access Journals (Sweden)

    Muhimmah Izzati

    2018-01-01

    One of the major challenges in developing an early diagnosis for assessing HER2 status lies in the gold standard itself. The accuracy, validity and refraction of the gold standard HER2 methods are widely discussed in the laboratory setting (Perez, et al., 2014), and the method for determining HER2 (human epidermal growth factor receptor 2) status is affected by reproducibility problems and is not reliable in predicting the benefit from anti-HER2 therapy (Nuciforo, et al., 2016). We extracted color features with a statistics-based segmentation method using a continuous-scale naive Bayes approach. The study consisted of three main parts: image acquisition, image segmentation, and image testing. Image acquisition consisted of image data collection and color deconvolution. Image segmentation consisted of color feature extraction, classifier training, classifier prediction, and skeletonization. Image testing consisted of testing the images, expert validation, and compiling the expert validation results. The membrane segmentation contains false positive and false negative areas, which together are called the system failure area. Experts validated that these failure areas correspond to regions that are not HER2 membrane (noise) and to segmented cytoplasm regions. Averaged over 40 HER2 score 2+ membrane images, 75.13% of the membrane area is successfully recognized by the system.
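
    A minimal sketch of the continuous-scale naive Bayes classification on colour features, using scikit-learn's GaussianNB on RGB triplets; the colour deconvolution, training-sample selection, and skeletonization stages of the actual pipeline are not reproduced, and the raw-RGB feature choice here is an assumption.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_pixel_classifier(rgb_samples, labels):
    """rgb_samples: (n_pixels, 3) training colors; labels: e.g. membrane vs. background."""
    clf = GaussianNB()
    clf.fit(rgb_samples.astype(float), labels)
    return clf

def classify_image(clf, image):
    """Return a per-pixel label map for a (rows, cols, 3) image."""
    flat = image.reshape(-1, 3).astype(float)
    return clf.predict(flat).reshape(image.shape[:2])
```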

  16. Color View 'Dodo' and 'Baby Bear' Trenches

    Science.gov (United States)

    2008-01-01

    NASA's Phoenix Mars Lander's Surface Stereo Imager took this image on Sol 14 (June 8, 2008), the 14th Martian day after landing. It shows two trenches dug by Phoenix's Robotic Arm. Soil from the right trench, informally called 'Baby Bear,' was delivered to Phoenix's Thermal and Evolved-Gas Analyzer, or TEGA, on Sol 12 (June 6). The following several sols included repeated attempts to shake the screen over TEGA's oven number 4 to get fine soil particles through the screen and into the oven for analysis. The trench on the left is informally called 'Dodo' and was dug as a test. Each of the trenches is about 9 centimeters (3 inches) wide. This view is presented in approximately true color by combining separate exposures taken through different filters of the Surface Stereo Imager. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  17. Color measurement of tea leaves at different drying periods using hyperspectral imaging technique.

    Science.gov (United States)

    Xie, Chuanqi; Li, Xiaoli; Shao, Yongni; He, Yong

    2014-01-01

    This study investigated the feasibility of using the hyperspectral imaging technique for nondestructive measurement of color components (ΔL*, Δa* and Δb*) and for classifying tea leaves from different drying periods. Hyperspectral images of tea leaves at five drying periods were acquired in the spectral region of 380-1030 nm. The three color features were measured by a colorimeter. Different preprocessing algorithms were applied to select the best one in accordance with the prediction results of partial least squares regression (PLSR) models. Competitive adaptive reweighted sampling (CARS) and the successive projections algorithm (SPA) were used to identify the effective wavelengths, respectively. Different models (least squares-support vector machine [LS-SVM], PLSR, principal components regression [PCR] and multiple linear regression [MLR]) were established to predict the three color components, respectively. The SPA-LS-SVM model performed excellently, with correlation coefficients (rp) of 0.929 for ΔL*, 0.849 for Δa* and 0.917 for Δb*, respectively. An LS-SVM model was built for the classification of the different tea leaves. The correct classification rates (CCRs) ranged from 89.29% to 100% in the calibration set and from 71.43% to 100% in the prediction set, respectively. The total classification results were 96.43% in the calibration set and 85.71% in the prediction set. The results showed that the hyperspectral imaging technique could be used as an objective and nondestructive method to determine color features and classify tea leaves at different drying periods.

  18. Color measurement of tea leaves at different drying periods using hyperspectral imaging technique.

    Directory of Open Access Journals (Sweden)

    Chuanqi Xie

    This study investigated the feasibility of using the hyperspectral imaging technique for nondestructive measurement of color components (ΔL*, Δa* and Δb*) and for classifying tea leaves from different drying periods. Hyperspectral images of tea leaves at five drying periods were acquired in the spectral region of 380-1030 nm. The three color features were measured by a colorimeter. Different preprocessing algorithms were applied to select the best one in accordance with the prediction results of partial least squares regression (PLSR) models. Competitive adaptive reweighted sampling (CARS) and the successive projections algorithm (SPA) were used to identify the effective wavelengths, respectively. Different models (least squares-support vector machine [LS-SVM], PLSR, principal components regression [PCR] and multiple linear regression [MLR]) were established to predict the three color components, respectively. The SPA-LS-SVM model performed excellently, with correlation coefficients (rp) of 0.929 for ΔL*, 0.849 for Δa* and 0.917 for Δb*, respectively. An LS-SVM model was built for the classification of the different tea leaves. The correct classification rates (CCRs) ranged from 89.29% to 100% in the calibration set and from 71.43% to 100% in the prediction set, respectively. The total classification results were 96.43% in the calibration set and 85.71% in the prediction set. The results showed that the hyperspectral imaging technique could be used as an objective and nondestructive method to determine color features and classify tea leaves at different drying periods.
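
    A minimal sketch of the PLSR modelling step with scikit-learn, using random arrays in place of the real spectra and colorimeter measurements; the number of latent components, the calibration/prediction split, and the preprocessing are placeholders rather than the settings used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: mean reflectance spectra per sample (n_samples, n_wavelengths);
# Y: measured color components (n_samples, 3) for dL*, da*, db*.
# Random data stands in for the real measurements.
X = np.random.rand(120, 200)
Y = np.random.rand(120, 3)

X_cal, X_pred, Y_cal, Y_pred = train_test_split(X, Y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10)     # component count is a placeholder
pls.fit(X_cal, Y_cal)
Y_hat = pls.predict(X_pred)
r_p = np.corrcoef(Y_pred[:, 0], Y_hat[:, 0])[0, 1]   # prediction correlation for dL*
```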

  19. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Final Report on the very variable surface of Mars

    Science.gov (United States)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Tao, Yu; Putri, Kiky; Campbell, Jacqueline; Xiong, Si-Ting; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; Walter, Sebastian; Schreiner, Bjoern; Steikert, Ralf; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Houghton, Robert; Kim, Jung-Rack

    2017-04-01

    There has been a revolution in 3D surface imaging of Mars over the last 12 years with systematic stereoscopy from HRSC. Digital Terrain Models (DTMs) and OrthoRectified Images (ORIs) have been produced for almost 50% of the Martian surface. DLR, together with the HRSC science team, produced 3D HRSC mosaic products for large regions comprising around 100 individual strips per region (MC-11E/W). UCL processed full coverage of DTMs over the South Polar Residual Cap (SPRC) and started work on the North Polar Layered Deposits (NPLD). The iMars project has been exploiting this unique set of 3D products as a basemap to co-register NASA imagery going back to the 1970s. UCL have developed an automated processing chain for CTX and HiRISE 3D processing to densify the global HRSC dataset with DTMs down to 18m and 75cm respectively, using a modification of the open source NASA Ames Stereo Pipeline [1]. 1542 CTX DTMs + ORIs were processed using the Microsoft Azure® cloud and an in-house linux cluster. It is planned to process around 10% of the total HiRISE stereo-DTMs before the end of the project. A fully Automated Co-Registration and Orthorectification (ACRO) system has been developed at UCL and applied to the production of around 15,000 NASA images. These were co-registered to an HRSC pixel (typically 12.5m/pixel) and orthorectified to HRSC DTMs of 50-150m spacing [2] over MC-11E/W. All of these new products are viewable through an OGC-compliant webGIS developed at FUB. This includes tools for viewing temporal sequences of co-registered ORIs over the same area [3]. Corresponding MARSIS and SHARAD data can be viewed through a QGIS plugin made publicly available [4]. An automated data mining system has been developed at UCL [5] for change detection to search and classify features in images going back to Viking Orbiter of IFoV ≤100m. In parallel, a citizen science project at Nottingham University [6] has defined training samples for classification of

  20. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively set. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to SHQ mode, without introducing visible distortions with respect to the SHQ-compressed images.
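
    A minimal sketch of the first strategy's flow (blind noise estimation followed by an adaptive quality setting), using scikit-image's estimate_sigma and Pillow's JPEG writer; the linear sigma-to-quality mapping is a made-up placeholder, not the paper's scaling-factor rule, and an 8-bit RGB array plus a recent scikit-image version are assumed.

```python
import numpy as np
from PIL import Image
from skimage.restoration import estimate_sigma

def adaptive_jpeg_save(rgb_array, path):
    """Estimate the noise level blindly, then pick a JPEG quality: the noisier
    the image, the coarser the quantization. The linear mapping below is a
    placeholder, not the scaling rule from the paper; rgb_array is uint8 RGB."""
    sigma = estimate_sigma(rgb_array, channel_axis=-1, average_sigmas=True)
    quality = int(np.clip(95 - 4.0 * sigma, 40, 95))
    Image.fromarray(rgb_array).save(path, "JPEG", quality=quality)
    return sigma, quality
```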

  1. Mars Stratigraphy Mission

    Science.gov (United States)

    Budney, C. J.; Miller, S. L.; Cutts, J. A.

    2000-01-01

    The Mars Stratigraphy Mission lands a rover on the surface of Mars which descends down a cliff in Valles Marineris to study the stratigraphy. The rover carries a unique complement of instruments to analyze and age-date materials encountered during descent past 2 km of strata. The science objective for the Mars Stratigraphy Mission is to identify the geologic history of the layered deposits in the Valles Marineris region of Mars. This includes constraining the time interval for formation of these deposits by measuring the ages of various layers and determining the origin of the deposits (volcanic or sedimentary) by measuring their composition and imaging their morphology.

  2. Detection of Crater Rims by Image Analysis in Very High Resolution Images of Mars, Mercury and the Moon

    Science.gov (United States)

    Pina, P.; Marques, J. S.; Bandeira, L.

    2013-12-01

    The adaptive nature of automated crater detection algorithms permits achieving a high level of autonomous detection on different surfaces, and they have consequently become an important tool in the update of crater catalogues. Nevertheless, the available approaches assume all craters are circular and only provide as output the radius and location of each crater. However, the delineation of impact craters following the local variability of the rims is also important to, among others, evaluate their degree of degradation or preservation, namely in studies related to ancient climate analysis. This contour determination is normally prepared in a manual way but can advantageously be done by image analysis methods, eliminating subjectivity and allowing large-scale delineations. We have recently proposed a pair of independent approaches to tackle this problem, one based on processing the crater image in polar coordinates [1], the other using morphological operators [2], which achieved a good degree of success on very high resolution images from Mars [3-4], but where enough room for improvement was still available. Thus, the integration of both approaches into a single one, suppressing the individual drawbacks of the previous approaches, permitted us to strengthen the detection procedure. We now describe the novel processing sequence that we have built and test it intensively on a wider variety of planetary surfaces, namely those of Mars, Mercury and the Moon, using the very high resolution images provided by the HiRISE, MDIS and LROC cameras. The automated delineations of the craters are compared to a ground-truth reference (manually delineated contours), so a quantitative evaluation can be performed; on a dataset of more than one thousand impact craters we have obtained a high global delineation rate. The breakdown by crater size on each surface is performed. The whole processing procedure works on raster images and also delivers the output in the same image format

  3. A NEW TECHNIQUE BASED ON CHAOTIC STEGANOGRAPHY AND ENCRYPTION TEXT IN DCT DOMAIN FOR COLOR IMAGE

    Directory of Open Access Journals (Sweden)

    MELAD J. SAEED

    2013-10-01

    Full Text Available Image steganography is the art of hiding information in a cover image. This paper presents a new technique based on chaotic steganography and encryption of text in the DCT domain for color images, where the DCT is used to transform the original (cover) image from the spatial domain to the frequency domain. The technique uses a chaotic function in two phases: first, to encrypt the secret message; second, to embed it in the DCT of the cover image. With this new technique, good results are obtained by satisfying the important properties of steganography, such as imperceptibility, assessed through the mean square error (MSE), peak signal-to-noise ratio (PSNR) and normalized correlation (NC), and capacity, improved by encoding the secret message characters with variable-length codes and embedding the secret message in only one level of the color image.
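    A minimal Python sketch of the two chaotic phases described above: a logistic-map keystream for encryption, and parity embedding of the encrypted bits in one mid-frequency DCT coefficient per 8x8 block. The map parameters, coefficient position and quantization step are illustrative assumptions, not the paper's exact settings.

        import numpy as np
        from scipy.fftpack import dct, idct

        def logistic_bits(n, x0=0.7, r=3.99):
            # Chaotic keystream from the logistic map x -> r * x * (1 - x).
            x, bits = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1.0 - x)
                bits[i] = 1 if x > 0.5 else 0
            return bits

        def dct2(b):  return dct(dct(b, norm='ortho', axis=0), norm='ortho', axis=1)
        def idct2(b): return idct(idct(b, norm='ortho', axis=0), norm='ortho', axis=1)

        def embed(channel, message_bits, step=16.0, pos=(3, 4)):
            # channel: uint8 2-D array (one color plane); message_bits: uint8 array of 0/1.
            bits = message_bits ^ logistic_bits(len(message_bits))   # chaotic encryption (XOR)
            out, k = channel.astype(np.float64).copy(), 0
            for y in range(0, out.shape[0] - 7, 8):
                for x in range(0, out.shape[1] - 7, 8):
                    if k >= len(bits):
                        return np.clip(np.round(out), 0, 255).astype(np.uint8)
                    block = dct2(out[y:y + 8, x:x + 8])
                    q = np.round(block[pos] / step)
                    if int(q) % 2 != bits[k]:        # force coefficient parity to carry the bit
                        q += 1
                    block[pos] = q * step
                    out[y:y + 8, x:x + 8] = idct2(block)
                    k += 1
            return np.clip(np.round(out), 0, 255).astype(np.uint8)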

  4. Two Moons and the Pleiades from Mars

    Science.gov (United States)

    2005-01-01

    [Figure removed for brevity, see original site: inverted image of two moons and the Pleiades from Mars.] Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit recently settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. In this view, the Pleiades, a star cluster also known as the 'Seven Sisters,' is visible in the lower left corner. The bright star Aldebaran and some of the stars in the constellation Taurus are visible on the right. Spirit acquired this image the evening of martian day, or sol, 590 (Aug. 30, 2005). The image on the right provides an enhanced-contrast view with annotation. Within the enhanced halo of light is an insert of an unsaturated view of Phobos taken a few images later in the same sequence. On Mars, Phobos would be easily visible to the naked eye at night, but would be only about one-third as large as the full Moon appears from Earth. Astronauts staring at Phobos from the surface of Mars would notice its oblong, potato-like shape and that it moves quickly against the background stars. Phobos takes only 7 hours, 39 minutes to complete one orbit of Mars. That is so fast, relative to the 24-hour-and-39-minute sol on Mars (the length of time it takes for Mars to complete one rotation), that Phobos rises in the west and sets in the east. Earth's moon, by comparison, rises in the east and sets in the west. The smaller martian moon, Deimos, takes 30 hours, 12 minutes to complete one orbit of Mars. That orbital period is longer than a martian sol, and so Deimos rises, like most solar system moons, in the east and sets in the west. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with the panoramic camera, using the camera's broadband filter, which was designed specifically

  5. MARS: Microarray analysis, retrieval, and storage system

    Directory of Open Access Journals (Sweden)

    Scheideler Marcel

    2005-04-01

    Full Text Available Abstract Background Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy-to-use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high-throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multi-color microarray data. The system comprises a laboratory information management system (LIMS), quality control management, as well as a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enable export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion We have developed an integrated system tailored to serve the specific needs of microarray-based research projects using a unique fusion of Web-based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.

  6. Recognition memory for colored and black-and-white scenes in normal and color deficient observers (dichromats).

    Science.gov (United States)

    Brédart, Serge; Cornet, Alyssa; Rakic, Jean-Marie

    2014-01-01

    Color deficient (dichromat) and normal observers' recognition memory for colored and black-and-white natural scenes was evaluated through several parameters: the rate of recognition, discrimination (A'), response bias (B"D), response confidence, and the proportion of conscious recollections (Remember responses) among hits. At the encoding phase, 36 images of natural scenes were each presented for 1 sec. Half of the images were shown in color and half in black-and-white. At the recognition phase, these 36 pictures were intermixed with 36 new images. The participants' task was to indicate whether an image had been presented or not at the encoding phase, to rate their level of confidence in their response, and, in the case of a positive response, to classify the response as a Remember, a Know or a Guess response. Results indicated that accuracy, response discrimination, response bias and confidence ratings were higher for colored than for black-and-white images; this advantage for colored images was similar in both groups of participants. Rates of Remember responses were not higher for colored images than for black-and-white ones in either group. Interestingly, however, Remember responses were significantly more often based on color information for colored than for black-and-white images in normal observers only, not in dichromats.
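    The discrimination and bias measures named above are commonly computed with the standard nonparametric formulas for A' and B"D; a small Python helper is sketched below (hit and false-alarm rates are assumed to be corrected away from exactly 0 or 1, as is usual practice):

        def a_prime(h, f):
            # Nonparametric sensitivity A' from hit rate h and false-alarm rate f.
            if h >= f:
                return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
            return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

        def b_double_prime_d(h, f):
            # Donaldson's response-bias index B''D (positive = conservative).
            return ((1 - h) * (1 - f) - h * f) / ((1 - h) * (1 - f) + h * f)

        # Hypothetical example: 30/36 hits and 5/36 false alarms for colored scenes.
        print(a_prime(30 / 36, 5 / 36), b_double_prime_d(30 / 36, 5 / 36))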

  7. The Aesthetics of Astrophysics: How to Make Appealing Color-composite Images that Convey the Science

    Science.gov (United States)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; Arcand, Kimberly K.; Watzke, Megan

    2017-05-01

    Astronomy has a rich tradition of using color photography and imaging, for visualization in research as well as for sharing scientific discoveries in formal and informal education settings (i.e., for “public outreach”). In the modern era, astronomical research has benefitted tremendously from electronic cameras that allow data and images to be generated and analyzed in a purely digital form with a level of precision that previously was not possible. Advances in image-processing software have also enabled color-composite images to be made in ways that are much more complex than with darkroom techniques, not only at optical wavelengths but across the electromagnetic spectrum. The Internet has made it possible to rapidly disseminate these images to eager audiences. Alongside these technological advances, there have been gains in understanding how to make images that are scientifically illustrative as well as aesthetically pleasing. Studies have also given insights on how the public interprets astronomical images and how that can be different than professional astronomers. An understanding of these differences will help in the creation of images that are meaningful to both groups. In this invited review, we discuss the techniques behind making color-composite images as well as examine the factors one should consider when doing so, whether for data visualization or public consumption. We also provide a brief history of astronomical imaging with a focus on the origins of the "modern era" during which distribution of high-quality astronomical images to the public is a part of nearly every professional observatory's public outreach. We review relevant research into the expectations and misconceptions that often affect the public's interpretation of these images.
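    For the data-visualization side, one common recipe for building a color composite from three aligned filter frames is an arcsinh intensity stretch, which softens bright cores while preserving faint structure. A minimal Python sketch follows; the scaling constants are illustrative assumptions, not a prescription from the review.

        import numpy as np

        def composite(r, g, b, q=8.0, stretch=0.05):
            # Combine three aligned filter frames into an RGB composite with an arcsinh stretch.
            channels = [np.asarray(c, dtype=np.float64) for c in (r, g, b)]
            lo = min(np.percentile(c, 1) for c in channels)      # common black point
            scaled = [c - lo for c in channels]
            i = np.mean(scaled, axis=0) + 1e-12                  # mean intensity per pixel
            factor = np.arcsinh(q * i / stretch) / (q * i)       # luminance-preserving stretch
            rgb = np.stack([c * factor for c in scaled], axis=-1)
            return np.clip(rgb / np.percentile(rgb, 99.5), 0, 1)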

  8. Color Laser Microscope

    Science.gov (United States)

    Awamura, D.; Ode, T.; Yonezawa, M.

    1987-04-01

    A color laser microscope utilizing a new color laser imaging system has been developed for the visual inspection of semiconductors. The light source, produced by three lasers (red: He-Ne; green: Ar; blue: He-Cd), is deflected horizontally by an AOD (acousto-optic deflector) and vertically by a vibrating mirror. The laser beam is focused into a small spot that is scanned over the sample at high speed. The light reflected back from the sample returns via the original vibrating mirror, so that it carries line-by-line information, and is guided to a CCD image sensor where it is converted into a video signal. Individual CCD image sensors are used for each of the three R, G, and B color image signals. The confocal optical system with its laser light source yields a color TV monitor image with no flaring and much sharper resolution than that of a conventional optical microscope. The AOD makes a high-speed laser scan possible, and an NTSC or PAL TV video signal is produced in real time without any video memory. Since the light source is composed of R, G, and B laser beams, color separation superior to that of white-light illumination is achieved. Because of the photometric linearity of the image detector, the R, G, and B outputs of the system are well suited to hue analysis. The CCD linear image sensors in the optical system produce no geometrical distortion, and good color registration is achieved in principle. The output signal can be used for high-accuracy line-width measurement. The many features of the color laser microscope make it ideally suited for the visual inspection of semiconductor processing, and a number of these systems have already been installed in such a capacity. The color laser microscope can also be a very useful tool in the fields of materials engineering and biotechnology.

  9. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating requirements and creating a calibration target for automated microscopy systems (AMS) for biomedical specimens, in order to make algorithms and software invariant to the hardware configuration. The required number of color fields in the calibration target and their color coordinates are mostly determined by the color correction method whose equation coefficients are estimated during the calibration process. The paper analyzes existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating which color correction method is most useful for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations using captured images of 217 color fields of the Kodak Q60-E3 calibration target; the regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality is provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and choice of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error for both operating modes of the digital camera: with "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
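    The core of such a comparison, fitting a 3rd-order polynomial color correction by regularized (conditioned) least squares on target patches, can be sketched in Python as follows; the regularization weight is an assumed placeholder, not the paper's experimentally chosen value.

        import numpy as np

        def poly_features(rgb, ):
            # Full 3rd-order polynomial color terms (20 terms including the bias).
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            terms = [np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, r * b, g * b,
                     r**3, g**3, b**3, r*r*g, r*r*b, g*g*r, g*g*b, b*b*r, b*b*g, r*g*b]
            return np.stack(terms, axis=1)

        def fit_correction(measured_rgb, reference_rgb, lam=1e-3):
            # Ridge-regularized least squares: reference ~ poly_features(measured) @ W.
            x = poly_features(measured_rgb)
            a = x.T @ x + lam * np.eye(x.shape[1])
            return np.linalg.solve(a, x.T @ reference_rgb)

        def apply_correction(rgb, w):
            return poly_features(rgb) @ w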

  10. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging. Thesis by Jake A. Jones, Lieutenant Commander, United States Navy, June 2017. The work develops techniques that use the color shift in underwater imaging to determine the distances from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous underwater vehicles.

  11. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
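    A minimal Python sketch of the two stages described above, histogram intersection against a healthy-leaf reference followed by clustering of the outlier pixels; bin count, threshold and cluster count are assumptions for illustration, and both inputs are assumed to be uint8 RGB arrays.

        import numpy as np
        from sklearn.cluster import KMeans

        def hist_intersection_mask(test, healthy, bins=16, thresh=0.5):
            # Flag pixels whose colors are rare in the healthy-leaf color distribution.
            h_test, _ = np.histogramdd(test.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
            h_ref, _ = np.histogramdd(healthy.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
            h_test /= h_test.sum()
            h_ref /= h_ref.sum()
            common = np.minimum(h_test, h_ref)                    # histogram intersection per bin
            idx = np.clip(test // (256 // bins), 0, bins - 1).astype(int)
            num = common[idx[..., 0], idx[..., 1], idx[..., 2]]
            den = h_test[idx[..., 0], idx[..., 1], idx[..., 2]] + 1e-12
            return (num / den) < thresh                           # True = outlier (possible lesion)

        def cluster_outliers(test, mask, k=3):
            # Group outlier pixels into candidate disease regions by color.
            pix = test[mask].reshape(-1, 3).astype(np.float64)
            if len(pix) == 0:
                return None
            return KMeans(n_clusters=min(k, len(pix)), n_init=10).fit(pix)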

  12. Effects of chromatic image statistics on illumination induced color differences.

    Science.gov (United States)

    Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels

    2013-09-01

    We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
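    The key scene statistic used above, the orientation of the chromatic distribution relative to the illuminant-change vector, can be estimated with a small Python helper (CIELAB pixel values are assumed to be precomputed; 0 degrees means the shift is parallel to the major chromatic axis, the condition reported as giving the best fidelity):

        import numpy as np

        def chromatic_axis_alignment(lab_pixels, illuminant_shift_ab):
            # Angle between the scene's principal a*b* axis and the illuminant-change direction.
            ab = lab_pixels[:, 1:3] - lab_pixels[:, 1:3].mean(axis=0)
            cov = np.cov(ab, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            major = evecs[:, np.argmax(evals)]                     # major axis of the chromatic ellipse
            v = np.asarray(illuminant_shift_ab, dtype=float)
            cosang = abs(major @ v) / (np.linalg.norm(v) * np.linalg.norm(major) + 1e-12)
            return np.degrees(np.arccos(np.clip(cosang, 0, 1)))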

  13. Tiny Devices Project Sharp, Colorful Images

    Science.gov (United States)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  14. Multi-color pyrometry imaging system and method of operating the same

    Science.gov (United States)

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.
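    The patent abstract describes the optical hardware; the temperature retrieval that sequential two-band imaging typically enables is standard ratio (two-color) pyrometry, sketched below in Python under Wien and gray-body assumptions (the band centers are illustrative, not values from the patent):

        import numpy as np

        C2 = 1.4388e-2  # second radiation constant, m*K

        def ratio_temperature(i1, i2, lam1, lam2):
            # Per-pixel temperature (K) from intensity images in two wavelength bands,
            # assuming the Wien approximation and equal emissivity in both bands.
            r = np.asarray(i1, dtype=np.float64) / (np.asarray(i2, dtype=np.float64) + 1e-12)
            return C2 * (1.0 / lam2 - 1.0 / lam1) / (np.log(r) - 5.0 * np.log(lam2 / lam1))

        # Hypothetical usage with narrow bands centred at 700 nm and 900 nm:
        # t_map = ratio_temperature(img_700, img_900, 700e-9, 900e-9)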

  15. Relationship between color and tannin content in sorghum grain: application of image analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    M Sedghi

    2012-03-01

    Full Text Available The relationship between sorghum grain color and tannin content has been reported in several references. In this study, 33 phenotypes of sorghum grain differing in seed characteristics were collected and analyzed by the Folin-Ciocalteu method. A computer image analysis method was used to determine the color characteristics of all 33 sorghum phenotypes. Two models, multiple linear regression and an artificial neural network (ANN), were developed to describe tannin content in sorghum grain from three input color parameters. The goodness of fit of the models was tested using R², MS error, and bias. Computer image analysis proved a suitable method to estimate tannin through sorghum grain color strength. The color quality of the samples was described by three color parameters: L* (lightness), a* (redness, from green to red) and b* (blueness, from blue to yellow). The developed regression and ANN models showed a strong relationship between color and tannin content of the samples. The goodness of fit (in terms of R²) obtained when training the ANN model showed higher prediction accuracy for the ANN than for the equation established by the regression method (0.96 vs. 0.88). The ANN model also showed a lower MS error in the residual distribution than the regression model (0.002 vs. 0.006). The platform of computer image analysis and an ANN-based model may be used to estimate the tannin content of sorghum.
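    A minimal Python sketch of fitting and comparing the two model types on such data; the file names, network size and training settings are assumptions for illustration, not the study's configuration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import r2_score, mean_squared_error

        # Hypothetical data: one row of color parameters (L*, a*, b*) per phenotype,
        # and the tannin content measured by the Folin-Ciocalteu assay.
        X = np.loadtxt("sorghum_lab.csv", delimiter=",")        # shape (33, 3)
        y = np.loadtxt("sorghum_tannin.csv", delimiter=",")     # shape (33,)

        lin = LinearRegression().fit(X, y)
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

        for name, model in (("regression", lin), ("ANN", ann)):
            pred = model.predict(X)
            print(name, "R2 =", round(r2_score(y, pred), 3),
                  "MSE =", round(mean_squared_error(y, pred), 4))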

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Katawas mineral district in Afghanistan: Chapter N in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA

  17. Astronomy with the Color Blind

    Science.gov (United States)

    Smith, Donald A.; Melrose, Justyn

    2014-01-01

    The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…

  18. Butterfly wing coloration studied with a novel imaging scatterometer

    Science.gov (United States)

    Stavenga, Doekele

    2010-03-01

    Animal coloration functions for display or camouflage. Notably, insects provide numerous examples of a rich variety of applied optical mechanisms. For instance, many butterflies feature a distinct dichromatism, that is, the wing coloration of the male and the female differ substantially. The male Brimstone, Gonepteryx rhamni, has yellow wings that are strongly UV iridescent, but the female has white wings with low reflectance in the UV and a high reflectance in the visible wavelength range. In the Small White cabbage butterfly, Pieris rapae crucivora, the wing reflectance of the male is low in the UV and high at visible wavelengths, whereas the wing reflectance of the female is higher in the UV and lower in the visible. Pierid butterflies apply nanosized, strongly scattering beads to achieve their bright coloration. The male Pipevine Swallowtail butterfly, Battus philenor, has dorsal wings with scales functioning as thin film gratings that exhibit polarized iridescence; the dorsal wings of the female are matte black. The polarized iridescence probably functions in intraspecific, sexual signaling, as has been demonstrated in Heliconius butterflies. An example of camouflage is the Green Hairstreak butterfly, Callophrys rubi, where photonic crystal domains exist in the ventral wing scales, resulting in a matte green color that well matches the color of plant leaves. The spectral reflection and polarization characteristics of biological tissues can be rapidly and with unprecedented detail assessed with a novel imaging scatterometer-spectrophotometer, built around an elliptical mirror [1]. Examples of butterfly and damselfly wings, bird feathers, and beetle cuticle will be presented. [1] D.G. Stavenga, H.L. Leertouwer, P. Pirih, M.F. Wehling, Optics Express 17, 193-202 (2009)

  19. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Bakhud mineral district in Afghanistan: Chapter U in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Uruzgan mineral district in Afghanistan: Chapter V in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Baghlan mineral district in Afghanistan: Chapter P in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from

  2. From printed color to image appearance: tool for advertising assessment

    Science.gov (United States)

    Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro

    2012-07-01

    We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific light source. Knowing in advance how an image will render visually under different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination and finally computes an estimate of the appearance.

  3. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification of 84 percent).

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Takhar mineral district in Afghanistan: Chapter Q in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Parwan mineral district in Afghanistan: Chapter CC in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  6. Layers of 'Cabo Frio' in 'Victoria Crater' (False Color)

    Science.gov (United States)

    2006-01-01

    This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover, the tip of the spectacular, layered, Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and the exposed rock layers are about 15 meters (about 50 feet) tall. This is an enhanced false color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day, (Sept. 28, 2006) using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  7. Portable real-time color night vision

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2008-01-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized
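    A minimal Python sketch of a sample-based lookup-table colorization in this spirit: cluster the multi-band night-time values and assign each cluster the mean color of the corresponding pixels in a co-registered daytime reference. The clustering choices below are assumptions for illustration, not the authors' exact method.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_color_lut(multiband, daytime_rgb, n_entries=256):
            # multiband: HxWxB night-time image; daytime_rgb: co-registered HxWx3 uint8 reference.
            bands = multiband.reshape(-1, multiband.shape[-1]).astype(np.float64)
            km = KMeans(n_clusters=n_entries, n_init=4, random_state=0).fit(bands)
            day = daytime_rgb.reshape(-1, 3).astype(np.float64)
            lut = np.array([day[km.labels_ == k].mean(axis=0) if np.any(km.labels_ == k)
                            else np.zeros(3) for k in range(n_entries)])
            return km, lut

        def apply_color_lut(multiband, km, lut):
            idx = km.predict(multiband.reshape(-1, multiband.shape[-1]).astype(np.float64))
            return lut[idx].reshape(multiband.shape[:-1] + (3,)).astype(np.uint8)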

  8. Surface of Mars: the view from the Viking 1 lander

    International Nuclear Information System (INIS)

    Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Liebes, S. Jr.; Morris, E.C.; Patterson, W.R.; Pollack, J.B.; Sagan, C.; Taylor, G.R.

    1976-01-01

    The first photographs ever returned from the surface of Mars were obtained by two facsimile cameras aboard the Viking 1 lander, including black-and-white and color, 0.12° and 0.04° resolution, and monoscopic and stereoscopic images. The surface, on the western slopes of Chryse Planitia, is a boulder-strewn deeply reddish desert, with distant eminences--some of which may be the rims of impact craters--surmounted by a pink sky. Both impact and aeolian processes are evident. After dissipation of a small dust cloud stirred by the landing maneuvers, no subsequent signs of movement were detected on the landscape, and nothing has been observed that is indicative of macroscopic biology at this time and place

  9. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    Science.gov (United States)

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance of quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
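    Once reference vectors have been derived (the paper uses a decision-tree-based SVM for that step), assigning each pixel to its best-matching structure is a simple projection in color space; a Python sketch with hypothetical reference colors:

        import numpy as np

        def classify_pixels(rgb_image, reference_vectors):
            # reference_vectors: e.g. {"nuclei": (60, 40, 120), "stroma": (200, 140, 180)} (hypothetical).
            names = list(reference_vectors)
            refs = np.array([reference_vectors[n] for n in names], dtype=np.float64)
            refs /= np.linalg.norm(refs, axis=1, keepdims=True)
            pix = rgb_image.reshape(-1, 3).astype(np.float64)
            pix /= (np.linalg.norm(pix, axis=1, keepdims=True) + 1e-12)
            scores = pix @ refs.T                       # cosine similarity to each reference vector
            labels = np.argmax(scores, axis=1).reshape(rgb_image.shape[:2])
            return names, labels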

  10. Sentinel lymph node mapping in minimally invasive surgery: Role of imaging with color-segmented fluorescence (CSF).

    Science.gov (United States)

    Lopez Labrousse, Maite I; Frumovitz, Michael; Guadalupe Patrono, M; Ramirez, Pedro T

    2017-09-01

    Sentinel lymph node mapping, alone or in combination with pelvic lymphadenectomy, is considered a standard approach in staging of patients with cervical or endometrial cancer [1-3]. The goal of this video is to demonstrate the use of indocyanine green (ICG) and color-segmented fluorescence when performing lymphatic mapping in patients with gynecologic malignancies. Injection of ICG is performed in two cervical sites using 1mL (0.5mL superficial and deep, respectively) at the 3 and 9 o'clock position. Sentinel lymph nodes are identified intraoperatively using the Pinpoint near-infrared imaging system (Novadaq, Ontario, CA). Color-segmented fluorescence is used to image different levels of ICG uptake demonstrating higher levels of perfusion. A color key on the side of the monitor shows the colors that coordinate with different levels of ICG uptake. Color-segmented fluorescence may help surgeons identify true sentinel nodes from fatty tissue that, although absorbing fluorescent dye, does not contain true nodal tissue. It is not intended to differentiate the primary sentinel node from secondary sentinel nodes. The key ranges from low levels of ICG uptake (gray) to the highest rate of ICG uptake (red). Bilateral sentinel lymph nodes are identified along the external iliac vessels using both standard and color-segmented fluorescence. No evidence of disease was noted after ultra-staging was performed in each of the sentinel nodes. Use of ICG in sentinel lymph node mapping allows for high bilateral detection rates. Color-segmented fluorescence may increase accuracy of sentinel lymph node identification over standard fluorescent imaging.

  11. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
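    The crosstalk-correction step can be sketched as a per-pixel 3x3 matrix inversion in Python; the mixing matrix below is a hypothetical calibration result, not the article's measured coefficients.

        import numpy as np

        # Hypothetical crosstalk matrix: element [i, j] is the response of camera
        # channel i (R, G, B) to flash j (red, green, blue), measured once by calibration.
        M = np.array([[0.92, 0.06, 0.02],
                      [0.08, 0.88, 0.05],
                      [0.03, 0.07, 0.90]])
        M_inv = np.linalg.inv(M)

        def separate_frames(field_rgb):
            # Recover the three flash-illuminated frames from one color video field
            # by undoing the measured channel mixing at every pixel.
            pix = field_rgb.reshape(-1, 3).astype(np.float64)
            frames = pix @ M_inv.T
            return [np.clip(frames[:, k], 0, 255).reshape(field_rgb.shape[:2]) for k in range(3)]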

  12. Color-flow Doppler imaging in suspected extremity venous thrombosis

    International Nuclear Information System (INIS)

    Foley, W.D.; Middleton, W.D.; Lawson, T.L.; Hinson, G.W.; Puller, D.R.

    1987-01-01

    Color-flow Doppler imaging (CFDI) (Quanatum, 5 and 7.5 MHz, linear array) has been performed on 23 extremities (nine positive for venous thrombosis, 14 negative) with venographic correlation. The CFDI criteria evaluated were venous color-flow respiratory variation, augmentation, compressibility, valve competence, and intraluminal echogenic filling defects. Both CFDI and venography were evaluated independently and prospectively. CFDI and venography agreed in all six cases of femoral vein thrombosis and in eight of nine cases of popliteal vein thrombosis. CFDI was negative in one instance of recanalized popliteal vein thrombosis. Recanalized femoral vein thrombosis was documented in three patients by CFDI when the vein was nonopacified on conventional venography. CFDI provides a rapid and accurate assessment of the femoropopliteal venous system and can distinguish an occluded from a recanalized thrombus. Initial experience with axillary-subclavian venous thrombosis has produced equally accurate results.

  13. Triple-color super-resolution imaging of live cells: resolving submicroscopic receptor organization in the plasma membrane.

    Science.gov (United States)

    Wilmes, Stephan; Staufenbiel, Markus; Lisse, Domenik; Richter, Christian P; Beutel, Oliver; Busch, Karin B; Hess, Samuel T; Piehler, Jacob

    2012-05-14

    In living color: efficient intracellular covalent labeling of proteins with a photoswitchable dye using the HaloTag for dSTORM super-resolution imaging in live cells is described. The dynamics of cellular nanostructures at the plasma membrane were monitored with a time resolution of a few seconds. In combination with dual-color FPALM imaging, submicroscopic receptor organization within the context of the membrane skeleton was resolved.

  14. Rotation Invariant Color Retrieval

    OpenAIRE

    Swapna Borde; Udhav Bhosle

    2013-01-01

    A new technique for image retrieval using color features extracted from images based on a Log Histogram is proposed. The proposed technique is compared with the global color histogram and the histogram of corners. It has been observed that the number of histogram bins used for retrieval is smaller for the proposed technique (Log Histogram) than for the Global Color Histogram and the Histogram of Corners. The experimental results on a database of 792 images with 11 classes indicate that the proposed method (L...
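    The abstract does not give the exact Log Histogram definition; one plausible reading, a color histogram with logarithmically spaced intensity bins compared by histogram intersection, is sketched below in Python as an assumption-laden illustration.

        import numpy as np

        def log_color_histogram(rgb, bins_per_channel=8):
            # Color histogram with logarithmically spaced intensity bins: fewer bins overall,
            # more resolution in the dark range (assumed interpretation of "Log Histogram").
            edges = np.logspace(0, np.log10(256), bins_per_channel + 1) - 1.0
            hist, _ = np.histogramdd(rgb.reshape(-1, 3).astype(np.float64), bins=[edges] * 3)
            h = hist.ravel()
            return h / (h.sum() + 1e-12)

        def similarity(h1, h2):
            # Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones.
            return float(np.minimum(h1, h2).sum())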

  15. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in gyrator transform domain. Initially, an appropriate reference image is constructed of significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. Three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has the enhanced security due to high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, it is the first report on embedding the color watermark into the grayscale host image which will be out of attacker's expectation. Simulation results are given to verify the feasibility and its superior performance in terms of noise and occlusion robustness.

  16. Memory for color reactivates color processing region.

    Science.gov (United States)

    Slotnick, Scott D

    2009-11-25

    Memory is thought to be constructive in nature, where features processed in different cortical regions are synthesized during retrieval. In an effort to support this constructive memory framework, the present functional magnetic resonance imaging study assessed whether memory for color reactivated color processing regions. During encoding, participants were presented with colored and gray abstract shapes. During retrieval, old and new shapes were presented in gray and participants responded 'old-colored', 'old-gray', or 'new'. Within color perception regions, color memory related activity was observed in the left fusiform gyrus, adjacent to the collateral sulcus. A retinotopic mapping analysis indicated this activity occurred within color processing region V8. The present feature specific evidence provides compelling support for a constructive view of memory.

  17. Decision-Based Marginal Total Variation Diffusion for Impulsive Noise Removal in Color Images

    Directory of Open Access Journals (Sweden)

    Hongyao Deng

    2017-01-01

    Full Text Available Impulsive noise removal for color images usually employs the vector median filter, the switching median filter, the total variation L1 method, and variants. These approaches, however, often introduce excessive smoothing and can result in extensive blurring of visual features, and thus are suitable only for images with low-density noise. A marginal method to reduce impulsive noise is proposed in this paper that overcomes this limitation. It is based on the following facts: (i) each channel in a color image is contaminated independently, and the contaminating components are independent and identically distributed; (ii) in a natural image the gradients of different components of a pixel are similar to one another. The method divides components into different categories based on their noise characteristics. If an image is corrupted by salt-and-pepper noise, the components are divided into corrupted and noise-free components; if the image is corrupted by random-valued impulses, the components are divided into corrupted, noise-free, and possibly corrupted components. Components falling into different categories are processed differently. If a component is corrupted, modified total variation diffusion is applied; if it is possibly corrupted, scaled total variation diffusion is applied; otherwise, the component is left unchanged. Simulation results demonstrate its effectiveness.
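    A simplified Python sketch of the per-channel, marginal idea for the salt-and-pepper case, using plain neighbor-averaging diffusion as a stand-in for the paper's modified total variation diffusion; the detection rule and iteration count are assumptions.

        import numpy as np

        def restore_salt_and_pepper(channel, n_iter=50, lo=0, hi=255):
            # Detect extreme-valued (salt-and-pepper) pixels in a single color channel and
            # replace them by diffusing values from their neighbors; clean pixels are untouched.
            img = channel.astype(np.float64)
            corrupted = (channel == lo) | (channel == hi)
            out = img.copy()
            out[corrupted] = np.mean(img[~corrupted]) if np.any(~corrupted) else 128.0
            for _ in range(n_iter):
                padded = np.pad(out, 1, mode='edge')
                neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                         padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
                out[corrupted] = neigh[corrupted]        # update only the corrupted pixels
            return np.clip(np.round(out), 0, 255).astype(np.uint8)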

  18. EVALUATION OF CHROMATICITY COORDINATES SHIFT FOR IMAGE DISPLAYED ON LIQUID CRYSTAL PANELS WITH VARIOUS PROPERTIES ON COLOR REPRODUCTION

    Directory of Open Access Journals (Sweden)

    I. O. Zharinov

    2016-03-01

    Full Text Available Subject of Research. We consider the problem of evaluating the chromaticity coordinate shift of images displayed on liquid crystal panels with various color reproduction properties. A mathematical model represents the color reproduction characteristics. The spread of the color characteristics of the screens has a statistical nature. Differences in color reproduction between screens are perceived by the observer as different colors and shades displayed on the same type of commercially available screen. Color differences are characterized by a numerical measure of color difference and can be compensated mathematically. Accounting for the statistical nature of the spread of the screens' color characteristics is particularly relevant to aviation instrumentation. Method. Evaluation of the chromaticity coordinate shift of the image is based on the application of Grassmann's laws of color mixing. The basic data for quantitative calculation of the shift are the profiles of two different liquid crystal panels, defined by matrices of scale factors for the primary color components (red, green, blue). The calculation is based on solving a system of equations and computing the color difference in the XY-plane; in general, the calculation can be performed in other color spaces such as UV or Lab. The statistical nature of the spread of the screens' color characteristics is accounted for in the proposed mathematical model by specifying interval values for the coordinates of the color gamut triangle vertices over the set of commercially available samples. Main Results. The research resulted in mathematical expressions allowing the chromaticity coordinates of an image displayed on various samples of liquid crystal screens to be recalculated. It is shown that the spread of the color characteristics of the screens follows a bivariate normal distribution law with accuracy sufficient for practice. The results of
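    The Grassmann-additivity calculation can be sketched in Python: with each panel profile given as a matrix of XYZ tristimulus values of its primaries, the displayed chromaticity of an RGB stimulus and its shift between two panel samples follow directly. Linear RGB and the example primaries are assumptions for illustration.

        import numpy as np

        def rgb_to_xy(rgb, primaries_xyz):
            # Chromaticity (x, y) of a linear RGB stimulus on a panel whose primaries are
            # given as a 3x3 matrix of XYZ tristimulus columns (Grassmann additivity).
            xyz = primaries_xyz @ np.asarray(rgb, dtype=np.float64)
            return xyz[:2] / xyz.sum()

        # Hypothetical primaries (XYZ columns for R, G, B) of two panel samples.
        panel_a = np.array([[0.41, 0.36, 0.18],
                            [0.21, 0.72, 0.07],
                            [0.02, 0.12, 0.95]])
        panel_b = panel_a * np.array([1.03, 0.97, 1.02])   # slight per-primary spread

        rgb = (0.8, 0.5, 0.3)
        print("chromaticity shift (dx, dy):", rgb_to_xy(rgb, panel_b) - rgb_to_xy(rgb, panel_a))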

  19. Physical evaluation of color and monochrome medical displays using an imaging colorimeter

    Science.gov (United States)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2013-03-01

    This paper presents an approach to the physical evaluation of color and monochrome medical-grade displays using an imaging colorimeter. The purpose of this study was to examine the influence of medical display type, monochrome or color, at the same maximum luminance setting, on diagnostic performance. The focus was on measurements of physical characteristics, including spatial resolution and noise performance, which we believed could affect clinical performance. Specifically, the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared at different digital driving levels (DDL) between two EIZO displays.
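
    A noise power spectrum of the kind compared here can be estimated from repeated flat-field captures by Fourier-transforming the detrended luminance patches; the sketch below shows that baseline estimator. It is a generic illustration under assumed inputs (a stack of luminance patches and a nominal pixel pitch), not the measurement pipeline used with the imaging colorimeter in this study.

```python
import numpy as np

def noise_power_spectrum(patches, pixel_pitch_mm=0.1):
    """Estimate a 2-D noise power spectrum from a stack of flat-field
    luminance patches captured at one digital driving level.

    patches : (n, H, W) array of luminance measurements of a uniform field.
    """
    n, H, W = patches.shape
    nps = np.zeros((H, W))
    for p in patches:
        residual = p - p.mean()                     # remove the mean (DC) level
        F = np.fft.fftshift(np.fft.fft2(residual))  # centre the zero frequency
        nps += np.abs(F) ** 2
    # normalise by the number of realisations and the sampled area
    return nps * (pixel_pitch_mm ** 2) / (n * H * W)

# usage (hypothetical data): nps = noise_power_spectrum(np.stack(flat_patches))
```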

  20. Color correction optimization with hue regularization

    Science.gov (United States)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from memory. Generally agreed-upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction in a digital color pipeline is to transform image data from a device-dependent color space to a target color space, usually through a color correction matrix, which in its most basic form is optimized by linear regression between the two sets of data in the two color spaces so as to minimize the Euclidean color error. Unfortunately, this method can result in objectionable distortions if the color error biases certain colors undesirably. In this paper, we propose a color correction optimization method designed for preferred color reproduction through hue regularization and present experimental results.
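
    The baseline fit described above, a 3x3 matrix obtained by least squares between matched patch sets, can be sketched as follows. The weighted variant shown alongside it is only a simple stand-in for the idea of protecting memory colors; the hue regularization proposed in the paper is more elaborate, and all names and parameters here are assumptions.

```python
import numpy as np

def fit_ccm(device_rgb, target_xyz):
    """Baseline 3x3 color correction matrix via unweighted least squares,
    i.e. minimizing the Euclidean color error between matched patch sets."""
    # device_rgb, target_xyz: (N, 3) arrays of corresponding patch values
    X, _, _, _ = np.linalg.lstsq(device_rgb, target_xyz, rcond=None)
    return X.T                       # target ~ M @ rgb for column vectors

def fit_ccm_weighted(device_rgb, target_xyz, weights):
    """Crude preference-aware variant: upweight memory-color patches
    (skin, grass, sky) so their errors dominate the regression."""
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    X, _, _, _ = np.linalg.lstsq(device_rgb * w, target_xyz * w, rcond=None)
    return X.T

def apply_ccm(M, rgb):
    """Apply the matrix to an (N, 3) or (H, W, 3) array of RGB values."""
    return rgb @ M.T
```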

  1. Detection of Blood Vessels in Color Fundus Images using a Local Radon Transform

    Directory of Open Access Journals (Sweden)

    Reza Pourreza

    2010-09-01

    Full Text Available Introduction: This paper addresses a method for automatic detection of blood vessels in color fundus images that utilizes two main tools: image partitioning and a local Radon transform. Material and Methods: The input images are first divided into overlapping windows, and the Radon transform is applied to each. The maximum of the Radon transform in each window corresponds to a probable sub-vessel. To verify a detected sub-vessel, this maximum is compared with a predefined threshold. Verified sub-vessels are reconstructed using the Radon transform information, and all detected and reconstructed sub-vessels are finally combined to form the final vessel tree. Results: The algorithm's performance was evaluated numerically by applying it to the 40 images of the DRIVE database, a standard retinal image database in which the vessels were extracted manually by two physicians; this database was used to test and compare the available and proposed algorithms for vessel detection in color fundus images. By comparing the output of the algorithm with the manual results, the TPR and FPR were calculated for each image, and the averages of the TPRs and FPRs were used to plot the ROC curve. Discussion and Conclusion: Comparison of the ROC curve of this algorithm with those of other algorithms demonstrated the high accuracy achieved. Besides the high accuracy, the integral-based nature of the Radon transform makes the algorithm robust against noise.
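
    The windowed Radon step can be sketched as below: slide overlapping windows over the (inverted) green channel, take the peak of the local Radon transform, and keep windows whose peak exceeds a threshold. The window size, step, angle sampling, and threshold are illustrative values, and the verification and sub-vessel reconstruction stages of the published method are omitted.

```python
import numpy as np
from skimage.transform import radon

def detect_subvessels(green_channel, win=16, step=8, threshold=200.0):
    """Slide overlapping windows over the fundus green channel and keep the
    windows whose peak local Radon response exceeds a fixed threshold."""
    img = green_channel.astype(float)
    img = img.max() - img                     # vessels are dark; invert them
    theta = np.arange(0.0, 180.0, 6.0)        # projection angles in degrees
    hits = []
    H, W = img.shape
    for r in range(0, H - win + 1, step):
        for c in range(0, W - win + 1, step):
            patch = img[r:r + win, c:c + win]
            patch = patch - patch.mean()      # suppress the background level
            sinogram = radon(patch, theta=theta, circle=False)
            peak = sinogram.max()
            if peak > threshold:              # window holds a probable sub-vessel
                _, k = np.unravel_index(sinogram.argmax(), sinogram.shape)
                hits.append((r, c, theta[k], peak))   # position and orientation
    return hits
```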

  2. San Gabriel Mountains, California, Radar image, color as height

    Science.gov (United States)

    2000-01-01

    This topographic radar image shows the relationship of the urban area of Pasadena, California, to the natural contours of the land. The image includes the alluvial plain on which Pasadena and the Jet Propulsion Laboratory sit, and the steep range of the San Gabriel Mountains. The mountain front and the arcuate valley running from upper left to the lower right are active fault zones, along which the mountains are rising. The chaparral-covered slopes above Pasadena are also a prime area for wildfires and mudslides. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities along the front of active mountain ranges. This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to white at the highest elevations. This image contains about 2300 meters (7500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the

  3. P1-9: Relationship between Color Shifts in Land's Two-Color Method and Higher- and Lower-Level Visual Information

    Directory of Open Access Journals (Sweden)

    Saki Iwaida

    2012-10-01

    Full Text Available Land's two-color method gives rise to apparent full-color perception, even though only two colors (e.g., red and gray) are used. Previous studies indicate that chromatic adaptation, color memory, and inductive effects contribute to the shifts of color perception from real to illusory colors (e.g., Kuriki, 2006, Vision Research, 46, 3055–3066). This paper investigates the relationship between the color shifts induced by Land images and the skewness of the luminance histogram. In Experiment 1, several Land images are created from a yellow ball, and the magnitude of the color shifts in the images is measured. The results of Experiment 1 show a significant correlation between the magnitude of the color shifts and skewness, suggesting that skewness is critical for the color shifts. In Experiment 2, we test the hypothesis that the color shifts depend on skewness alone; if so, the color shifts should be invariant even when the Land images are scrambled. However, the results of Experiment 2 demonstrate that scrambled Land images exhibit less intense color shifts, suggesting that the color shifts are determined by the object's overall shape or surface gloss, not just skewness. Taken together, we conclude that both low-level visual processes, such as those associated with luminance histogram skew, and high-level cognitive functions, such as object interpretation or the understanding of surface gloss, are involved in the color shift of Land images.
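
    The stimulus statistic at issue, the skewness of the luminance histogram, can be computed directly from an RGB image. The short sketch below uses Rec. 601 luminance weights, which is an assumption, since the abstract does not state how luminance was derived.

```python
import numpy as np
from scipy.stats import skew

def luminance_skewness(rgb_image):
    """Skewness of the luminance histogram of an RGB image, the stimulus
    statistic correlated with the color shift in Experiment 1."""
    weights = np.array([0.299, 0.587, 0.114])        # assumed Rec. 601 weights
    luminance = rgb_image[..., :3].astype(float) @ weights
    return skew(luminance.ravel())
```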

  4. Color preferences change after experience with liked/disliked colored objects.

    Science.gov (United States)

    Strauss, Eli D; Schloss, Karen B; Palmer, Stephen E

    2013-10-01

    How are color preferences formed, and can they be changed by affective experiences with correspondingly colored objects? We examined these questions by testing whether affectively polarized experiences with images of colored objects would cause changes in color preferences. Such changes are implied by the ecological valence theory (EVT), which posits that color preferences are determined by people's average affective responses to correspondingly colored objects (Palmer & Schloss, Proceedings of the National Academy of Sciences, 107, 8877-8882, 2010). Seeing images of strongly liked (and disliked) red and green objects, therefore, should lead to increased (and decreased) preferences for correspondingly colored red and green color patches. Experiment 1 showed that this crossover interaction did occur, but only if participants were required to evaluate their preferences for the colored objects when they saw them. Experiment 2 showed that these overall changes decreased substantially over a 24-h delay, but the degree to which the effect lasted for individuals covaried with the magnitude of the effects immediately after object exposure. Experiment 3 demonstrated a similar, but weaker, effect of affectively biased changes in color preferences when participants did not see, but only imagined, the colored objects. The overall pattern of results indicated that color preferences are not fixed, but rather are shaped by affective experiences with colored objects. Possible explanations for the observed changes in color preferences were considered in terms of associative learning through evaluative conditioning and/or priming of prior knowledge in memory.

  5. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of the test object surface, composed of blue-channel and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. The separated blue and red channel sub-images are then processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurement with a single high-speed camera, without sacrificing spatial resolution. Two real experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
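
    The channel-separation step lends itself to a simple linear model: each sensor channel records its own optical path plus a fraction of the other, so a measured 2x2 crosstalk matrix can be inverted per pixel. The sketch below illustrates that idea only; the matrix values, the layout of the input frame, and the function names are assumptions, not the correction published with the method.

```python
import numpy as np

# Hypothetical 2x2 crosstalk matrix, measured by imaging each optical path
# alone: rows = observed (red, blue) sensor channels, columns = true paths.
C = np.array([[0.92, 0.07],
              [0.05, 0.90]])
C_inv = np.linalg.inv(C)

def separate_views(color_frame):
    """Split a Bayer-interpolated color frame into the two stereo views.

    color_frame : (H, W, 3) array; channel 0 carries one optical path and
    channel 2 the other, each contaminated by crosstalk from its neighbour.
    """
    observed = np.stack([color_frame[..., 0], color_frame[..., 2]], axis=-1)
    corrected = observed @ C_inv.T          # per-pixel 2x2 correction
    red_view, blue_view = corrected[..., 0], corrected[..., 1]
    return red_view, blue_view
```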

  6. Multi-color phase imaging and sickle cell anemia (Conference Presentation)

    Science.gov (United States)

    Hosseini, Poorya; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2016-03-01

    Quantitative phase measurements at multiple wavelengths have created an opportunity for exploring new avenues in phase microscopy, such as enhancing imaging depth (1), measuring hemoglobin concentrations in erythrocytes (2), and, more recently, tomographic mapping of the refractive index of live cells (3). To this end, quantitative phase imaging has been demonstrated both at a few selected spectral points and with high spectral resolution (4,5). However, most of these techniques compromise imaging speed, field of view, or spectral resolution to perform interferometric measurements at multiple colors. In the specific application of quantitative phase to the study of blood diseases and red blood cells, current techniques lack the sensitivity required to quantify the biological properties of interest at the individual-cell level. Recently, we have set out to develop a stable quantitative interferometric microscope that allows such properties to be measured for red cells without compromising field of view or measurement speed. The feasibility of the approach will first be demonstrated by measuring dispersion curves of known solutions, followed by measurement of biological properties of red cells in sickle cell anemia. References: 1. Mann CJ, Bingham PR, Paquit VC, Tobin KW. Quantitative phase imaging by three-wavelength digital holography. Opt Express. 2008;16(13):9753-64. 2. Park Y, Yamauchi T, Choi W, Dasari R, Feld MS. Spectroscopic phase microscopy for quantifying hemoglobin concentrations in intact red blood cells. Opt Lett. 2009;34(23):3668-70. 3. Hosseini P, Sung Y, Choi Y, Lue N, Yaqoob Z, So P. Scanning color optical tomography (SCOT). Opt Express. 2015;23(15):19752-62. 4. Jung J-H, Jang J, Park Y. Spectro-refractometry of individual microscopic objects using swept-source quantitative phase imaging. Anal Chem. 2013;85(21):10519-25. 5. Rinehart M, Zhu Y, Wax A. Quantitative phase spectroscopy. Biomed Opt Express. 2012;3(5):958-65.

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kandahar mineral district in Afghanistan: Chapter Z in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Farah mineral district in Afghanistan: Chapter FF in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that

  9. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Zarkashan mineral district in Afghanistan: Chapter G in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  10. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Khanneshin mineral district in Afghanistan: Chapter A in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be

  11. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nalbandon mineral district in Afghanistan: Chapter L in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Balkhab mineral district in Afghanistan: Chapter B in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match

  13. Preferred skin color enhancement for photographic color reproduction

    Science.gov (United States)

    Zeng, Huanzhao; Luo, Ronnier

    2011-01-01

    Skin tones are the most important colors in the memory color category, and reproducing them pleasingly is an important factor in photographic color reproduction. Moving skin colors toward a preferred skin color center improves the preference of skin color reproduction. Several methods for morphing skin colors into a smaller preferred skin color region have been reported in the past. In this paper, a new approach is proposed to further improve skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through psychophysical experiments are applied for the color adjustment, with separate centers for dark, medium, and light skin so that each is adjusted appropriately. Skin colors are morphed toward their preferred color centers. Special processing is applied to avoid contrast loss in highlights, and a 3-D interpolation method is applied to fix a potential contouring problem and to improve color processing efficiency. A psychophysical experiment validates that the preferred skin color enhancement method effectively identifies skin colors, improves skin color preference, and does not objectionably affect preferred skin colors in the original images.
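
    The core weighting-and-morphing idea can be sketched as below: a soft ellipsoid membership gives each pixel a skin probability, which then scales a shift toward a preferred center. The chroma space, the ellipsoid parameters, and the preferred center are placeholder values for illustration, not those derived in the paper's psychophysical experiments.

```python
import numpy as np

# Illustrative ellipsoid skin model in CIELAB a*b* chroma: in practice the
# center and covariance would be fit to a labeled skin-tone data set.
SKIN_CENTER = np.array([18.0, 20.0])            # assumed (a*, b*) center
SKIN_COV_INV = np.linalg.inv(np.array([[60.0, 20.0],
                                       [20.0, 50.0]]))
PREFERRED_CENTER = np.array([16.0, 18.0])       # assumed preferred skin chroma

def skin_weight(ab):
    """Soft membership in the skin ellipsoid (1 at the center, ->0 outside)."""
    d = ab - SKIN_CENTER
    m2 = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)
    return np.exp(-0.5 * m2)

def enhance_skin(ab, strength=0.5):
    """Morph chroma toward the preferred center, weighted by skin probability."""
    w = skin_weight(ab)[..., None]
    return ab + strength * w * (PREFERRED_CENTER - ab)
```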

  14. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Aynak mineral district in Afghanistan: Chapter E in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kundalyan mineral district in Afghanistan: Chapter H in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Herat mineral district in Afghanistan: Chapter T in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Badakhshan mineral district in Afghanistan: Chapter F in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  18. Creating an Immersive Mars Experience Using Unity3D

    Science.gov (United States)

    Miles, Sarah

    2011-01-01

    Between the two Mars Exploration Rovers, Spirit and Opportunity, NASA has collected over 280,000 images while studying the Martian surface. This number will continue to grow, with Opportunity continuing to send images and with another rover, Curiosity, launching soon. Using data collected by and for these Mars rovers, I am contributing to the creation of virtual experiences that will expose the general public to Mars. These experiences not only work to increase public knowledge, but they attempt to do so in an engaging manner more conducive to knowledge retention by letting others view Mars through the rovers' eyes. My contributions include supporting image viewing (for example, allowing users to click on panoramic images of the Martian surface to access closer range photos) as well as enabling tagging of points of interest. By creating a more interactive way of viewing the information we have about Mars, we are not just educating the public about a neighboring planet. We are showing the importance of doing such research.

  19. Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Helmand mineral district in Afghanistan: Chapter O in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Takhar mineral district in Afghanistan: Chapter D in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  2. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kunduz mineral district in Afghanistan: Chapter S in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the

  3. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dudkash mineral district in Afghanistan: Chapter R in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Tourmaline mineral district in Afghanistan: Chapter J in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  5. Mars Hand Lens Imager (MAHLI) Efforts and Observations at the Rocknest Eolian Sand Shadow in Curiosity's Gale Crater Field Site

    Science.gov (United States)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Goetz, W.; Kah, L. C.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Beegle, L. W.; hide

    2013-01-01

    The Mars Science Laboratory (MSL) mission is focused on assessing the past or present habitability of Mars through interrogation of the environment and environmental records at the Curiosity rover field site in Gale crater. The MSL team has two methods available to collect, process, and deliver samples to the onboard analytical laboratories, the Chemistry and Mineralogy instrument (CheMin) and the Sample Analysis at Mars (SAM) instrument suite: one approach obtains samples by drilling into a rock, the other uses a scoop to collect loose regolith fines. Scooping was planned to be the first method performed on Mars because material could be readily scooped multiple times and used to remove any remaining, minute terrestrial contaminants from the sample processing system, the Collection and Handling for In-Situ Martian Rock Analysis (CHIMRA). Because of this cleaning effort, the ideal first material to be scooped would consist of fine to very fine sand, like the interior of the Serpent Dune studied by the Mars Exploration Rover (MER) Spirit team in 2004 [1]. The MSL team selected a linear eolian deposit in the lee of a group of cobbles they named Rocknest (Fig. 1) as likely to be similar to Serpent Dune; following the definitions in Chapter 13 of Bagnold [2], the deposit is termed a sand shadow. The scooping campaign occurred over approximately 6 weeks in October and November 2012. To support these activities, the Mars Hand Lens Imager (MAHLI) acquired images for engineering support/assessment and scientific inquiry.

  6. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    Science.gov (United States)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences several important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef, together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with varying intramuscular fat content and marbling were captured, and image analysis software was developed specifically to interpret these images. In particular, a segmentation algorithm (i.e., classification of the different substances: fat, muscle, and connective tissue) was optimized in order to obtain a proper classification and perform subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space and on the intrinsically fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the fuzzy c-means algorithm, with a genetic algorithm. The percentages of the various colors (i.e., substances) within the sample are then determined, and the number, size distribution, and spatial distribution of the extracted fat flecks are measured. The measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
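
    A plain fuzzy c-means clustering of pixels in RGB space, the core of the segmentation described above, can be sketched as follows. This is a minimal illustration: the genetic-algorithm initialisation used in the published method is omitted, and the parameter values are assumptions.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means over RGB pixels (fat / muscle / connective tissue).

    pixels : (N, 3) float array.  The published method also couples the fuzzy
    c-means algorithm with a genetic algorithm, which is omitted here.
    """
    rng = np.random.default_rng(seed)
    N = pixels.shape[0]
    U = rng.random((N, n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ pixels) / Um.sum(axis=0)[:, None]
        # squared distance of every pixel to every cluster centre
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))           # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), centers

# usage (hypothetical): labels, centers = fuzzy_c_means(img.reshape(-1, 3).astype(float))
```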

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni1 mineral district in Afghanistan: Chapter DD in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni2 mineral district in Afghanistan: Chapter EE in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image

  9. Automated segmentation of geographic atrophy of the retinal epithelium via random forests in AREDS color fundus images.

    Science.gov (United States)

    Feeny, Albert K; Tadarati, Mongkol; Freund, David E; Bressler, Neil M; Burlina, Philippe

    2015-10-01

    Age-related macular degeneration (AMD), left untreated, is the leading cause of vision loss in people older than 55. Severe central vision loss occurs in the advanced stage of the disease, characterized by either the ingrowth of choroidal neovascularization (CNV), termed the "wet" form, or by geographic atrophy (GA) of the retinal pigment epithelium (RPE) involving the center of the macula, termed the "dry" form. Tracking the change in GA area over time is important since it allows for the characterization of the effectiveness of GA treatments. Tracking GA evolution can be achieved by physicians performing manual delineation of the GA area on retinal fundus images. However, manual GA delineation is time-consuming and subject to inter- and intra-observer variability. We have developed a fully automated GA segmentation algorithm for color fundus images that uses a supervised machine learning approach employing a random forest classifier. This algorithm is developed and tested using a dataset of images from the NIH-sponsored Age-Related Eye Disease Study (AREDS). GA segmentation output was compared against a manual delineation by a retina specialist. Using 143 color fundus images from 55 different patient eyes, our algorithm achieved a PPV of 0.82±0.19 and an NPV of 0.95±0.07. This is the first study, to our knowledge, applying machine learning methods to GA segmentation on color fundus images and using AREDS imagery for testing. These preliminary results show promising evidence that machine learning methods may have utility in automated characterization of GA from color fundus images. Copyright © 2015 Elsevier Ltd. All rights reserved.
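
    The abstract specifies a random forest classifier over fundus-image pixels but not the feature set. A hedged sketch of pixel-wise classification with scikit-learn is given below; the choice of features, array names and training labels are assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.ensemble import RandomForestClassifier

        def pixel_features(rgb):
            """Per-pixel features: raw RGB plus a local-mean (texture-smoothed) copy of each channel."""
            smooth = np.stack([uniform_filter(rgb[..., c].astype(float), size=9) for c in range(3)], axis=-1)
            return np.concatenate([rgb.astype(float), smooth], axis=-1).reshape(-1, 6)

        # train_rgb: H x W x 3 fundus image; train_mask: H x W binary GA delineation by an expert
        # clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        # clf.fit(pixel_features(train_rgb), train_mask.ravel())
        # ga_map = clf.predict(pixel_features(test_rgb)).reshape(test_rgb.shape[:2])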

  10. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes (including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, and ESA's XMM-Newton) and planetary probes such as Cassini-Huygens and Mars Reconnaissance Orbiter.
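
    FITS Liberator itself is a graphical application, but the underlying idea (stretch raw FITS exposures and combine them into an RGB composite) can be sketched with astropy and matplotlib. The file names, band-to-channel mapping and asinh stretch factor below are assumptions for illustration only.

        import numpy as np
        from astropy.io import fits
        import matplotlib.pyplot as plt

        def stretch(data, a=0.1):
            """Normalize raw counts to [0, 1] and apply an asinh stretch for display."""
            data = data - np.nanmin(data)
            data = data / (np.nanmax(data) + 1e-9)
            return np.arcsinh(data / a) / np.arcsinh(1.0 / a)

        # three registered, equally sized exposures mapped to RGB channels (hypothetical files)
        r = stretch(fits.getdata("red_band.fits"))
        g = stretch(fits.getdata("green_band.fits"))
        b = stretch(fits.getdata("blue_band.fits"))
        plt.imsave("color_composite.png", np.clip(np.dstack([r, g, b]), 0, 1))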

  11. Preliminary Geological Map of the Peace Vallis Fan Integrated with In Situ Mosaics From the Curiosity Rover, Gale Crater, Mars

    Science.gov (United States)

    Sumner, D. Y.; Palucis, M.; Dietrich, B.; Calef, F.; Stack, K. M.; Ehlmann, B.; Bridges, J.; Dromart, J.; Eigenbrode, J.; Farmer, J.

    2013-01-01

    A geomorphically defined alluvial fan extends from Peace Vallis on the NW wall of Gale Crater, Mars into the Mars Science Laboratory (MSL) Curiosity rover landing ellipse. Prior to landing, the MSL team mapped the ellipse and surrounding areas, including the Peace Vallis fan. Map relationships suggest that bedded rocks east of the landing site are likely associated with the fan, which led to the decision to send Curiosity east. Curiosity's mast camera (Mastcam) color images are being used to refine local map relationships. Results from regional mapping and the first 100 sols of the mission demonstrate that the area has a rich geological history. Understanding this history will be critical for assessing ancient habitability and potential organic matter preservation at Gale Crater.

  12. Mars for Earthlings: an analog approach to Mars in undergraduate education.

    Science.gov (United States)

    Chan, Marjorie; Kahmann-Robinson, Julia

    2014-01-01

    Mars for Earthlings (MFE) is a terrestrial Earth analog pedagogical approach to teaching undergraduate geology, planetary science, and astrobiology. MFE utilizes Earth analogs to teach Mars planetary concepts, with a foundational backbone in Earth science principles. The field of planetary science is rapidly changing with new technologies and higher-resolution data sets. Thus, it is increasingly important to understand geological concepts and processes for interpreting Mars data. The MFE curriculum is topically driven to facilitate easy integration of content into new or existing courses. The Earth-Mars systems approach explores planetary origins, Mars missions, rocks and minerals, active driving forces/tectonics, surface sculpting processes, astrobiology, future explorations, and hot topics in an inquiry-driven environment. The curriculum relies heavily on multimedia resources, on software programs such as Google Mars and JMARS, and on NASA mission data such as THEMIS, HiRISE, CRISM, and rover images. Two years of MFE class evaluation data suggest that science literacy and general interest in Mars geology and astrobiology topics increased after participation in the MFE curriculum. Students also used newly developed skills to create a Mars mission team presentation. The MFE curriculum, learning modules, and resources are available online at http://serc.carleton.edu/marsforearthlings/index.html.

  13. Physics and psychophysics of color reproduction

    Science.gov (United States)

    Giorgianni, Edward J.

    1991-08-01

    The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.

  14. Color reproduction system based on color appearance model and gamut mapping

    Science.gov (United States)

    Cheng, Fang-Hsuan; Yang, Chih-Yuan

    2000-06-01

    With the progress of computing, peripherals such as color monitors and printers are widely used to generate color images. However, colors reproduced across different media are often perceived differently. The main influencing factors are device calibration and characterization, viewing conditions, device gamut, and human psychology. In this thesis, a color reproduction system based on a color appearance model and gamut mapping is proposed. It consists of four parts: device characterization, a color management technique, a color appearance model, and gamut mapping.

  15. MASS MOVEMENTS' DETECTION IN HIRISE IMAGES OF THE NORTH POLE OF MARS

    Directory of Open Access Journals (Sweden)

    L. Fanara

    2016-06-01

    We are investigating change detection techniques to automatically detect mass movements at the steep north polar scarps of Mars, in order to improve our understanding of these dynamic processes. Here we focus specifically on movements of blocks. The precise detection of such small changes requires an accurate co-registration of the images, which is achieved by ortho-rectifying them using High Resolution Imaging Science Experiment (HiRISE) Digital Terrain Models (DTMs). Moreover, we deal with the challenge of deriving the true shape of the moved blocks. In a next step, these results are combined with findings based on HiRISE DTMs from different points in time in order to estimate the volume of mass movements.
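
    The authors co-register images by ortho-rectification onto HiRISE DTMs, which requires the full photogrammetric machinery and cannot be reproduced here. As a generic stand-in for the co-registration and differencing idea, the sketch below aligns two image cutouts with feature matching in OpenCV and flags large differences; file names and thresholds are assumptions.

        import cv2
        import numpy as np

        # two grayscale cutouts of the same scarp from different times (hypothetical files)
        before = cv2.imread("scarp_t0.png", cv2.IMREAD_GRAYSCALE)
        after = cv2.imread("scarp_t1.png", cv2.IMREAD_GRAYSCALE)

        # match ORB keypoints and estimate a homography mapping 'after' onto 'before'
        orb = cv2.ORB_create(4000)
        k1, d1 = orb.detectAndCompute(before, None)
        k2, d2 = orb.detectAndCompute(after, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        after_reg = cv2.warpPerspective(after, H, before.shape[::-1])

        # crude change map: large absolute differences flag candidate block movements
        change = cv2.absdiff(before, after_reg)
        _, moved = cv2.threshold(change, 40, 255, cv2.THRESH_BINARY)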

  16. Hydrovolcanic features on Mars: Preliminary observations from the first Mars year of HiRISE imaging

    Science.gov (United States)

    Keszthelyi, L.P.; Jaeger, W.L.; Dundas, C.M.; Martinez-Alonso, S.; McEwen, A.S.; Milazzo, M.P.

    2010-01-01

    We provide an overview of features indicative of the interaction between water and lava and/or magma on Mars as seen by the High Resolution Imaging Science Experiment (HiRISE) camera during the Primary Science Phase of the Mars Reconnaissance Orbiter (MRO) mission. The ability to confidently resolve meter-scale features from orbit has been extremely useful in the study of the most pristine examples. In particular, HiRISE has allowed the documentation of previously undescribed features associated with phreatovolcanic cones (formed by the interaction of lava and groundwater) on rapidly emplaced flood lavas. These include "moats" and "wakes" that indicate that the lava crust was thin and mobile, respectively [Jaeger, W.L., Keszthelyi, L.P., McEwen, A.S., Dundas, C.M., Russel, P.S., 2007. Science 317, 1709-1711]. HiRISE has also discovered entablature-style jointing in lavas that is indicative of water-cooling [Milazzo, M.P., Keszthelyi, L.P., Jaeger, W.L., Rosiek, M., Mattson, S., Verba, C., Beyer, R.A., Geissler, P.E., McEwen, A.S., and the HiRISE Team, 2009. Geology 37, 171-174]. Other observations strongly support the idea of extensive volcanic mudflows (lahars). Evidence for other forms of hydrovolcanism, including glaciovolcanic interactions, is more equivocal. This is largely because most older and high-latitude terrains have been extensively modified, masking any earlier 1-10 m scale features. Much like terrestrial fieldwork, the prerequisite for making full use of HiRISE's capabilities is finding good outcrops.

  17. Color capable sub-pixel resolving optofluidic microscope and its application to blood cell imaging for malaria diagnosis.

    Directory of Open Access Journals (Sweden)

    Seung Ah Lee

    Miniaturization of imaging systems can significantly benefit clinical diagnosis in challenging environments, where access to physicians and good equipment can be limited. The sub-pixel resolving optofluidic microscope (SROFM) offers high-resolution imaging in the form of an on-chip device, through the combination of microfluidics and inexpensive CMOS image sensors. In this work, we report on the implementation of color SROFM prototypes with a demonstrated optical resolution of 0.66 µm at their highest acuity. We applied the prototypes to perform color imaging of red blood cells (RBCs) infected with Plasmodium falciparum, a particularly harmful type of malaria parasite and one of the major causes of death in the developing world.

  18. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    Science.gov (United States)

    Hu, Renyu

    2016-01-01

    Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, thus indicating its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10⁻³ bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  19. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides

    Directory of Open Access Journals (Sweden)

    Mark D Zarella

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image
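
    The optimized reference vectors described here are learned per structure and are not published in the abstract. A related, standard starting point for separating H&E stain contributions in color space is color deconvolution, sketched below with scikit-image; the file name and threshold are illustrative assumptions, not the authors' method.

        import numpy as np
        from skimage import io
        from skimage.color import rgb2hed

        # H x W x 3 image of an H&E-stained slide tile (hypothetical file)
        rgb = io.imread("he_tile.png")[..., :3]

        # separate hematoxylin (nuclei), eosin (cytoplasm/stroma) and residual channels
        hed = rgb2hed(rgb)
        hematoxylin, eosin = hed[..., 0], hed[..., 1]

        # a crude nuclei mask from the hematoxylin channel (threshold chosen arbitrarily)
        nuclei = hematoxylin > np.percentile(hematoxylin, 90)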

  20. FUNCTIONALITY ASSESSMENT OF ALGORITHMS FOR THE COLORING OF IMAGES IN TERMS OF INCREASING RADIOMETRIC VALUES OF AERIAL PHOTOGRAPHS ARCHIVES

    Directory of Open Access Journals (Sweden)

    Ewiak Ireneusz

    2016-12-01

    Available on the commercial market are a number of algorithms that enable assigning suitable colors to the pixels of a monochrome digital image according to a strictly defined schedule. These algorithms have recently been used by professional film studios involved in the coloring of archival productions. This article provides an overview of the functionality of coloring algorithms in terms of their use to improve the interpretation quality of historical, black-and-white aerial photographs. The analysis covered intuitive programs (Recolored) as well as more advanced ones (Adobe After Effects, DaVinci Resolve). The use of their full functionality was limited by the very large information capacity of aerial photograph images. Black-and-white historical aerial photographs, whose interpretation quality in many cases does not meet the criteria posed on photogrammetric developments, require an increase in their readability. A solution in this regard may be the process of coloring the images. The authors of this article conducted studies aimed at determining to what extent the tested coloring algorithms enable automatic detection of land cover elements on historical aerial photographs and provide colors close to natural. The studies used archival black-and-white aerial photographs of the western part of the Warsaw district, made available by the Main Centre of Geodetic and Cartographic Documentation; the selection of this area was associated with the presence of various elements of land cover, such as water, forests, crops, exposed soils and also anthropogenic objects. The analysis of the different algorithms included the format and size of the image, the degree of automation of the process, the degree of compliance of the result, and the processing time. The accuracy of the coloring process was different for each class of objects mapped on the photograph. The main limitation of the coloring process created shadows of anthropogenic objects

  1. Visible Wavelength Color Filters Using Dielectric Subwavelength Gratings for Backside-Illuminated CMOS Image Sensor Technologies.

    Science.gov (United States)

    Horie, Yu; Han, Seunghoon; Lee, Jeong-Yub; Kim, Jaekwan; Kim, Yongsung; Arbabi, Amir; Shin, Changgyun; Shi, Lilong; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Lee, Hong-Seok; Hwang, Sungwoo; Faraon, Andrei

    2017-05-10

    We report transmissive color filters based on subwavelength dielectric gratings that can replace the conventional dye-based color filters used in backside-illuminated CMOS image sensor (BSI CIS) technologies. The filters are patterned in an 80-nm-thick polysilicon film on a 115-nm-thick SiO2 spacer layer. They are optimized for operating at the primary RGB colors, exhibit peak transmittance of 60-80%, and have an almost insensitive response over a ±20° angular range. This technology enables shrinking of the pixel sizes down to near a micrometer.

  2. Colored tracks of heavy ion particles recorded on photographic color film

    International Nuclear Information System (INIS)

    Kuge, K.; Yasuda, N.; Kumagai, H.; Aoki, N.; Hasegawa, A.

    2002-01-01

    A new method to obtain three-dimensional information on nuclear tracks was developed using color photography. Commercial color films were irradiated with ion beams and color-developed. The ion tracks were represented as color images in which different depths were indicated by different colors, and the three-dimensional information was obtained from the color changes. Details of this method are reported, and its advantages and limitations are discussed in comparison with a conventional method using a nuclear emulsion.

  3. Determination of connected components in the analysis of homogeneous and detail zones in color images

    Directory of Open Access Journals (Sweden)

    Cristina Pérez-Benito

    2018-02-01

    A model based on local graphs to classify pixels coming from flat or detail regions of an image is presented. For each pixel a local graph is defined. Its structure will depend on the similarity between neighbouring pixels. Its features allow us to classify each image pixel as belonging to one type of region or the other. This classification is an essential pre-processing technique for many Computer Vision tools, such as smoothing or sharpening of digital color images.
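
    The abstract only outlines the local-graph construction (edges to neighbours that are sufficiently similar). A toy version of that idea, classifying a pixel as belonging to a flat region when most of its 8 neighbours lie within a color threshold, is sketched below; the threshold, the value range and the border handling are assumptions.

        import numpy as np

        def flat_or_detail(rgb, threshold=0.06):
            """Boolean map, True where a pixel's 8-neighbourhood is colour-homogeneous (values in [0, 1])."""
            img = rgb.astype(float)
            h, w, _ = img.shape
            similar = np.zeros((h, w), dtype=int)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                    dist = np.linalg.norm(img - shifted, axis=-1)
                    similar += (dist < threshold).astype(int)   # one graph edge per similar neighbour
            # np.roll wraps at the borders; a real implementation would handle them explicitly
            return similar >= 6            # mostly-similar neighbourhood -> flat (homogeneous) region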

  4. 'TISUCROMA': A Software for Color Processing of Biological Tissue's Images

    International Nuclear Information System (INIS)

    Arista Romeu, Eduardo J.; La Rosa Vazquez, Jose Manuel de; Valor, Alma; Stolik, Suren

    2016-01-01

    In this work, a software tool intended to plot and analyze RGB histograms of digital images from normal and abnormal regions of biological tissue is presented. The RGB histograms obtained from each zone can be used to show the image in only one color or in a mixture of some of them. The software was developed in LabVIEW to process the images on a laptop. Some medical application examples are shown. (Author)
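
    The LabVIEW implementation is not included in this record; a minimal Python equivalent of plotting the RGB histograms of a selected tissue region might look like the sketch below, where the file name and region coordinates are assumptions.

        import numpy as np
        import matplotlib.pyplot as plt

        img = plt.imread("tissue.png")           # H x W x 3 (or 4) array with values in [0, 1]
        roi = img[100:200, 150:250, :3]          # hypothetical rectangular region of interest

        for channel, color in enumerate(("red", "green", "blue")):
            hist, edges = np.histogram(roi[..., channel], bins=64, range=(0, 1))
            plt.plot(edges[:-1], hist, color=color, label=color)
        plt.xlabel("intensity")
        plt.ylabel("pixel count")
        plt.legend()
        plt.show()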

  5. Fluoromodule-based reporter/probes designed for in vivo fluorescence imaging

    Science.gov (United States)

    Zhang, Ming; Chakraborty, Subhasish K.; Sampath, Padma; Rojas, Juan J.; Hou, Weizhou; Saurabh, Saumya; Thorne, Steve H.; Bruchez, Marcel P.; Waggoner, Alan S.

    2015-01-01

    Optical imaging of whole, living animals has proven to be a powerful tool in multiple areas of preclinical research and has allowed noninvasive monitoring of immune responses, tumor and pathogen growth, and treatment responses in longitudinal studies. However, fluorescence-based studies in animals are challenging because tissue absorbs and autofluoresces strongly in the visible light spectrum. These optical properties drive development and use of fluorescent labels that absorb and emit at longer wavelengths. Here, we present a far-red absorbing fluoromodule–based reporter/probe system and show that this system can be used for imaging in living mice. The probe we developed is a fluorogenic dye called SC1 that is dark in solution but highly fluorescent when bound to its cognate reporter, Mars1. The reporter/probe complex, or fluoromodule, produced peak emission near 730 nm. Mars1 was able to bind a variety of structurally similar probes that differ in color and membrane permeability. We demonstrated that a tool kit of multiple probes can be used to label extracellular and intracellular reporter–tagged receptor pools with 2 colors. Imaging studies may benefit from this far-red excited reporter/probe system, which features tight coupling between probe fluorescence and reporter binding and offers the option of using an expandable family of fluorogenic probes with a single reporter gene. PMID:26348895

  6. Road Extraction and Car Detection from Aerial Image Using Intensity and Color

    Directory of Open Access Journals (Sweden)

    Vahid Ghods

    2011-07-01

    In this paper a new automatic approach to road extraction from aerial images is proposed. The initialization strategies are based on intensity, color, and the Hough transform. After road element extraction, chain codes are calculated. In the last step, cars on the roads are detected using their shadows. We implemented our method on 25 images from the "Google Earth" database. The experiments show an increase in both the completeness and the quality indexes for the extracted roads.
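
    The exact initialization and the shadow-based car detector are not given in the abstract. The Hough-transform part of such a pipeline can be sketched with OpenCV as below; the file name and all thresholds are assumptions.

        import cv2
        import numpy as np

        img = cv2.imread("aerial.png")                       # hypothetical aerial image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # edge map followed by a probabilistic Hough transform to pick up straight road segments
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=100, maxLineGap=10)

        overlay = img.copy()
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(overlay, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.imwrite("road_candidates.png", overlay)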

  7. RGB color coded images in scanning electron microscopy of biological surfaces

    Czech Academy of Sciences Publication Activity Database

    Kofroňová, Olga; Benada, Oldřich

    2017-01-01

    Vol. 61, No. 3 (2017), pp. 349-352, ISSN 0001-723X. R&D Projects: GA MŠk(CZ) LO1509; GA ČR(CZ) GA16-20229S. Institutional support: RVO:61388971. Keywords: Biological surfaces * Color images * Scanning electron microscopy. Subject RIV: EE - Microbiology, Virology. OECD field: Microbiology. Impact factor: 0.673, year: 2016

  8. Lunar and Planetary Science XXXV: Mars: Remote Sensing and Terrestrial Analogs

    Science.gov (United States)

    2004-01-01

    The session "Mars: Remote Sensing and Terrestrial Analogs" included the following: Physical Meaning of the Hapke Parameter for Macroscopic Roughness: Experimental Determination for Planetary Regolith Surface Analogs and Numerical Approach; Near-Infrared Spectra of Martian Pyroxene Separates: First Results from Mars Spectroscopy Consortium; Anomalous Spectra of High-Ca Pyroxenes: Correlation Between Ir and Mössbauer Patterns; THEMIS-IR Emissivity Spectrum of a Large Dark Streak near Olympus Mons; Geomorphologic/Thermophysical Mapping of the Athabasca Region, Mars, Using THEMIS Infrared Imaging; Mars Thermal Inertia from THEMIS Data; Multispectral Analysis Methods for Mapping Aqueous Mineral Deposits in Proposed Paleolake Basins on Mars Using THEMIS Data; Joint Analysis of Mars Odyssey THEMIS Visible and Infrared Images: A Magic Airbrush for Qualitative and Quantitative Morphology; Analysis of Mars Thermal Emission Spectrometer Data Using Large Mineral Reference Libraries; Negative Abundance: A Problem in Compositional Modeling of Hyperspectral Images; Mars-LAB: First Remote Sensing Data of Mineralogy Exposed at Small Mars-Analog Craters, Nevada Test Site; A Tool for the 2003 Rover Mini-TES: Downwelling Radiance Compensation Using Integrated Line-Sight Sky Measurements; Learning About Mars Geology Using Thermal Infrared Spectral Imaging: Orbiter and Rover Perspectives; Classifying Terrestrial Volcanic Alteration Processes and Defining Alteration Processes they Represent on Mars; Cemented Volcanic Soils, Martian Spectra and Implications for the Martian Climate; Palagonitic Mars: A Basalt Centric View of Surface Composition and Aqueous Alteration; Combining a Non Linear Unmixing Model and the Tetracorder Algorithm: Application to the ISM Dataset; Spectral Reflectance Properties of Some Basaltic Weathering Products; Morphometric LIDAR Analysis of Amboy Crater, California: Application to MOLA Analysis of Analog Features on Mars; Airborne Radar Study of Soil Moisture at

  9. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: One year on with a focus on auto-DTM, auto-coregistration and citizen science.

    Science.gov (United States)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Yershov, Vladimir; Gwinner, Klaus; van Gasselt, Stephan; Walter, Sebastian; Ivanov, Anton; Morley, Jeremy; Sprinks, James; Houghton, Robert; Bamford, Stephen; Kim, Jung-Rack

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10 cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s and to examine time-varying changes (such as impact craters, RSLs, CO2 geysers, gullies, boulder movements and a host of ice-related phenomena). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images), with 98% coverage at ≤100 m and more than 70% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25 m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. Recently, [Gwinner et al., 2015] have shown the ability to generate mosaicked DTM and BRDF-corrected surface reflectance maps. NASA began imaging the surface of Mars with flybys in the 1960s; the first orbital images at ≤100 m came from the Viking Orbiters in the late 1970s. The most recent orbiter, NASA's MRO, began imaging in November 2006 and has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈25 cm) and ≈5% from CTX (≈6 m) in stereo. Unfortunately, for most of these NASA images, especially those from MGS, MO, VO and HiRISE, the georeferencing accuracy is often worse than that of the Mars reference data from HRSC. This reduces their value for analysing changes in time

  10. Colors in mind: a novel paradigm to investigate pure color imagery.

    Science.gov (United States)

    Wantz, Andrea L; Borst, Grégoire; Mast, Fred W; Lobmaier, Janek S

    2015-07-01

    Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., "Is a sunflower darker yellow than a lemon"?). Although this approach is widely used in patient studies, differences in the ability to perform such color comparisons might simply reflect participants' general knowledge of object colors rather than their ability to generate accurate visual mental images of the colors of the objects. The aim of the present study was to design a new color imagery paradigm. Participants were asked to visualize a color for 3 s and then to determine a visually presented color by pressing 1 of 6 keys. We reasoned that participants would react faster when the imagined and perceived colors were congruent than when they were incongruent. In Experiment 1, participants were slower in incongruent than congruent trials but only when they were instructed to visualize the colors. The results in Experiment 2 demonstrate that the congruency effect reported in Experiment 1 cannot be attributed to verbalization of the color that had to be visualized. Finally, in Experiment 3, the congruency effect evoked by mental imagery correlated with performance in a perceptual version of the task. We discuss these findings with respect to the mechanisms that underlie mental imagery and patients suffering from color imagery deficits. (c) 2015 APA, all rights reserved.

  11. Shaded Relief and Radar Image with Color as Height, Madrid, Spain

    Science.gov (United States)

    2002-01-01

    The white, mottled area in the right-center of this image from NASA's Shuttle Radar Topography Mission (SRTM) is Madrid, the capital of Spain. Located on the Meseta Central, a vast plateau covering about 40 percent of the country, this city of 3 million is very near the exact geographic center of the Iberian Peninsula. The Meseta is rimmed by mountains and slopes gently to the west and to the series of rivers that form the boundary with Portugal. The plateau is mostly covered with dry grasslands, olive groves and forested hills.Madrid is situated in the middle of the Meseta, and at an elevation of 646 meters (2,119 feet) above sea level is the highest capital city in Europe. To the northwest of Madrid, and visible in the upper left of the image, is the Sistema Central mountain chain that forms the 'dorsal spine' of the Meseta and divides it into northern and southern subregions. Rising to about 2,500 meters (8,200 feet), these mountains display some glacial features and are snow-capped for most of the year. Offering almost year-round winter sports, the mountains are also important to the climate of Madrid.Three visualization methods were combined to produce this image: shading and color coding of topographic height and radar image intensity. The shade image was derived by computing topographic slope in the northwest-southeast direction. North-facing slopes appear bright and south-facing slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and brown to white at the highest elevations. The shade image was combined with the radar intensity image in the flat areas.Elevation data used in this image was acquired by the SRTM aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Panjsher Valley mineral district in Afghanistan: Chapter M in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from

  13. SU-F-T-427: Utilization and Evaluation of Diagnostic CT Imaging with MAR Technique for Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Xu, M; Foster, R; Parks, H; Pankuch, M

    2016-01-01

    Purpose: The objective was to utilize and evaluate a diagnostic CT MAR technique for radiation therapy treatment planning. Methods: A Toshiba diagnostic CT acquisition with the SEMAR (single-energy MAR) algorithm was performed to provide metal artifact reduction (MAR) for patient treatment planning. CT imaging datasets with and without SEMAR were taken on a Catphan phantom. Two sets of CT numbers were calibrated with the relative electron densities (RED). A tissue characterization phantom with various Gammex tissue-simulating material rods was used to establish the relationship between known REDs and corresponding CT numbers. A GE CT-sim acquisition was taken on the Catphan for comparison. A patient with bilateral hip arthroplasty was scanned in the radiotherapy CT-sim and the diagnostic SEMAR CT on a flat panel. The derived SEMAR images were used as the primary CT dataset to create contours for the target and critical structures, and for planning. A deformable registration was performed with VelocityAI to track voxel changes between SEMAR and CT-sim images. The SEMAR CT images, with minimal artifacts and high geometrical and spatial integrity, were employed for a treatment plan. Treatment plans were evaluated based on deformable registration of the SEMAR-CT and CT-sim datasets with assigned CT numbers in the metal artifact regions in the Eclipse v11 TPS. Results: The RED and CT-number relationships were consistent for the datasets from the CT-sim and the CTs with and without SEMAR. SEMAR datasets with high image quality were used for PTV and organ delineation in the treatment planning process. For dose distribution to the PTV through the DVH analysis, the plan using CT-sim with the assigned CT numbers showed good agreement with those on the deformable SEMAR CT. Conclusion: A diagnostic CT with a MAR algorithm can be utilized for radiotherapy treatment planning with CT numbers calibrated to the RED. Treatment planning comparison and DVH analysis show good agreement in the PTV and critical organs between

  14. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Haji-Gak mineral district in Afghanistan: Chapter C in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kharnak-Kanjar mineral district in Afghanistan: Chapter K in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dusar-Shaida mineral district in Afghanistan: Chapter I in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the

  17. First Grinding of a Rock on Mars

    Science.gov (United States)

    2004-01-01

    The round, shallow depression in this image resulted from history's first grinding of a rock on Mars. The rock abrasion tool on NASA's Spirit rover ground off the surface of a patch 45.5 millimeters (1.8 inches) in diameter on a rock called Adirondack during Spirit's 34th sol on Mars, Feb. 6, 2004. The hole is 2.65 millimeters (0.1 inch) deep, exposing fresh interior material of the rock for close inspection with the rover's microscopic imager and two spectrometers on the robotic arm. This image was taken by Spirit's panoramic camera, providing a quick visual check of the success of the grinding. The rock abrasion tools on both Mars Exploration Rovers were supplied by Honeybee Robotics, New York, N.Y.

  18. Assessment of color parameters of composite resin shade guides using digital imaging versus colorimeter.

    Science.gov (United States)

    Yamanel, Kivanc; Caglar, Alper; Özcan, Mutlu; Gulsah, Kamran; Bagis, Bora

    2010-12-01

    This study evaluated the color parameters of resin composite shade guides determined using a colorimeter and digital imaging method. Four composite shade guides, namely: two nanohybrid (Grandio [Voco GmbH, Cuxhaven, Germany]; Premise [KerrHawe SA, Bioggio, Switzerland]) and two hybrid (Charisma [Heraeus Kulzer, GmbH & Co. KG, Hanau, Germany]; Filtek Z250 [3M ESPE, Seefeld, Germany]) were evaluated. Ten shade tabs were selected (A1, A2, A3, A3,5, A4, B1, B2, B3, C2, C3) from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc., Kyoto, Japan). The data were analyzed using two-way analysis of variance and Bonferroni post hoc test. Overall, the mean ΔE values from different composite pairs demonstrated statistically significant differences when evaluated with the colorimeter (p 6.8). For all shade pairs evaluated, the most significant shade mismatches were obtained between Grandio-Filtek Z250 (p = 0.021) and Filtek Z250-Premise (p = 0.01) regarding ΔE mean values, whereas the best shade match was between Grandio-Charisma (p = 0.255) regardless of the measurement method. The best color match (mean ΔE values) was recorded for A1, A2, and A3 shade pairs in both methods. When proper object-camera distance, digital camera settings, and suitable illumination conditions are provided, digital imaging method could be used in the assessment of color parameters. Interchanging use of shade guides from different composite systems should be avoided during color selection. © 2010, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2010, WILEY PERIODICALS, INC.
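
    The comparisons above are expressed as CIELAB ΔE values. For reference, a minimal helper for the basic CIE76 color difference is shown below (the study may have used a different ΔE variant; the example values are made up).

        import math

        def delta_e_cie76(lab1, lab2):
            """Euclidean distance between two CIELAB colors given as (L*, a*, b*) tuples."""
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

        # e.g. the same nominal shade measured on two different guides (values are invented)
        print(delta_e_cie76((72.1, 1.5, 18.0), (69.8, 2.2, 21.4)))   # about 4.2, near the visual threshold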

  19. Restoration of color images degraded by space-variant motion blur

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Flusser, Jan

    2007-01-01

    Vol. 2007, No. 4673 (2007), pp. 450-457, ISSN 0302-9743. [Computer Analysis of Images and Patterns, Vienna, 27.08.2007-29.08.2007.] R&D Projects: GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: deblurring * space-variant restoration * motion blur * color. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 0.402, year: 2005. http://dx.doi.org/10.1007/978-3-540-74272-2_56

  20. Voxel-based model construction from colored tomographic images

    International Nuclear Information System (INIS)

    Loureiro, Eduardo Cesar de Miranda

    2002-07-01

    This work presents a new approach to the construction of voxel-based phantoms, implemented to simplify the segmentation of organs and tissues and to reduce the time spent on this procedure. The segmentation is performed by painting tomographic images, assigning a different color to each organ or tissue. A voxel-based head and neck phantom was built using this new approach. The way the data are stored increases the performance of the radiation transport code. The program that calculates the radiation transport also works with image files. This capability allows image reconstructions showing isodose areas from several points of view, increasing the information available to the user. Virtual X-ray photographs can also be obtained, allowing studies aimed at optimizing radiographic techniques while simultaneously assessing the doses to organs and tissues. The accuracy of the program presented here, called MCvoxEL, which implements this new approach, was tested by comparison with results from two modern and well-supported Monte Carlo codes. Dose conversion factors for parallel X-ray exposure were also calculated. (author)
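
    The MCvoxEL software itself is not part of this record, but the core idea of turning painted slice colors into organ labels for a voxel phantom can be sketched in a few lines of Python; the color-to-organ table, file names and slice count below are made-up examples.

        import numpy as np
        import matplotlib.pyplot as plt

        # hypothetical mapping from painted RGB colors to organ/tissue identifiers
        COLOR_TO_ORGAN = {
            (255, 0, 0): 1,     # e.g. brain
            (0, 255, 0): 2,     # e.g. skull
            (0, 0, 255): 3,     # e.g. soft tissue
        }

        def slice_to_labels(path):
            """Convert one painted tomographic slice into a 2-D array of organ IDs (0 = background)."""
            rgb = (plt.imread(path)[..., :3] * 255).round().astype(np.uint8)
            labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
            for color, organ_id in COLOR_TO_ORGAN.items():
                labels[np.all(rgb == color, axis=-1)] = organ_id
            return labels

        # stacking the painted slices yields the 3-D voxel phantom
        # phantom = np.stack([slice_to_labels(f"slice_{i:03d}.png") for i in range(120)])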

  1. Qualitative evaluations and comparisons of six night-vision colorization methods

    Science.gov (United States)

    Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul

    2013-05-01

    Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially in low-light conditions. This paper focuses on the qualitative (subjective) evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. The score of each measurement is rated on a 1-to-3 scale representing low, average, and high quality, respectively. Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) are concurrently presented to users along with the reference color (RGB) image (taken at daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the 9-set colorized images. The experimental results showed the quality order of colorization methods from the best to the worst: CBCF colorization and for quantitative evaluation using an objective metric such as objective evaluation index
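
    Of the six methods, statistic matching (SM) is the simplest to illustrate: each channel of a false-color NV image is shifted and scaled so that its mean and standard deviation match those of a daylight reference. The sketch below is a rough stand-in for that idea, not the authors' implementation; array names and value ranges are assumptions.

        import numpy as np

        def statistic_matching(false_color_nv, reference):
            """Match per-channel mean and std of a false-color NV image to a daylight reference image."""
            out = np.empty_like(false_color_nv, dtype=float)
            for c in range(false_color_nv.shape[-1]):
                src = false_color_nv[..., c].astype(float)
                ref = reference[..., c].astype(float)
                out[..., c] = (src - src.mean()) / (src.std() + 1e-9) * ref.std() + ref.mean()
            return np.clip(out, 0, 255)

        # e.g. false_color_nv = np.dstack([nir, lwir, nir]) scaled to 0-255,
        # reference = an RGB daylight image of a similar scene (sizes need not match,
        # since only the reference's global channel statistics are used)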

  2. The Effect of Gamma and Chroma on the Perception of Color Images

    NARCIS (Netherlands)

    Dijk, J.; Verbeek, P.W.; Walraven, J.; Young, I.T.

    2002-01-01

    We present the results of experiments in which we manipulated color images in the CIELAB space by first applying a scaling factor on chroma (C*). After this we applied a gamma transformation (an exponent relating the input to the output) to the luminance (Y) in XYZ space, while keeping the

  3. Flux and color variations of the quadruply imaged quasar HE 0435-1223

    DEFF Research Database (Denmark)

    Ricci, D.; Poels, J.; Elyiv, A.

    2011-01-01

    Aims: We present VRi photometric observations of the quadruply imaged quasar HE 0435-1223, carried out with the Danish 1.54 m telescope at the La Silla Observatory. Our aim was to monitor and study the magnitudes and colors of each lensed component as a function of time. Methods: We monitored

  4. Color digital halftoning taking colorimetric color reproduction into account

    Science.gov (United States)

    Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi

    1996-01-01

    Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital half-toning. Assuming that the input to a bilevel color printer is given in CIE-XYZ tristimulus values or CIE-LAB values instead of the more conventional RGB or YMC values, two modified versions based on vector operation in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the method using the LAB color space, resulted in better color reproduction performance than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that the modified method (2) with a thresholding technique achieves a good spatial image quality.
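
    The abstract describes error diffusion modified to operate on color vectors (in the XYZ or LAB space) rather than on separate scalar channels. Below is a minimal sketch of vector error diffusion with Floyd-Steinberg weights and nearest-primary quantization; for brevity it works directly in RGB, which is only a stand-in for the paper's colorimetric spaces.

        import numpy as np
        from itertools import product

        # the eight bilevel printer colors (black, B, G, C, R, M, Y, white) expressed in RGB
        PRIMARIES = np.array(list(product([0.0, 1.0], repeat=3)))

        def vector_error_diffusion(img):
            """Floyd-Steinberg error diffusion, quantizing each pixel vector to the nearest primary."""
            work = img.astype(float).copy()
            out = np.zeros_like(work)
            h, w, _ = work.shape
            for y in range(h):
                for x in range(w):
                    old = work[y, x]
                    out[y, x] = PRIMARIES[np.argmin(np.linalg.norm(PRIMARIES - old, axis=1))]
                    err = old - out[y, x]
                    if x + 1 < w:
                        work[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            work[y + 1, x - 1] += err * 3 / 16
                        work[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            work[y + 1, x + 1] += err * 1 / 16
            return out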

  5. Towards representation of a perceptual color manifold using associative memory for color constancy.

    Science.gov (United States)

    Seow, Ming-Jung; Asari, Vijayan K

    2009-01-01

    In this paper, we propose the concept of a manifold of color perception, based on the empirical observation that the center-surround properties of images in a perceptually similar environment define a manifold in a high-dimensional space. Such a manifold representation can be learned using a novel recurrent neural network based learning algorithm. Unlike the conventional recurrent neural network model, in which memory is stored at attractive fixed points at discrete locations in the state space, the dynamics of the proposed learning algorithm represent memory as a nonlinear line of attraction. The region of convergence around the nonlinear line is defined by the statistical characteristics of the training data. This learned manifold can then be used as a basis for color correction of images whose color perception differs from the learned color perception. Experimental results show that the proposed recurrent neural network learning algorithm is capable of successfully color balancing the lighting variations in images captured in different environments.

  6. Color constancy in dermatoscopy with smartphone

    Science.gov (United States)

    Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan

    2017-12-01

    The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired and a model between the unknown device-dependent RGB and a device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These results are in the range of human eye detection capability (color error ≈ 4) and of video and printing industry standards (where the color error is expected to be between 5 and 6). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to patients.
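
    The calibration step can be sketched as fitting a mapping from device RGB to CIELAB using the measured color patches and reporting the color error as a Euclidean distance in Lab. The affine model and least-squares fit below are assumptions; the record does not state the paper's actual model form.

```python
# Sketch: fit device-RGB -> CIELAB from color-chart patches, report Delta E.
import numpy as np

def fit_rgb_to_lab(rgb_patches: np.ndarray, lab_reference: np.ndarray) -> np.ndarray:
    """rgb_patches: (N, 3) camera RGB of the chart patches; lab_reference: (N, 3)
    known Lab values. Returns a (4, 3) matrix M so that [R, G, B, 1] @ M ~= [L, a, b]."""
    design = np.hstack([rgb_patches, np.ones((rgb_patches.shape[0], 1))])
    M, *_ = np.linalg.lstsq(design, lab_reference, rcond=None)
    return M

def color_error(rgb_patches: np.ndarray, lab_reference: np.ndarray, M: np.ndarray):
    """Median and best (minimum) Euclidean Delta E over the patches."""
    design = np.hstack([rgb_patches, np.ones((rgb_patches.shape[0], 1))])
    de = np.linalg.norm(design @ M - lab_reference, axis=1)
    return np.median(de), de.min()
```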

  7. Thermal behavior and ice-table depth within the north polar erg of Mars

    Science.gov (United States)

    Putzig, Nathaniel E.; Mellon, Michael T.; Herkenhoff, Kenneth E.; Phillips, Roger J.; Davis, Brian J.; Ewer, Kenneth J.; Bowers, Lauren M.

    2014-01-01

    We fully resolve a long-standing thermal discrepancy concerning the north polar erg of Mars. Several recent studies have shown that the erg’s thermal properties are consistent with normal basaltic sand overlying shallow ground ice or ice-cemented sand. Our findings bolster that conclusion by thoroughly characterizing the thermal behavior of the erg, demonstrating that other likely forms of physical heterogeneity play only a minor role, and obviating the need to invoke exotic materials. Thermal inertia as calculated from orbital temperature observations of the dunes has previously been found to be more consistent with dust-sized materials than with sand. Since theory and laboratory data show that dunes will only form out of sand-sized particles, exotic sand-sized agglomerations of dust have been invoked to explain the low values of thermal inertia. However, the polar dunes exhibit the same darker appearance and color as that of dunes found elsewhere on the planet that have thermal inertia consistent with normal sand-sized basaltic grains, whereas Martian dust deposits are generally lighter and redder. The alternative explanation for the discrepancy as a thermal effect of a shallow ice table is supported by our analysis of observations from the Mars Global Surveyor Thermal Emission Spectrometer and the Mars Odyssey Thermal Emission Imaging System and by forward modeling of physical heterogeneity. In addition, our results exclude a uniform composition of dark dust-sized materials, and they show that the thermal effects of the dune slopes and bright interdune materials evident in high-resolution images cannot account for the erg’s thermal behavior.

  9. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving

    Science.gov (United States)

    Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.

    2017-07-01

    The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
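
    The quoted IFOV and FOV figures can be cross-checked from the optics, under the assumption of a 7.4 micrometer pixel pitch for the Kodak KAI-2020 CCD (a detector specification not given in this record).

```python
# Quick numeric cross-check of the Mastcam IFOV and FOV values quoted above.
import math

PIXEL_PITCH_M = 7.4e-6          # assumed KAI-2020 pixel pitch
SENSOR_PIXELS = (1648, 1200)    # full-frame pixel span quoted above

def ifov_mrad(focal_length_m: float) -> float:
    """Instantaneous field of view of one pixel, in milliradians."""
    return PIXEL_PITCH_M / focal_length_m * 1e3

def fov_deg(focal_length_m: float) -> tuple:
    """Full-frame field of view (horizontal, vertical) in degrees."""
    return tuple(
        math.degrees(2 * math.atan(n * PIXEL_PITCH_M / (2 * focal_length_m)))
        for n in SENSOR_PIXELS
    )

print(ifov_mrad(0.034), fov_deg(0.034))   # ~0.218 mrad, ~(20.3, 14.9) deg -- close to the M-34 values
print(ifov_mrad(0.100), fov_deg(0.100))   # ~0.074 mrad, ~(7.0, 5.1) deg -- close to the M-100 values
```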

  10. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    Science.gov (United States)

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. A three-level discrete wavelet transform is then applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, the watermark image is scrambled for encryption and then transformed with the discrete wavelet transform. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose suitable scaling factors. Experimental results show that the proposed algorithm has a better performance in terms of invisibility and robustness.
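
    The embedding step can be sketched as a three-level DWT of the luminance channel, an SVD of one sub-band, and additive embedding of the watermark's singular values weighted by a scaling factor. The differential-evolution search for the scaling factor and the scrambling encryption are omitted below, and the choice of wavelet and sub-band are assumptions rather than the paper's stated settings.

```python
# Sketch of DWT + SVD watermark embedding with a fixed scaling factor alpha.
import numpy as np
import pywt

def embed_watermark(y_channel: np.ndarray, watermark: np.ndarray, alpha: float) -> np.ndarray:
    """y_channel: 2-D luminance (Y of YIQ). watermark: 2-D array resized to the
    chosen sub-band's shape. alpha: scaling factor (chosen by DE in the paper)."""
    # Three-level DWT; embed in the level-3 approximation sub-band (one possible choice).
    coeffs = pywt.wavedec2(y_channel, wavelet="haar", level=3)
    band = coeffs[0]
    u, s, vt = np.linalg.svd(band, full_matrices=False)
    _, sw, _ = np.linalg.svd(watermark, full_matrices=False)
    s_marked = s + alpha * sw[: s.size]          # additive embedding of singular values
    coeffs[0] = (u * s_marked) @ vt              # rebuild the sub-band
    return pywt.waverec2(coeffs, wavelet="haar")
```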

  11. DAF: differential ACE filtering image quality assessment by automatic color equalization

    Science.gov (United States)

    Ouni, S.; Chambah, M.; Saint-Jean, C.; Rizzi, A.

    2008-01-01

    Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. In reality, however, objective quality metrics do not necessarily correlate well with perceived quality [1]. Moreover, some measures assume that a reference exists in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference available. That is why subjective evaluation has been the most widely used and most effective approach up to now. But subjective assessment is expensive and time consuming, and hence does not meet economic requirements [2,3]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. The ACE method, for Automatic Color Equalization [4,6], is an algorithm for unsupervised enhancement of digital images. It is based on a computational approach that models the perceptual response of our visual system, merging the Gray World and White Patch equalization mechanisms in a global and local way. Like our visual system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment effectively. Moreover, ACE can be run in an unsupervised manner, which makes it very useful as a digital film restoration tool, since no a priori information is available. In this paper we deepen the investigation of using the ACE algorithm as a basis for reference-free image quality evaluation. This new metric, called DAF for Differential ACE Filtering [7], is an objective quality measure that can be used in several image restoration and image quality assessment systems. We compare, on different image databases, the results obtained with DAF with subjective image quality assessments (Mean Opinion Score, MOS, as a measure of perceived image quality), and we study the correlation between the objective measure and MOS. In our experiments, we have used for the first image...
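
    The two classical mechanisms that ACE merges can be illustrated with plain global corrections, as in the sketch below. This is not the ACE algorithm itself (which combines them in a global and local way), only a simple illustration of the two underlying assumptions.

```python
# Gray World and White Patch corrections as standalone global operations.
import numpy as np

def gray_world(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so that all channel means equal the global mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(rgb * (means.mean() / (means + 1e-12)), 0.0, 1.0)

def white_patch(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so that its maximum maps to white."""
    maxima = rgb.reshape(-1, 3).max(axis=0)
    return np.clip(rgb / (maxima + 1e-12), 0.0, 1.0)
```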

  12. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    Science.gov (United States)

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness")…

  13. Scientific Payload Of The Emirates Mars Mission: Emirates Mars Infrared Spectrometer (Emirs) Overview.

    Science.gov (United States)

    Altunaiji, E. S.; Edwards, C. S.; Christensen, P. R.; Smith, M. D.; Badri, K. M., Sr.

    2017-12-01

    The Emirates Mars Mission (EMM) will launch in 2020 to explore the dynamics of the atmosphere of Mars on a global scale. EMM carries three scientific instruments that will contribute to an improved understanding of circulation and weather in the Martian lower and middle atmosphere. Two of EMM's instruments, the Emirates eXploration Imager (EXI) and the Emirates Mars Infrared Spectrometer (EMIRS), will focus on the lower atmosphere, observing dust, ice clouds, water vapor, and ozone. The third instrument, the Emirates Mars Ultraviolet Spectrometer (EMUS), will focus on both the thermosphere of the planet and its exosphere. The EMIRS instrument, shown in Figure 1, is an interferometric thermal infrared spectrometer jointly developed by Arizona State University (ASU) and the Mohammed Bin Rashid Space Centre (MBRSC). It builds on a long heritage of thermal infrared spectrometers designed, built, and managed by ASU's Mars Space Flight Facility, including the Thermal Emission Spectrometer (TES), the Miniature Thermal Emission Spectrometer (Mini-TES), and the OSIRIS-REx Thermal Emission Spectrometer (OTES). EMIRS operates in the 6-40+ µm range with 5 cm-1 spectral sampling, enabled by a Chemical Vapor-Deposited (CVD) diamond beamsplitter and state-of-the-art electronics. The instrument uses a 3×3 detector array and a scan mirror to make high-precision infrared radiance measurements over most of a Martian hemisphere. EMIRS is optimized to capture the integrated lower-middle atmosphere dynamics over a Martian hemisphere and will capture 60 global images per week (~20 images per orbit) at a resolution of 100-300 km/pixel. After processing through an atmospheric retrieval algorithm, EMIRS will determine vertical temperature profiles up to ~50 km altitude and measure the column-integrated global distribution and abundances of key atmospheric parameters (e.g., dust, water ice (clouds), and water vapor) over the Martian day, seasons, and year.

  14. Elemental Composition of Mars Return Samples Using X-Ray Fluorescence Imaging at the National Synchrotron Light Source II

    Science.gov (United States)

    Thieme, J.; Hurowitz, J. A.; Schoonen, M. A.; Fogelqvist, E.; Gregerson, J.; Farley, K. A.; Sherman, S.; Hill, J.

    2018-04-01

    NSLS-II at BNL provides a unique and critical capability to perform assessments of the elemental composition and the chemical state of Mars returned samples using synchrotron radiation X-ray fluorescence imaging and X-ray absorption spectroscopy.

  15. S3-2: Colorfulness Perception Adapting to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Yoko Mizokami

    2012-10-01

    Our visual system has the ability to adapt to the color characteristics of the environment and maintain stable color appearance. Many studies of chromatic adaptation and color constancy have suggested that different levels of visual processing are involved in the adaptation mechanism. In the case of colorfulness perception, it has been shown that the perception changes with adaptation to chromatic contrast modulation and to surrounding chromatic variance. However, it is still not clear how the perception changes in natural scenes and which levels of visual mechanisms contribute to it. Here, I will mainly present our recent work on colorfulness adaptation in natural images. In the experiment, we examined whether the colorfulness perception of an image was influenced by adaptation to natural images with different degrees of saturation. Natural and unnatural (shuffled or phase-scrambled) images were used as adapting and test images, and all combinations of adapting and test images were tested (e.g., the combination of natural adapting images and a shuffled test image). The results show that colorfulness perception was influenced by adaptation to the saturation of images. A test image appeared less colorful after adaptation to saturated images, and vice versa. The effect of colorfulness adaptation was strongest for the combination of natural adapting and natural test images. The fact that the naturalness of the spatial structure in an image affects the strength of the adaptation effect implies that the recognition of natural scenes plays an important role in the adaptation mechanism.

  16. Color engineering in the age of digital convergence

    Science.gov (United States)

    MacDonald, Lindsay W.

    1998-09-01

    Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghunday-Achin mineral district in Afghanistan, in Davis, P.A, compiler, Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    Science.gov (United States)

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometers, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS.
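
    One generic way to combine the 10-m AVNIR multispectral bands with the 2.5-m PRISM panchromatic band into a 2.5-m natural-color product is ratio (Brovey-style) pan-sharpening, sketched below. The record does not state which sharpening method was actually used, so this is purely illustrative.

```python
# Brovey-style ratio pan-sharpening of upsampled multispectral bands.
import numpy as np
from skimage.transform import resize

def brovey_pansharpen(ms_10m: np.ndarray, pan_2_5m: np.ndarray) -> np.ndarray:
    """ms_10m: (h, w, 3) red/green/blue bands at 10 m; pan_2_5m: (4h, 4w) panchromatic
    band at 2.5 m. Both as floats in [0, 1]. Returns a (4h, 4w, 3) sharpened image."""
    ms_up = resize(ms_10m, pan_2_5m.shape + (3,), order=1, anti_aliasing=False)
    intensity = ms_up.mean(axis=-1) + 1e-6
    ratio = pan_2_5m / intensity                 # per-pixel sharpening gain
    return np.clip(ms_up * ratio[..., np.newaxis], 0.0, 1.0)
```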

  18. Kaleido: Visualizing Big Brain Data with Automatic Color Assignment for Single-Neuron Images.

    Science.gov (United States)

    Wang, Ting-Yuan; Chen, Nan-Yow; He, Guan-Wei; Wang, Guo-Tzau; Shih, Chi-Tin; Chiang, Ann-Shyn

    2018-03-03

    Effective 3D visualization is essential for connectomics analysis, where the number of neural images easily reaches over tens of thousands. A formidable challenge is to simultaneously visualize a large number of distinguishable single-neuron images, with reasonable processing time and memory for file management and 3D rendering. In the present study, we proposed an algorithm named "Kaleido" that can visualize up to at least ten thousand single neurons from the Drosophila brain using only a fraction of the memory traditionally required, without increasing computing time. Adding more brain neurons increases memory only nominally. Importantly, Kaleido maximizes color contrast between neighboring neurons so that individual neurons can be easily distinguished. Colors can also be assigned to neurons based on biological relevance, such as gene expression, neurotransmitters, and/or development history. For cross-lab examination, the identity of every neuron is retrievable from the displayed image. To demonstrate the effectiveness and tractability of the method, we applied Kaleido to visualize the 10,000 Drosophila brain neurons obtained from the FlyCircuit database ( http://www.flycircuit.tw/modules.php?name=kaleido ). Thus, Kaleido visualization requires only sensible computer memory for manual examination of big connectomics data.
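
    The general idea of maximizing color contrast between neighboring items can be illustrated with greedy farthest-point color selection in CIELAB, as in the sketch below. This is not the Kaleido algorithm itself, only a simple stand-in for the contrast-maximization step; the candidate pool and seed are arbitrary.

```python
# Greedy selection of mutually distinguishable colors in CIELAB.
import numpy as np
from skimage import color

def distinct_colors(n: int, n_candidates: int = 4096, seed: int = 0) -> np.ndarray:
    """Return n RGB colors (floats in [0, 1]) chosen to be mutually far apart in Lab."""
    rng = np.random.default_rng(seed)
    candidates_rgb = rng.random((n_candidates, 3))
    candidates_lab = color.rgb2lab(candidates_rgb.reshape(1, -1, 3)).reshape(-1, 3)

    chosen = [0]                                 # start from an arbitrary candidate
    dist = np.linalg.norm(candidates_lab - candidates_lab[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))               # candidate farthest from all chosen so far
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(candidates_lab - candidates_lab[nxt], axis=1))
    return candidates_rgb[chosen]
```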

  19. Color image cryptosystem using Fresnel diffraction and phase modulation in an expanded fractional Fourier transform domain

    Science.gov (United States)

    Chen, Hang; Liu, Zhengjun; Chen, Qi; Blondel, Walter; Varis, Pierre

    2018-05-01

    In this letter, what we believe is a new technique for optical color image encryption using Fresnel diffraction and phase modulation in an extended fractional Fourier transform domain is proposed. Unlike methods based on separating the RGB components, the color image is converted into a single component by an improved Chirikov mapping. The encryption system is implemented with Fresnel diffraction and phase modulation. A pair of lenses is placed in the fractional Fourier transform system to modulate the beam propagation. The structural parameters of the optical system and the parameters of the Chirikov mapping serve as extra keys. Numerical simulations are given to test the validity of the proposed cryptosystem.
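
    The chaotic scrambling component can be illustrated with the plain (not "improved") Chirikov standard map used to generate a key-dependent pixel permutation, as sketched below. The way the map is applied, and the parameter values, are assumptions for illustration only.

```python
# Chirikov standard map as a source of a key-dependent permutation for scrambling.
import numpy as np

def chirikov_permutation(n: int, k: float = 6.9, theta0: float = 0.3, p0: float = 0.7) -> np.ndarray:
    """Iterate p <- p + k*sin(theta), theta <- theta + p (mod 2*pi) and rank the
    theta sequence to obtain a pseudo-random permutation of length n."""
    theta, p = theta0, p0
    seq = np.empty(n)
    for i in range(n):
        p = (p + k * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        seq[i] = theta
    return np.argsort(seq)

def scramble(image: np.ndarray, key=(6.9, 0.3, 0.7)) -> np.ndarray:
    """Permute all pixels of an image with a key-dependent chaotic permutation."""
    flat = image.reshape(-1, image.shape[-1]) if image.ndim == 3 else image.reshape(-1, 1)
    perm = chirikov_permutation(flat.shape[0], *key)
    return flat[perm].reshape(image.shape)
```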

  20. A simple and inexpensive high resolution color ratiometric planar optode imaging approach: application to oxygen and pH sensing

    DEFF Research Database (Denmark)

    Larsen, M.; Borisov, S. M.; Grunwald, B.

    2011-01-01

    A simple, high-resolution color ratiometric planar optode imaging approach is presented. The approach is simple and inexpensive yet versatile, and can be used to study the two-dimensional distribution and dynamics of a range of analytes. The imaging approach utilizes the built-in color filter of standard ... commercial digital single-lens reflex cameras to simultaneously record different colors (red, green, and blue) of luminophore emission light using only one excitation light source. Using the ratio between the intensities of the different colors recorded in a single image, analyte concentrations can ... be calculated. The robustness of the approach is documented by obtaining high-resolution data of O2 and pH distributions in marine sediments using easily synthesizable sensors. The sensors rely on platinum(II) octaethylporphyrin (PtOEP) and lipophilic 8-Hydroxy-1,3,6-pyrenetrisulfonic acid trisodium (HPTS...
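
    The ratiometric readout amounts to taking a channel ratio from a single RGB frame and converting it to analyte concentration through a calibration curve. The sketch below uses a red/green ratio and a simple Stern-Volmer-type relation for O2 as placeholders, since the paper's exact channel pairing and calibration are not given in this record.

```python
# Ratiometric optode readout: per-pixel channel ratio plus a placeholder calibration.
import numpy as np

def channel_ratio(rgb_image: np.ndarray) -> np.ndarray:
    """Pixel-wise ratio of the red (indicator) to green (reference) channel."""
    red = rgb_image[..., 0].astype(np.float64)
    green = rgb_image[..., 1].astype(np.float64) + 1e-6
    return red / green

def ratio_to_o2(ratio: np.ndarray, r0: float, ksv: float) -> np.ndarray:
    """Invert a simple Stern-Volmer relation R0/R = 1 + Ksv*[O2] (placeholder calibration)."""
    return np.clip((r0 / ratio - 1.0) / ksv, 0.0, None)
```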