WorldWideScience

Sample records for hyperspectral microarray scanner

  1. Improved Scanners for Microscopic Hyperspectral Imaging

    Science.gov (United States)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version

  2. Parallel scan hyperspectral fluorescence imaging system and biomedical application for microarrays

    International Nuclear Information System (INIS)

    Liu Zhiyi; Ma Suihua; Liu Le; Guo Jihua; He Yonghong; Ji Yanhong

    2011-01-01

    Microarray research offers great potential for analysis of gene expression profiles and greatly improves experimental throughput. A number of detection approaches have been reported for microarrays, such as chemiluminescence, surface plasmon resonance, and fluorescence markers. Fluorescence imaging is a popular readout for microarrays. In this paper we develop a quasi-confocal, multichannel parallel scan hyperspectral fluorescence imaging system for microarray research. Hyperspectral imaging records the entire emission spectrum for every voxel within the imaged area, in contrast to filter-based scanners, which record only fluorescence intensities. Coupled with data analysis, the recorded spectral information allows quantitative identification of the contributions of multiple, spectrally overlapping fluorescent dyes and elimination of unwanted artifacts. The quasi-confocal imaging mechanism provides a high signal-to-noise ratio, and parallel scanning makes this approach a high-throughput technique for microarray analysis. The system is equipped with a specifically designed spectrometer that offers a spectral resolution of 0.2 nm, and it operates with spatial resolutions ranging from 2 to 30 μm. Finally, the application of the system is demonstrated by reading out microarrays for identification of bacteria.
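
    The quantitative separation of spectrally overlapping dyes described above is, at its simplest, a linear unmixing problem. Below is a minimal sketch (not the authors' code) of non-negative least-squares unmixing of one recorded emission spectrum, assuming the pure reference spectra of the dyes are known; the wavelengths, spectra and noise level are invented for illustration.

        # Minimal unmixing sketch: recover per-dye contributions for one voxel
        # from its measured emission spectrum, given known reference spectra.
        import numpy as np
        from scipy.optimize import nnls

        wavelengths = np.linspace(500, 700, 201)          # nm, hypothetical sampling

        def gaussian(center, width):
            return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        # Hypothetical reference emission spectra of two overlapping dyes (columns).
        references = np.column_stack([gaussian(560, 15), gaussian(580, 18)])

        # Simulated voxel spectrum: a mixture of both dyes plus a little noise.
        true_abundances = np.array([0.7, 0.3])
        measured = references @ true_abundances
        measured += 0.01 * np.random.default_rng(0).normal(size=wavelengths.size)

        # Non-negative least squares yields the contribution of each dye.
        abundances, residual = nnls(references, measured)
        print("estimated dye contributions:", abundances)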

  3. Restoration of Hyperspectral Push-Broom Scanner Data

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Conradsen, Knut

    1997-01-01

    Several effects combine to distort the multispectral data that are obtained from push-broom scanners. We develop an algorithm for restoration of such data, illustrated on images from the ROSIS scanner. In push-broom scanners variation between elements in the detector array results in a strong...... back into the original spectral space results in noise corrected variables. The noise components will now have been removed from the entire original data set by working on a smaller set of noise contaminated transformed variables only. The application of the above techniques results in a dramatic...
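
    The truncated abstract above describes correcting noise in a small set of transformed variables and then transforming the corrected variables back into the original spectral space. The sketch below illustrates that generic transform / correct / back-transform idea with a plain PCA (SVD) decomposition on synthetic data; it is only an illustration under assumed data, not the paper's actual restoration algorithm.

        # Transform to components, suppress the noise-dominated ones, transform back.
        import numpy as np

        rng = np.random.default_rng(0)
        pixels, bands = 1000, 30
        clean = rng.normal(size=(pixels, 5)) @ rng.normal(size=(5, bands))  # low-rank "scene"
        data = clean + 0.3 * rng.normal(size=(pixels, bands))               # added sensor noise

        mean = data.mean(axis=0)
        u, s, vt = np.linalg.svd(data - mean, full_matrices=False)

        # Keep the leading components; here the trailing ones are simply zeroed,
        # whereas a real restoration would correct rather than discard them.
        k = 5
        s_filtered = s.copy()
        s_filtered[k:] = 0.0

        restored = (u * s_filtered) @ vt + mean
        print("residual noise std:", np.std(restored - clean))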

  4. Detection of analyte binding to microarrays using gold nanoparticle labels and a desktop scanner

    DEFF Research Database (Denmark)

    Han, Anpan; Dufva, Martin; Belleville, Erik

    2003-01-01

    on gold nanoparticle labeled antibodies visualized by a commercial, office desktop flatbed scanner. Scanning electron microscopy studies showed that the signal from the flatbed scanner was proportional to the surface density of the bound antibody-gold conjugates, and that the flatbed scanner could detect...... six attomoles of antibody-gold conjugates. This detection system was used in a competitive immunoassay to measure the concentration of the pesticide metabolite 2,6-dichlorobenzamide (BAM) in water samples. The results showed that the gold labeled antibodies functioned comparably with a fluorescent...... based immunoassay for detecting BAM in water. A qualitative immunoassay based on gold-labeled antibodies could determine if a water sample contained BAM above or below 60-70 ng L(-1), which is below the maximum allowed BAM concentration for drinking water (100 ng L(-1)) according to European Union...

  5. Thermal remote sensing from Airborne Hyperspectral Scanner data in the framework of the SPARC and SEN2FLEX projects: an overview

    Directory of Open Access Journals (Sweden)

    Q. Shen

    2009-11-01

    The AHS (Airborne Hyperspectral Scanner) instrument has 80 spectral bands covering the visible and near infrared (VNIR), short wave infrared (SWIR), mid infrared (MIR) and thermal infrared (TIR) spectral ranges. The instrument is operated by Instituto Nacional de Técnica Aerospacial (INTA), and it has been involved in several field campaigns since 2004.

    This paper presents an overview of the work performed with the AHS thermal imagery provided in the framework of the SPARC and SEN2FLEX campaigns, carried out respectively in 2004 and 2005 over an agricultural area in Spain. The data collected in both campaigns allowed for the first time the development and testing of algorithms for land surface temperature and emissivity retrieval as well as the estimation of evapotranspiration from AHS data. Errors were found to be around 1.5 K for land surface temperature and 1 mm/day for evapotranspiration.

  6. High Throughput, Label-free Screening Small Molecule Compound Libraries for Protein-Ligands using Combination of Small Molecule Microarrays and a Special Ellipsometry-based Optical Scanner.

    Science.gov (United States)

    Landry, James P; Fei, Yiyan; Zhu, X D

    2011-12-01

    Small-molecule compounds remain the major source of therapeutic and preventative drugs. Developing new drugs against a protein target often requires screening large collections of compounds with diverse structures for ligands or ligand fragments that exhibit sufficient affinity for, and a desirable inhibitory effect on, the target before further optimization and development. Since the number of small-molecule compounds is large, high-throughput screening (HTS) methods are needed. Small-molecule microarrays (SMM) on a solid support, in combination with a suitable binding assay, form a viable HTS platform. We demonstrate that by combining an oblique-incidence reflectivity difference optical scanner with SMM we can screen 10,000 small-molecule compounds on a single glass slide for protein ligands without fluorescence labeling. Furthermore, using such a label-free assay platform we can simultaneously acquire binding curves of a solution-phase protein to over 10,000 immobilized compounds, thus enabling full characterization of protein-ligand interactions over a wide range of affinity constants.
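
    Acquiring binding curves for thousands of immobilized compounds ultimately means fitting an affinity model to each curve. The sketch below is a hedged example of one such fit, assuming simple 1:1 equilibrium binding (signal = Rmax*c/(Kd + c)); the concentrations and signal values are invented and this is not the instrument's analysis software.

        # Fit a 1:1 Langmuir binding isotherm to one (simulated) binding curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(conc, rmax, kd):
            return rmax * conc / (kd + conc)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # protein conc., uM (assumed)
        signal = langmuir(conc, 1.0, 0.5)
        signal += np.random.default_rng(2).normal(scale=0.02, size=conc.size)

        (rmax, kd), _ = curve_fit(langmuir, conc, signal, p0=(1.0, 1.0))
        print(f"Rmax = {rmax:.2f}, Kd = {kd:.2f} uM")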

  7. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Background: Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods: Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results: We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which explicitly accounts for the slide autofluorescence, described the relationship between signal intensities and fluorophore quantities very well. Conclusions: Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
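
    As a rough illustration of the kind of model the authors describe (a power-law scanner response plus a constant slide-autofluorescence term, fitted by weighted least squares), the sketch below fits such a model to simulated data. The functional form, parameter values and weights here are assumptions made for illustration, not the authors' published fit.

        # Fit signal = background + scale * quantity**exponent by weighted least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def response(quantity, background, scale, exponent):
            return background + scale * np.power(quantity, exponent)

        quantity = np.logspace(-2, 2, 30)                  # fluorophore amount (a.u.)
        signal = response(quantity, 200.0, 900.0, 0.95)    # invented "true" parameters
        signal += np.random.default_rng(1).normal(scale=0.02 * signal)

        # Weighting by ~2% of the signal keeps bright spots from dominating the fit.
        params, _ = curve_fit(response, quantity, signal,
                              p0=(100.0, 500.0, 1.0), sigma=0.02 * signal)
        print("background, scale, exponent:", params)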

  8. Carbohydrate microarrays

    DEFF Research Database (Denmark)

    Park, Sungjin; Gildersleeve, Jeffrey C; Blixt, Klas Ola

    2012-01-01

    In the last decade, carbohydrate microarrays have been core technologies for analyzing carbohydrate-mediated recognition events in a high-throughput fashion. A number of methods have been exploited for immobilizing glycans on the solid surface in a microarray format. This microarray...... of substrate specificities of glycosyltransferases. This review covers the construction of carbohydrate microarrays, detection methods of carbohydrate microarrays and their applications in biological and biomedical research....

  9. Multipurpose Hyperspectral Imaging System

    Science.gov (United States)

    Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon

    2005-01-01

    A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features has been developed to incorporate a focal plane scanner (U.S. Patent 6,166,373). This feature enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; examples include automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral-resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. This compact system (see figure) consists of a front objective mounted on a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charge-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator, which can be controlled either manually by an operator or automatically by a computer to drive the line slit across an image at a focal plane of the front objective. To reduce cost, the system has been designed to integrate as many off-the-shelf components as possible, including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.

  10. Scintillation scanner

    International Nuclear Information System (INIS)

    Mehrbrodt, A.W.; Mog, W.F.; Brunnett, C.J.

    1977-01-01

    A scintillation scanner having a visual image producing means coupled through a lost motion connection to the boom which supports the scintillation detector is described. The lost motion connection is adjustable to compensate for such delays as may occur between sensing and recording scintillations. 13 claims, 5 figures

  11. Hyperspectral remote sensing

    National Research Council Canada - National Science Library

    Eismann, Michael Theodore

    2012-01-01

    ..., and hyperspectral data processing. While there are many resources that suitably cover these areas individually and focus on specific aspects of the hyperspectral remote sensing field, this book provides a holistic treatment...

  12. Hyperspectral imaging flow cytometer

    Science.gov (United States)

    Sinclair, Michael B.; Jones, Howland D. T.

    2017-10-25

    A hyperspectral imaging flow cytometer can acquire high-resolution hyperspectral images of particles, such as biological cells, flowing through a microfluidic system. The hyperspectral imaging flow cytometer can provide detailed spatial maps of multiple emitting species, cell morphology information, and state of health. An optimized system can image about 20 cells per second. The hyperspectral imaging flow cytometer enables many thousands of cells to be characterized in a single session.

  13. Hyperspectral remote sensing

    CERN Document Server

    Eismann, Michael

    2012-01-01

    Hyperspectral remote sensing is an emerging, multidisciplinary field with diverse applications that builds on the principles of material spectroscopy, radiative transfer, imaging spectrometry, and hyperspectral data processing. This book provides a holistic treatment that captures its multidisciplinary nature, emphasizing the physical principles of hyperspectral remote sensing.

  14. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  15. Was the Scanner Calibration Slide used for its intended purpose?

    Directory of Open Access Journals (Sweden)

    Zong Yaping

    2011-04-01

    In the article Scanner calibration revisited (BMC Bioinformatics 2010, 11:361), Dr. Pozhitkov used the Scanner Calibration Slide, a key product of Full Moon BioSystems, to generate data in his study of microarray scanner PMT response and proposed a mathematical model for the PMT response [1]. In the end, the author concluded that "Full Moon BioSystems calibration slides are inadequate for performing calibration," and recommended "against using these slides." We found that these conclusions are seriously flawed and misleading, and that his recommendation against using the Scanner Calibration Slide was not properly supported.

  16. Hyperspectral cytometry.

    Science.gov (United States)

    Grégori, Gérald; Rajwa, Bartek; Patsekin, Valery; Jones, James; Furuki, Motohiro; Yamamoto, Masanobu; Paul Robinson, J

    2014-01-01

    Hyperspectral cytometry is an emerging technology for single-cell analysis that combines ultrafast optical spectroscopy and flow cytometry. Spectral cytometry systems utilize diffraction gratings or prism-based monochromators to disperse fluorescence signals from multiple labels (organic dyes, nanoparticles, or fluorescent proteins) present in each analyzed bioparticle onto linear detector arrays such as multianode photomultipliers or charge-coupled device sensors. The resultant data, consisting of a series of spectra characterizing every analyzed cell, are not compensated by employing the traditional cytometry approach, but rather are spectrally unmixed utilizing algorithms such as constrained Poisson regression or non-negative matrix factorization. Although implementations of spectral cytometry were envisioned as early as the 1980s, only recently has the development of highly sensitive photomultiplier tube arrays led to the design and construction of functional prototypes and subsequently to the introduction of commercially available systems. This chapter summarizes the historical efforts and work in the field of spectral cytometry performed at Purdue University Cytometry Laboratories and describes the technology developed by Sony Corporation that resulted in release of the first commercial spectral cytometry system, the Sony SP6800. A brief introduction to spectral data analysis is also provided, with emphasis on the differences between traditional polychromatic and spectral cytometry approaches.

  17. Hyperspectral sensing of forests

    Science.gov (United States)

    Goodenough, David G.; Dyk, Andrew; Chen, Hao; Hobart, Geordie; Niemann, K. Olaf; Richardson, Ash

    2007-11-01

    Canada contains 10% of the world's forests, covering an area of 418 million hectares. The sustainable management of these forest resources has become increasingly complex. Hyperspectral remote sensing can provide a wealth of new and improved information products to resource managers to make more informed decisions. Research in this area has demonstrated that hyperspectral remote sensing can be used to create more accurate products for forest inventory, forest health, foliar biochemistry, biomass, and aboveground carbon than are currently available. This paper surveys recent methods and results in hyperspectral sensing of forests and describes space initiatives for hyperspectral sensing.

  18. Analytic Hyperspectral Sensing

    National Research Council Canada - National Science Library

    Coifman, Ronald R

    2005-01-01

    In the last year (no-cost extension), Plain Sight Systems reached the goal of successfully building its second NIR standoff hyperspectral imaging system, NSTIS, the Near-Infrared Spectral Target Identification System...

  19. EVALUATING THE POTENTIAL OF SATELLITE HYPERSPECTRAL RESURS-P DATA FOR FOREST SPECIES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    O. Brovkina

    2016-06-01

    Satellite-based hyperspectral sensors provide spectroscopic information in relatively narrow contiguous spectral bands over a large area, which can be useful in forestry applications. This study evaluates the potential of satellite hyperspectral Resurs-P data for forest species mapping. Firstly, a comparative study between top-of-canopy reflectance obtained from Resurs-P, from the airborne hyperspectral scanner CASI and from field measurements (FieldSpec ASD 4) on selected vegetation cover types is conducted. Secondly, Resurs-P data is tested in classification and verification of different forest species compartments. The results demonstrate that the satellite hyperspectral Resurs-P sensor can produce useful information and shows good performance for forest species classification, comparable both with the forestry map and with the classification from airborne CASI data, but they also indicate that developments in pre-processing steps are still required to improve the mapping level.

  20. Twisting wire scanner

    Energy Technology Data Exchange (ETDEWEB)

    Gharibyan, V.; Delfs, A.; Koruptchenkov, I.; Noelle, D.; Tiessen, H.; Werner, M.; Wittenburg, K.

    2012-11-15

    A new type of 'two-in-one' wire scanner is proposed. Recent advances in linear motor technology make it possible to combine translational and rotational movements. This will allow the beam to be scanned in two perpendicular directions using a single driving motor and a special fork attached to it. Vertical or horizontal mounting will help avoid the problems associated with 45 deg scanners. Test results of the translational part with linear motors are presented.

  1. Twisting wire scanner

    International Nuclear Information System (INIS)

    Gharibyan, V.; Delfs, A.; Koruptchenkov, I.; Noelle, D.; Tiessen, H.; Werner, M.; Wittenburg, K.

    2012-11-01

    A new type of 'two-in-one' wire scanner is proposed. Recent advances in linear motor technology make it possible to combine translational and rotational movements. This will allow the beam to be scanned in two perpendicular directions using a single driving motor and a special fork attached to it. Vertical or horizontal mounting will help avoid the problems associated with 45 deg scanners. Test results of the translational part with linear motors are presented.

  2. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  3. Plasmonically amplified fluorescence bioassay with microarray format

    Science.gov (United States)

    Gogalic, S.; Hageneder, S.; Ctortecka, C.; Bauch, M.; Khan, I.; Preininger, Claudia; Sauer, U.; Dostalek, J.

    2015-05-01

    Plasmonic amplification of the fluorescence signal in bioassays with a microarray detection format is reported. A crossed relief diffraction grating was designed to couple an excitation laser beam to surface plasmons at the wavelength overlapping with the absorption and emission bands of the fluorophore Dy647 that was used as a label. The surface of the periodically corrugated sensor chip was coated with a surface plasmon-supporting gold layer and a thin SU8 polymer film carrying epoxy groups. These groups were employed for the covalent immobilization of capture antibodies at arrays of spots. The plasmonic amplification of the fluorescence signal on the developed microarray chip was tested by using an interleukin 8 sandwich immunoassay. The readout was performed ex situ after drying the chip, using a commercial scanner with a high numerical aperture collecting lens. The obtained results reveal an enhancement of the fluorescence signal by a factor of 5 when compared to a regular glass chip.

  4. Hyperspectral remote sensing for light pollution monitoring

    Directory of Open Access Journals (Sweden)

    P. Marcoionni

    2006-06-01

    ...industries. In this paper we introduce the results from a remote sensing campaign performed in September 2001 at night time. For the first time, nocturnal light pollution was measured at high spatial and spectral resolution using two airborne hyperspectral sensors, namely the Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) and the Visible InfraRed Scanner (VIRS-200). These imagers, generally employed for day-time Earth remote sensing, were flown over the Tuscany coast (Italy) on board a Casa 212/200 airplane from an altitude of 1.5-2.0 km. We describe the experimental activities which preceded the remote sensing campaign, the optimization of the sensor configuration, and the images acquired so far. The obtained results point out the novelty of the performed measurements and highlight the need to employ advanced remote sensing techniques as a spectroscopic tool for light pollution monitoring.

  5. Hyperspectral fundus imager

    Science.gov (United States)

    Truitt, Paul W.; Soliz, Peter; Meigs, Andrew D.; Otten, Leonard John, III

    2000-11-01

    A Fourier Transform hyperspectral imager was integrated onto a standard clinical fundus camera, a Zeiss FF3, for the purposes of spectrally characterizing normal anatomical and pathological features in the human ocular fundus. To develop this instrument an existing FDA-approved retinal camera was selected to avoid the difficulties of obtaining new FDA approval. Because of this, several unusual design constraints were imposed on the optical configuration. Techniques to calibrate the sensor and to define where the hyperspectral pushbroom stripe was located on the retina were developed, including the manufacturing of an artificial eye with calibration features suitable for a spectral imager. In this implementation the Fourier transform hyperspectral imager can collect over one hundred spectrally resolved bands of 86 cm-1 with 12 micrometer/pixel spatial resolution within the 450 nm to 1050 nm band. This equates to 2 nm to 8 nm spectral resolution depending on the wavelength. For retinal observations the band of interest tends to lie between 475 nm and 790 nm. The instrument has been in use over the last year, successfully collecting hyperspectral images of the optic disc, retinal vessels, choroidal vessels, retinal background, and macula, as well as of diabetic macular edema and lesions of age-related macular degeneration.

  6. NMR-CT scanner

    International Nuclear Information System (INIS)

    Kose, Katsumi; Sato, Kozo; Sugimoto, Hiroshi; Sato, Masataka.

    1983-01-01

    A brief explanation is given of the imaging methods of a practical diagnostic NMR-CT scanner: a whole-body NMR-CT scanner utilizing a resistive magnet has been developed by Toshiba in cooperation with the Institute for Solid State Physics, the University of Tokyo. Typical NMR-CT images of volunteers and patients obtained in clinical experiments using this device are presented. Detailed specifications are also given for the practical NMR-CT scanners which are to be put on the market after obtaining government approval. (author)

  7. Fibre optic microarrays.

    Science.gov (United States)

    Walt, David R

    2010-01-01

    This tutorial review describes how fibre optic microarrays can be used to create a variety of sensing and measurement systems. This review covers the basics of optical fibres and arrays, the different microarray architectures, and describes a multitude of applications. Such arrays enable multiplexed sensing for a variety of analytes including nucleic acids, vapours, and biomolecules. Polymer-coated fibre arrays can be used for measuring microscopic chemical phenomena, such as corrosion and localized release of biochemicals from cells. In addition, these microarrays can serve as a substrate for fundamental studies of single molecules and single cells. The review covers topics of interest to chemists, biologists, materials scientists, and engineers.

  8. Ionization beam scanner

    CERN Multimedia

    CERN PhotoLab

    1973-01-01

    Inner structure of an ionization beam scanner, a rather intricate piece of apparatus which permits one to measure the density distribution of the proton beam passing through it. On the outside of the tank wall there is the coil for the longitudinal magnetic field, on the inside, one can see the arrangement of electrodes creating a highly homogeneous transverse electric field.

  9. DNA Microarray Technology

    Science.gov (United States)

  10. Simulation of Hyperspectral Images

    Science.gov (United States)

    Richsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2004-01-01

    A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.

  11. Planetary Hyperspectral Imager (PHI)

    Science.gov (United States)

    Silvergate, Peter

    1996-01-01

    A hyperspectral imaging spectrometer was breadboarded. Key innovations were use of a sapphire prism and single InSb focal plane to cover the entire spectral range, and a novel slit optic and relay optics to reduce thermal background. Operation over a spectral range of 450 - 4950 nm (approximately 3.5 spectral octaves) was demonstrated. Thermal background reduction by a factor of 8 - 10 was also demonstrated.

  12. DNA Microarray Technology; TOPICAL

    International Nuclear Information System (INIS)

    WERNER-WASHBURNE, MARGARET; DAVIDSON, GEORGE S.

    2002-01-01

    Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects

  13. Infrared upconversion hyperspectral imaging

    DEFF Research Database (Denmark)

    Kehlet, Louis Martinus; Tidemand-Lichtenberg, Peter; Dam, Jeppe Seidelin

    2015-01-01

    In this Letter, hyperspectral imaging in the mid-IR spectral region is demonstrated based on nonlinear frequency upconversion and subsequent imaging using a standard Si-based CCD camera. A series of upconverted images are acquired with different phase match conditions for the nonlinear frequency...... conversion process. From this, a sequence of monochromatic images in the 3.2-3.4 mu m range is generated. The imaged object consists of a standard United States Air Force resolution target combined with a polystyrene film, resulting in the presence of both spatial and spectral information in the infrared...... image. (C) 2015 Optical Society of America...

  14. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-01-01

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as for a sparse basis, becomes more important. Here we compare several methods for representing hyperspectral images, including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.

  15. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-11-23

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as for a sparse basis, becomes more important. Here we compare several methods for representing hyperspectral images, including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.
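
    One of the representations compared above is a learned dictionary with sparse coefficients. The sketch below illustrates that idea on synthetic pixel spectra using scikit-learn's mini-batch dictionary learner; the data, dictionary size and sparsity penalty are assumptions, not the settings used in the thesis.

        # Learn an overcomplete dictionary for pixel spectra and sparse-code them.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(0)
        n_pixels, n_bands = 2000, 64
        endmembers = np.cumsum(rng.normal(size=(6, n_bands)), axis=1)   # smooth spectra
        abundances = np.abs(rng.normal(size=(n_pixels, 6)))
        spectra = abundances @ endmembers + 0.05 * rng.normal(size=(n_pixels, n_bands))

        learner = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                              batch_size=64, random_state=0)
        codes = learner.fit_transform(spectra)
        print("mean nonzero coefficients per pixel:",
              np.count_nonzero(codes, axis=1).mean())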

  16. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processi...... to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case....... will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology...

  17. Image Segmentation of Hyperspectral Imagery

    National Research Council Canada - National Science Library

    Wellman, Mark

    2003-01-01

    .... Army tactical applications. An important tactical application of infrared (IR) hyperspectral imagery is the detection of low-contrast targets, including those targets that may employ camouflage, concealment, and deception (CCD) techniques 1, 2...

  18. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
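
    Below is a minimal sketch of the PLS-DA step at the core of such a classification workflow: one-hot class labels are regressed on pixel spectra with partial least squares and each pixel is assigned to the class with the largest predicted response. The synthetic "plastic" spectra and the number of latent variables are invented for illustration and do not reproduce the tutorial's data.

        # PLS-DA on synthetic spectra: regress one-hot labels on spectra, then argmax.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_per_class, n_bands, n_classes = 100, 50, 3
        centers = rng.normal(size=(n_classes, n_bands))
        X = np.vstack([c + 0.2 * rng.normal(size=(n_per_class, n_bands)) for c in centers])
        y = np.repeat(np.arange(n_classes), n_per_class)
        Y = np.eye(n_classes)[y]                       # one-hot response matrix

        pls = PLSRegression(n_components=5)
        pls.fit(X, Y)
        predicted = pls.predict(X).argmax(axis=1)      # class with largest response
        print("training accuracy:", np.mean(predicted == y))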

  19. Whole body line scanner

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    A bar-shaped scintillator converts incident collimated gamma rays to light pulses which are detected by a row of photoelectric tubes positioned along the output face of the scintillator, wherein each tube has a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation event x-axis position coordinate electrical signal with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible to obtain reduced distortion in the field of view and improved spatial resolution. A mechanical drive of the scanner results in an image of the gamma ray source being formed by sequencing the developed scintillation position coordinate signals in the y-axis dimension.

  20. Normalization for triple-target microarray experiments

    Directory of Open Access Journals (Sweden)

    Magniette Frederic

    2008-04-01

    Background: Most microarray studies are made using labelling with one or two dyes, which allows the hybridization of one or two samples on the same slide. In such experiments, the most frequently used dyes are Cy3 and Cy5. Recent improvements in the technology (dye labelling, scanner and image analysis) allow hybridization of up to four samples simultaneously. The two additional dyes are Alexa488 and Alexa494. The triple-target or four-target technology is very promising, since it allows more flexibility in the design of experiments, an increase in the statistical power when comparing gene expressions induced by different conditions, and a scaled-down number of slides. However, few methods have been proposed for the statistical analysis of such data. Moreover, the lowess correction of the global dye effect is available only for two-color experiments, and even if its application can be derived, it does not allow simultaneous correction of the raw data. Results: We propose a two-step normalization procedure for triple-target experiments. First, the dye bleeding is evaluated and corrected if necessary. Then the signal in each channel is normalized using a generalized lowess procedure to correct a global dye bias. The normalization procedure is validated using triple-self experiments and by comparing the results of triple-target and two-color experiments. Although the focus is on triple-target microarrays, the proposed method can be used to normalize p differently labelled targets co-hybridized on the same array, for any value of p greater than 2. Conclusion: The proposed normalization procedure is effective: the technical biases are reduced, the number of false positives is under control in the analysis of differentially expressed genes, and the triple-target experiments are more powerful than the corresponding two-color experiments. There is room for improving microarray experiments by simultaneously hybridizing more than two samples.
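
    For readers unfamiliar with lowess normalization, the sketch below shows the basic two-colour version of the idea (fit the intensity-dependent trend of the log-ratios and subtract it). The generalized multi-channel procedure proposed in the paper goes further; the simulated spot intensities and smoothing span here are assumptions.

        # MA-plot lowess normalization of one channel against a reference channel.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(0)
        n_spots = 5000
        true = rng.lognormal(mean=7, sigma=1.2, size=n_spots)
        ref = true * rng.lognormal(sigma=0.1, size=n_spots)
        ch = true * rng.lognormal(sigma=0.1, size=n_spots) * (1.0 + 0.05 * np.log(true))

        A = 0.5 * (np.log2(ch) + np.log2(ref))         # average log intensity
        M = np.log2(ch) - np.log2(ref)                 # log ratio with dye bias

        trend = lowess(M, A, frac=0.3, return_sorted=False)
        M_normalized = M - trend                       # bias-corrected log ratios
        print("mean |M| before/after:", np.abs(M).mean(), np.abs(M_normalized).mean())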

  1. Coastal Zone Color Scanner

    Science.gov (United States)

    Johnson, B.

    1988-01-01

    The Coastal Zone Color Scanner (CZCS) spacecraft ocean color instrument is capable of measuring and mapping global ocean surface chlorophyll concentration. It is a scanning radiometer with multiband capability. With new electronics and some mechanical and optical re-work, it can probably be made flight-worthy. Some additional components of a second flight model are also available. An engineering study and further tests are necessary to determine exactly what effort is required to properly prepare the instrument for spaceflight and the nature of the interfaces to prospective spacecraft. The CZCS provides operational instrument capability for monitoring of ocean productivity and currents. It could be a simple, low-cost alternative to developing new instruments for ocean color imaging. Researchers have determined that with global ocean color data they can: specify quantitatively the role of oceans in the global carbon cycle and other major biogeochemical cycles; determine the magnitude and variability of annual primary production by marine phytoplankton on a global scale; understand the fate of fluvial nutrients and their possible effect on carbon budgets; elucidate the coupling mechanism between upwelling and large-scale patterns in ocean basins; answer questions concerning the large-scale distribution and timing of spring blooms in the global ocean; acquire a better understanding of the processes associated with mixing along the edges of eddies, coastal currents, western boundary currents, etc.; and acquire global data on marine optical properties.

  2. Radiographic scanner apparatus

    International Nuclear Information System (INIS)

    Wake, R.H.

    1980-01-01

    The preferred embodiment of this invention includes a hardware system, or processing means, which operates faster than software. Moreover, the computer needed is less expensive and smaller. Radiographic scanner apparatus is described for measuring the intensity of radiation after passage through a planar region and for reconstructing a representation of the attenuation of radiation by the medium. There is a source which can be rotated, and detectors, the output from which forms a data line. The detectors are disposed opposite the planar region from the source to produce a succession of data lines corresponding to the succession of angular orientations of the source. There is a convolver means for convolving each of these data lines with a filter function, and a means of processing the convolved data lines to create the representation of the radiation attenuation in the planar region. There is also apparatus to generate a succession of data lines indicating radiation attenuation along a determinable path with convolver means. (U.K.)
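
    The "convolving each data line with a filter function" step mentioned in the claim is the filtering stage of filtered backprojection. The sketch below applies one common choice of filter function, a ramp filter implemented in the frequency domain, to a synthetic data line; it is an illustration of the principle, not the patented hardware convolver.

        # Ramp-filter one projection (data line) in the frequency domain.
        import numpy as np

        def ramp_filter(projection):
            freqs = np.fft.fftfreq(projection.size)
            return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

        data_line = np.exp(-((np.arange(256) - 128) / 20.0) ** 2)   # synthetic read-out
        filtered_line = ramp_filter(data_line)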

  3. The cobalt-60 container scanner

    International Nuclear Information System (INIS)

    Jigang, A.; Liye, Z.; Yisi, L.; Haifeng, W.; Zhifang, W.; Liqiang, W.; Yuanshi, Z.; Xincheng, X.; Furong, L.; Baozeng, G.; Chunfa, S.

    1997-01-01

    The Institute of Nuclear Energy Technology (INET) has successfully designed and constructed a container (cargo) scanner which uses a 100-300 Ci cobalt-60 radiation source. The following performances of the cobalt-60 container scanner have been achieved at INET: a) IQI (Image Quality Indicator) - 2.5% behind 100 mm of steel; b) CI (Contrast Indicator) - 0.7% behind 100 mm of steel; c) SP (Steel Penetration) - 240 mm of steel; d) Maximum Dose per Scanning - 0.02 mGy; e) Throughput - twenty 40-foot containers per hour. These performances are equal or similar to those of accelerator scanners. Besides these good inspection properties, the cobalt-60 scanner possesses several other features that compare favourably with accelerator scanners: a) low price - only one or two tenths that of an accelerator scanner; b) low radiation intensity - the radiation protection problem is much easier to solve and considerable money can be saved on the radiation shielding building; c) a much smaller area needed for installation and operation; d) simple operation and convenient maintenance; e) high reliability and stability. The cobalt-60 container (or cargo) scanner is well suited for border customs, seaports, airports, railway stations, etc. Because of the special features described above, it is suitable for wide application. Its good performance and low price give it excellent application prospects.

  4. Side scanner for supermarkets: a new scanner design standard

    Science.gov (United States)

    Cheng, Charles K.; Cheng, J. K.

    1996-09-01

    High-speed UPC bar code scanning has become a standard mode of data capture for supermarkets in the US, Europe, and Japan. The influence of the ergonomics community on the design of the scanner is evident. During the past decade, the ergonomic issues of cashiers at check-outs have led to occupational hand-wrist cumulative trauma disorders, in most cases causing carpal tunnel syndrome, a permanent hand injury. In this paper, the design of a side scanner to resolve these issues is discussed. The complex optical module and the sensor for the side scanner are described. The ergonomic advantages over the old counter-mounted vertical scanner have been experimentally demonstrated by an industry-funded study at an independent university.

  5. Medical hyperspectral imaging: a review

    Science.gov (United States)

    Lu, Guolan; Fei, Baowei

    2014-01-01

    Abstract. Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called hypercube, with two spatial dimensions and one spectral dimension. Spatially resolved spectral imaging obtained by HSI provides diagnostic information about the tissue physiology, morphology, and composition. This review paper presents an overview of the literature on medical hyperspectral imaging technology and its applications. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. PMID:24441941

  6. Nogle muligheder i scanner data

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn

    2000-01-01

    The article discusses a number of the opportunities for making marketing activities more effective that are available to both brand manufacturers and retailers through the use of information from scanner data.

  7. EXTRACTING ROOF PARAMETERS AND HEAT BRIDGES OVER THE CITY OF OLDENBURG FROM HYPERSPECTRAL, THERMAL, AND AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    L. Bannehr

    2012-09-01

    Remote sensing methods are used to obtain different kinds of information about the state of the environment. Within the cooperative research project HiReSens, funded by the German BMBF, a hyperspectral scanner, an airborne laser scanner, a thermal camera, and an RGB camera are employed on a small aircraft to determine roof material parameters and heat bridges of rooftops over the city of Oldenburg, Lower Saxony. HiReSens aims to combine various highly resolved geometrical data in order to obtain relevant evidence about the state of the city's buildings. Thermal data are used to obtain the energy distribution of single buildings. The use of hyperspectral data yields information about the material composition of roofs. From airborne laser scanning (ALS) data, digital surface models are inferred. These form the basis for locating the best orientations for solar panels on the city's buildings. The combination of the different data sets offers the opportunity to exploit synergies between systems that work differently. Central goals are the development of tools for detecting heat bridges by means of thermal data, spectral determination of roof parameters on the basis of hyperspectral data, and 3D capture of buildings from airborne laser scanner data. Collecting, analyzing and merging the data is not trivial, especially when resolution and accuracy in the range of a few decimetres are targeted. The results achieved need to be regarded as preliminary. Further investigations are still required to prove the accuracy in detail.

  8. Wire Scanner Motion Control Card

    CERN Document Server

    Forde, S E

    2006-01-01

    Scientists require a certain beam quality produced by the accelerator rings at CERN. The discovery potential of the LHC is given by the reachable luminosity at its interaction points. The luminosity is maximized by minimizing the beam size. Therefore an accurate beam size measurement is required for optimizing the luminosity. The wire scanner performs very accurate profile measurements, but as it cannot be used at full intensity in the LHC ring, it is used for calibrating other profile monitors. As the current wire scanner system, which is used in the present CERN accelerators, has not been built to the specifications required for the LHC, a new wire scanner motion control card is being designed as part of the LHC wire scanner project. The main functions of this card are to control the wire scanner motion and to acquire the position of the wire. To allow further upgrades at a later stage, easy updates of the firmware are required; hence the programmable features of FPGAs will be used for this purpose. The...

  9. DNA microarrays : a molecular cloning manual

    National Research Council Canada - National Science Library

    Sambrook, Joseph; Bowtell, David

    2002-01-01

    .... DNA Microarrays provides authoritative, detailed instruction on the design, construction, and applications of microarrays, as well as comprehensive descriptions of the software tools and strategies...

  10. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

    Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors, etc.). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable to, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.
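
    The core idea of predicting each band from previously decoded bands can be illustrated with a much simpler predictor than 3D-MBLP: a per-band least-squares gain/offset fit against the previous band, which leaves only small residuals to entropy-code. The synthetic cube and the predictor below are illustrative assumptions, not the paper's algorithm.

        # Predict each band from the previous one and inspect the residuals.
        import numpy as np

        rng = np.random.default_rng(0)
        rows, cols, bands = 64, 64, 8
        base = rng.normal(size=(rows, cols))
        cube = np.stack([base * (1 + 0.1 * b) + 0.05 * rng.normal(size=(rows, cols))
                         for b in range(bands)], axis=2)

        for b in range(1, bands):
            prev = cube[:, :, b - 1].ravel()
            cur = cube[:, :, b].ravel()
            gain, offset = np.polyfit(prev, cur, deg=1)       # inter-band predictor
            residual = cur - (gain * prev + offset)           # what would be coded
            print(f"band {b}: residual std {residual.std():.4f} vs raw std {cur.std():.4f}")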

  11. Radiographic scanners and shutter mechanisms in CT scanners

    International Nuclear Information System (INIS)

    Braden, A.B.; Kuwik, J.J.; Taylor, S.K.; Covic, J.

    1981-01-01

    This patent claim relates especially to the design of a shutter mechanism in a CT scanner having a rotatable source of radiation and a series of stationary radiation detectors coplanar with the path of the source and spaced about the axis of rotation of the source, and only partially encircling the path of the source. (U.K.)

  12. Calibration and assessment of channel-specific biases in microarray data with extended dynamical range.

    Science.gov (United States)

    Bengtsson, Henrik; Jönsson, Göran; Vallon-Christersson, Johan

    2004-11-12

    Non-linearities in observed log-ratios of gene expressions, also known as intensity dependent log-ratios, can often be accounted for by global biases in the two channels being compared. Any step in a microarray process may introduce such offsets and in this article we study the biases introduced by the microarray scanner and the image analysis software. By scanning the same spotted oligonucleotide microarray at different photomultiplier tube (PMT) gains, we have identified a channel-specific bias present in two-channel microarray data. For the scanners analyzed it was in the range of 15-25 (out of 65,535). The observed bias was very stable between subsequent scans of the same array although the PMT gain was greatly adjusted. This indicates that the bias does not originate from a step preceding the scanner detector parts. The bias varies slightly between arrays. When comparing estimates based on data from the same array, but from different scanners, we have found that different scanners introduce different amounts of bias. So do various image analysis methods. We propose a scanning protocol and a constrained affine model that allows us to identify and estimate the bias in each channel. Backward transformation removes the bias and brings the channels to the same scale. The result is that systematic effects such as intensity dependent log-ratios are removed, but also that signal densities become much more similar. The average scan, which has a larger dynamical range and greater signal-to-noise ratio than individual scans, can then be obtained. The study shows that microarray scanners may introduce a significant bias in each channel. Such biases have to be calibrated for, otherwise systematic effects such as intensity dependent log-ratios will be observed. The proposed scanning protocol and calibration method is simple to use and is useful for evaluating scanner biases or for obtaining calibrated measurements with extended dynamical range and better precision. The
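
    As a hedged illustration of why scans of the same array at different PMT gains reveal a channel offset, the sketch below assumes each scan is affine in the true signal (scan_k = bias + gain_k * signal); then scan 2 is affine in scan 1 and the bias equals intercept / (1 - slope). This toy estimator and the simulated numbers are assumptions, not the constrained multi-scan model proposed in the paper.

        # Estimate a channel-specific offset from two scans at different PMT gains.
        import numpy as np

        rng = np.random.default_rng(0)
        signal = rng.lognormal(mean=5, sigma=1.5, size=20000)   # true spot signals
        bias, gain1, gain2 = 20.0, 1.0, 0.6                     # invented values
        scan1 = bias + gain1 * signal + rng.normal(scale=5, size=signal.size)
        scan2 = bias + gain2 * signal + rng.normal(scale=5, size=signal.size)

        slope, intercept = np.polyfit(scan1, scan2, deg=1)
        estimated_bias = intercept / (1.0 - slope)
        print("estimated channel bias:", round(estimated_bias, 1))   # close to 20

        # Subtracting the bias brings both scans onto proportional scales.
        calibrated1 = scan1 - estimated_bias
        calibrated2 = (scan2 - estimated_bias) / slope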

  13. Hyperspectral remote sensing of vegetation

    Science.gov (United States)

    Thenkabail, Prasad S.; Lyon, John G.; Huete, Alfredo

    2011-01-01

    Hyperspectral narrow-band (or imaging spectroscopy) spectral data are fast emerging as practical solutions in modeling and mapping vegetation. Recent research has demonstrated the advances in and merit of hyperspectral data in a range of applications including quantifying agricultural crops, modeling forest canopy biochemical properties, detecting crop stress and disease, mapping leaf chlorophyll content as it influences crop production, identifying plants affected by contaminants such as arsenic, demonstrating sensitivity to plant nitrogen content, classifying vegetation species and type, characterizing wetlands, and mapping invasive species. The need for significant improvements in quantifying, modeling, and mapping plant chemical, physical, and water properties is more critical than ever before to reduce uncertainties in our understanding of the Earth and to better sustain it. There is also a need for a synthesis of the vast knowledge spread throughout the literature from more than 40 years of research.

  14. Hyperspectral discrimination of camouflaged target

    Science.gov (United States)

    Bárta, Vojtěch; Racek, František

    2017-10-01

    The article deals with the detection of camouflaged objects during the winter season. Winter camouflage is a marginal concern in most countries due to the short period of snow cover; in the geographical conditions of Central Europe, the winter period with snow lasts less than 1/12 of the year. The LWIR or SWIR spectral regions are commonly used for the detection of camouflaged objects, since in those regions differences in chemical composition and temperature are expressed as spectral features. However, exploitation of LWIR and SWIR devices is demanding due to their large dimensions and expense. Therefore, the article estimates the utility of the VIS region for detecting camouflaged objects against a snow background. The multispectral image output for various spectral filters is simulated, and hyperspectral indices are determined to detect the camouflaged objects in winter. The multispectral image simulation is based on a hyperspectral datacube obtained in real conditions.
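
    Hyperspectral indices of the kind mentioned above are typically band ratios or normalized differences computed per pixel. The sketch below evaluates a generic normalized-difference index over a synthetic reflectance cube; the two band positions and the detection threshold are assumptions, not the bands selected in the study.

        # Per-pixel normalized-difference index between two chosen bands.
        import numpy as np

        def normalized_difference(cube, band_a, band_b):
            """cube: (rows, cols, bands) reflectance; returns a per-pixel index."""
            a = cube[:, :, band_a].astype(float)
            b = cube[:, :, band_b].astype(float)
            return (a - b) / (a + b + 1e-9)

        cube = np.random.default_rng(0).uniform(0.1, 0.9, size=(100, 100, 60))
        index = normalized_difference(cube, band_a=10, band_b=45)
        target_mask = index > 0.2        # assumed threshold separating target from snow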

  15. Compensation strategies for PET scanners with unconventional scanner geometry

    CERN Document Server

    Gundlich, B; Oehler, M

    2006-01-01

    The small animal PET scanner ClearPET®Neuro, developed at the Forschungszentrum Julich GmbH in cooperation with the Crystal Clear Collaboration (CERN), represents scanners with an unconventional geometry: due to axial and transaxial detector gaps, ClearPET®Neuro delivers inhomogeneous sinograms with missing data. When filtered backprojection (FBP) or Fourier rebinning (FORE) are applied, strong geometrical artifacts appear in the images. In this contribution we present a method that takes the geometrical sensitivity into account and converts the measured sinograms into homogeneous and complete data. By this means, artifact-free images are achieved using FBP or FORE. Besides an advantageous measurement setup that reduces inhomogeneities and data gaps in the sinograms, a modification of the measured sinograms is necessary. This modification includes two steps: a geometrical normalization and corrections for missing data. To normalize the measured sinograms, computed sinograms are used that describe the geometric...
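
    A minimal sketch of the described two-step correction, dividing the measured sinogram by a computed geometric-sensitivity sinogram and then filling the detector gaps, is given below. The array sizes, gap location and linear gap interpolation are illustrative assumptions, not the ClearPET processing chain.

        # Geometric normalization of a sinogram followed by simple gap filling.
        import numpy as np

        rng = np.random.default_rng(0)
        angles, bins = 96, 128
        sensitivity = 0.5 + 0.5 * rng.random((angles, bins))     # computed sensitivity
        measured = sensitivity * rng.poisson(50, size=(angles, bins))
        measured[:, 60:68] = 0                                   # simulated detector gap

        normalized = np.where(sensitivity > 0, measured / sensitivity, 0.0)

        gap = np.arange(60, 68)
        for a in range(angles):                                  # interpolate across the gap
            normalized[a, gap] = np.interp(gap, [59, 68], normalized[a, [59, 68]])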

  16. Monte Carlo dose calibration in CT scanner

    International Nuclear Information System (INIS)

    Yadav, Poonam; Ramasubramanian, V.; Subbaiah, K.V.; Thayalan, K.

    2008-01-01

    The Computed Tomography (CT) scanner is a high-radiation imaging modality compared with radiography. The dose from a CT examination can vary greatly depending on the particular CT scanner used, the area of the body examined, and the operating parameters of the scan. CT is a major contributor to collective effective dose in diagnostic radiology. Apart from the clinical benefits, the widespread use of multislice scanners is increasing the radiation level to patients in comparison with conventional CT scanners. So, it becomes necessary to increase awareness about the CT scanner. (author)

  17. Polyadenylation state microarray (PASTA) analysis.

    Science.gov (United States)

    Beilharz, Traude H; Preiss, Thomas

    2011-01-01

    Nearly all eukaryotic mRNAs terminate in a poly(A) tail that serves important roles in mRNA utilization. In the cytoplasm, the poly(A) tail promotes both mRNA stability and translation, and these functions are frequently regulated through changes in tail length. To identify the scope of poly(A) tail length control in a transcriptome, we developed the polyadenylation state microarray (PASTA) method. It involves the purification of mRNA based on poly(A) tail length using thermal elution from poly(U) sepharose, followed by microarray analysis of the resulting fractions. In this chapter we detail our PASTA approach and describe some methods for bulk and mRNA-specific poly(A) tail length measurements of use to monitor the procedure and independently verify the microarray data.

  18. HYPERSPECTRAL REMOTE SENSING WITH THE UAS "STUTTGARTER ADLER" – CHALLENGES, EXPERIENCES AND FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    A. Buettner

    2013-08-01

    Full Text Available The UAS "Stuttgarter Adler" was designed as a flexible and cost-effective remote-sensing platform for acquisition of high quality environmental data. Different missions for precision agriculture applications and BRDF-research have been successfully performed with a multispectral camera system and a spectrometer as main payloads. Currently, an imaging spectrometer is being integrated into the UAS as a new payload, which enables the recording of hyperspectral data in more than 200 spectral bands in the visible and near infrared spectrum. The recording principle of the hyperspectral instrument is based on a line scanner. Each line is stored as a matrix image with spectral information in one axis and spatial information in the other axis of the image. Besides a detailed specification of the system concept and instrument design, the calibration procedure of the hyperspectral sensor system is discussed and results of the laboratory calibration are presented. The complete processing chain of measurement data is described and first results of measurement flights over agricultural test sites are presented.
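
    The line-scanner recording principle described in this record (one matrix image per along-track position, with spectral information on one axis and spatial information on the other) maps directly onto a simple cube-assembly step. The Python sketch below illustrates the idea with an assumed array layout; it is not the instrument's actual data format or processing chain.

        import numpy as np

        def assemble_hypercube(frames):
            # Each frame: (across-track pixels, spectral bands); consecutive
            # frames correspond to successive along-track positions.
            # Resulting axes: (along-track line, across-track pixel, band).
            return np.stack(frames, axis=0)

        # Toy example: 100 lines, 640 across-track pixels, 200 bands.
        frames = [np.random.rand(640, 200) for _ in range(100)]
        cube = assemble_hypercube(frames)
        print(cube.shape)  # (100, 640, 200)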

  19. A Lateral Flow Protein Microarray for Rapid and Sensitive Antibody Assays

    Directory of Open Access Journals (Sweden)

    Helene Andersson-Svahn

    2011-11-01

    Full Text Available Protein microarrays are useful tools for highly multiplexed determination of presence or levels of clinically relevant biomarkers in human tissues and biofluids. However, such tools have thus far been restricted to laboratory environments. Here, we present a novel 384-plexed, easy-to-use lateral flow protein microarray device capable of sensitive (<30 ng/mL) determination of antigen-specific antibodies in ten minutes of total assay time. Results were developed with gold nanobeads and could be recorded by a cell-phone camera or tabletop scanner. Excellent accuracy with an area under curve (AUC) of 98% was achieved in comparison with an established glass microarray assay for 26 antigen-specific antibodies. We propose that the presented framework could find use in convenient and cost-efficient quality control of antibody production, as well as in providing a platform for multiplexed affinity-based assays in low-resource or mobile settings.

  20. Geometric calibration between PET scanner and structured light scanner

    DEFF Research Database (Denmark)

    Kjer, Hans Martin; Olesen, Oline Vinter; Paulsen, Rasmus Reinhold

    2011-01-01

    Head movements degrade the image quality of high resolution Positron Emission Tomography (PET) brain studies through blurring and artifacts. Many image reconstruction methods allow for motion correction if the head position is tracked continuously during the study. Our method for motion tracking...... is a structured light scanner placed just above the patient tunnel on the High Resolution Research Tomograph (HRRT, Siemens). It continuously registers point clouds of a part of the patient's face. The relative motion is estimated as the rigid transformation between frames. A geometric calibration between......

  1. Multi- and hyperspectral geologic remote sensing: A review

    Science.gov (United States)

    van der Meer, Freek D.; van der Werff, Harald M. A.; van Ruitenbeek, Frank J. A.; Hecker, Chris A.; Bakker, Wim H.; Noomen, Marleen F.; van der Meijde, Mark; Carranza, E. John M.; Smeth, J. Boudewijn de; Woldai, Tsehaie

    2012-02-01

    Geologists have used remote sensing data since the advent of the technology for regional mapping, structural interpretation and to aid in prospecting for ores and hydrocarbons. This paper provides a review of multispectral and hyperspectral remote sensing data, products and applications in geology. During the early days of Landsat Multispectral Scanner and Thematic Mapper, geologists developed band ratio techniques and selective principal component analysis to produce iron oxide and hydroxyl images that could be related to hydrothermal alteration. The advent of the Advanced Spaceborne Thermal Emission and Reflectance Radiometer (ASTER), with six channels in the shortwave infrared and five channels in the thermal region, made it possible to produce qualitative surface mineral maps of clay minerals (kaolinite, illite), sulfate minerals (alunite), carbonate minerals (calcite, dolomite), iron oxides (hematite, goethite), and silica (quartz), which in turn made it possible to map alteration facies (propylitic, argillic, etc.). The step toward quantitative and validated (subpixel) surface mineralogic mapping was made with the advent of high spectral resolution hyperspectral remote sensing. This led to a wealth of techniques to match image pixel spectra to library and field spectra and to unravel mixed pixel spectra to pure endmember spectra to derive subpixel surface compositional information. These products have found their way to the mining industry and are to a lesser extent taken up by the oil and gas sector. The main threat for geologic remote sensing lies in the lack of (satellite) data continuity. There is however a unique opportunity to develop standardized protocols leading to validated and reproducible products from satellite remote sensing for the geology community. By focusing on geologic mapping products such as mineral and lithologic maps, geochemistry, P-T paths, fluid pathways etc. the geologic remote sensing community can bridge the gap with the geosciences community. Increasingly
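
    The band-ratio products mentioned above (iron oxide and hydroxyl images from Landsat-era data) reduce to simple per-pixel ratios. The sketch below shows the classic form in Python; the specific band pairings are the commonly cited TM-style choices and are given here only as an illustrative assumption, not as the review's prescription.

        import numpy as np

        def band_ratio(numerator, denominator, eps=1e-6):
            # Per-pixel ratio with a small epsilon to avoid division by zero.
            return numerator / (denominator + eps)

        # Commonly cited TM-style ratios (band pairing assumed for illustration):
        # iron oxide ~ red/blue (TM3/TM1); hydroxyl-bearing ~ SWIR1/SWIR2 (TM5/TM7).
        red, blue, swir1, swir2 = np.random.rand(4, 512, 512)
        iron_oxide_image = band_ratio(red, blue)
        hydroxyl_image = band_ratio(swir1, swir2)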

  2. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces a common hyperspectral image database designed with a demand-oriented database design method (CHIDB), which brings ground-based spectra, standardized hyperspectral cubes and spectral analysis together to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were incorporated into CHIDB to make it more suitable for use in agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and some advanced spectral image data processing technologies are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  3. Complete-arch accuracy of intraoral scanners.

    Science.gov (United States)

    Treesh, Joshua C; Liacouras, Peter C; Taft, Robert M; Brooks, Daniel I; Raiciulescu, Sorana; Ellert, Daniel O; Grant, Gerald T; Ye, Ling

    2018-04-30

    Intraoral scanners have shown varied results in complete-arch applications. The purpose of this in vitro study was to evaluate the complete-arch accuracy of 4 intraoral scanners based on trueness and precision measurements compared with a known reference (trueness) and with each other (precision). Four intraoral scanners were evaluated: CEREC Bluecam, CEREC Omnicam, TRIOS Color, and Carestream CS 3500. A complete-arch reference cast was created and printed using a 3-dimensional dental cast printer with photopolymer resin. The reference cast was digitized using a laboratory-based white light 3-dimensional scanner. The printed reference cast was scanned 10 times with each intraoral scanner. The digital standard tessellation language (STL) files from each scanner were then registered to the reference file and compared for differences in trueness and precision using 3-dimensional modeling software. Additionally, scanning time was recorded for each scan performed. The Wilcoxon signed rank, Kruskal-Wallis, and Dunn tests were used to detect differences for trueness, precision, and scanning time (α=.05). Carestream CS 3500 had the lowest overall trueness and precision compared with Bluecam and TRIOS Color. The fourth scanner, Omnicam, had intermediate trueness and precision. All of the scanners tended to underestimate the size of the reference file, with the exception of the Carestream CS 3500, which was more variable. Based on visual inspection of the color rendering of signed differences, the greatest amount of error tended to be in the posterior aspects of the arch, with local errors exceeding 100 μm for all scans. The single capture scanner Carestream CS 3500 had the overall longest scan times and was significantly slower than the continuous capture scanners TRIOS Color and Omnicam. Significant differences in both trueness and precision were found among the scanners. Scan times of the continuous capture scanners were faster than the single capture scanners

  4. Coastal Zone Color Scanner studies

    Science.gov (United States)

    Elrod, J.

    1988-01-01

    Activities over the past year have included cooperative work with a summer faculty fellow using the Coastal Zone Color Scanner (CZCS) imagery to study the effects of gradients in trophic resources on coral reefs in the Caribbean. Other research included characterization of ocean radiances specific to an acid-waste plume. Other activities include involvement in the quality control of imagery produced in the processing of the global CZCS data set, the collection of various other data global sets, and the subsequent data comparison and analysis.

  5. Hyperspectral forest monitoring and imaging implications

    Science.gov (United States)

    Goodenough, David G.; Bannon, David

    2014-05-01

    The forest biome is vital to the health of the earth. Canada and the United States have a combined forest area of 4.68 Mkm². The monitoring of these forest resources has become increasingly complex. Hyperspectral remote sensing can provide a wealth of improved information products to land managers to make more informed decisions. Research in this area has demonstrated that hyperspectral remote sensing can be used to create more accurate products for forest inventory (major forest species), forest health, foliar biochemistry, biomass, and aboveground carbon. Operationally there is a requirement for a mix of airborne and satellite approaches. This paper surveys some methods and results in hyperspectral sensing of forests and discusses the implications for space initiatives with hyperspectral sensing.

  6. Spherical stochastic neighbor embedding of hyperspectral data

    CSIR Research Space (South Africa)

    Lunga, D

    2012-07-01

    Full Text Available and manifold learning in Euclidean spaces, very few attempts have focused on non-Euclidean spaces. Here, we propose a novel approach that embeds hyperspectral data, transformed into bilateral probability similarities, onto a nonlinear unit norm coordinate...

  7. Target Detection Using an AOTF Hyperspectral Imager

    Science.gov (United States)

    Cheng, L-J.; Mahoney, J.; Reyes, F.; Suiter, H.

    1994-01-01

    This paper reports results of a recent field experiment using a prototype system to evaluate the acousto-optic tunable filter polarimetric hyperspectral imaging technology for target detection applications.

  8. DETERMINING SPECTRAL REFLECTANCE COEFFICIENTS FROM HYPERSPECTRAL IMAGES OBTAINED FROM LOW ALTITUDES

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    Full Text Available Remote sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all mentioned areas of interest different remote sensing and image processing techniques, such as: image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements however can be very time consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor

  9. Determining Spectral Reflectance Coefficients from Hyperspectral Images Obtained from Low Altitudes

    Science.gov (United States)

    Walczykowski, P.; Jenerowicz, A.; Orych, A.; Siok, K.

    2016-06-01

    Remote sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all mentioned areas of interest different remote sensing and image processing techniques, such as: image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements however can be very time consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor mounted on an
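
    One generic way to turn at-sensor radiance from such a sensor into spectral reflectance coefficients is the empirical line method, which fits a per-band gain and offset against in-scene reference panels of known reflectance. The Python sketch below illustrates that generic approach under the assumption of at least two panels; it is not necessarily the post-processing methodology developed by the authors.

        import numpy as np

        def empirical_line(radiance_cube, panel_radiance, panel_reflectance):
            # radiance_cube:     (rows, cols, bands) at-sensor radiance
            # panel_radiance:    (n_panels, bands) mean radiance over each panel
            # panel_reflectance: (n_panels, bands) known panel reflectance
            # Requires at least two reference panels per band.
            reflectance = np.empty(radiance_cube.shape, dtype=float)
            for b in range(radiance_cube.shape[-1]):
                gain, offset = np.polyfit(panel_radiance[:, b],
                                          panel_reflectance[:, b], deg=1)
                reflectance[..., b] = gain * radiance_cube[..., b] + offset
            return np.clip(reflectance, 0.0, 1.0)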

  10. Gamma scanner conceptual design report

    International Nuclear Information System (INIS)

    Swinth, K.L.

    1979-11-01

    The Fuels and Materials Examination Facility (FMEF) will include several stations for the nondestructive examination of irradiated fuels. One of these stations will be the gamma scanner which will be employed to detect gamma radiation from the irradiated fuel pins. The conceptual design of the gamma scan station is described. The gamma scanner will use a Standard Exam Stage (SES) as a positioner and transport mechanism for the fuel pins which it will obtain from a magazine. A pin guide mechanism mounted on the face of the collimator will assure that the fuel pins remain in front of the collimator during scanning. The collimator has remotely adjustable tungsten slits and can be manually rotated to align the slit at various angles. A shielded detector cart located in the operating corridor holds an intrinsic germanium detector and associated sodium-iodide anticoincidence detector. The electronics associated with the counting system consist of standard NIM modules to process the detector signals and a stand-alone multichannel analyzer (MCA) for counting data accumulation. Data from the MCA are bussed to the station computer for analysis and storage on magnetic tape. The station computer controls the collimator, the MCA, a source positioner and the SES through CAMAC-based interface hardware. Most of the electronic hardware is commercially available but some interfaces will require development. Conceptual drawings are included for mechanical hardware that must be designed and fabricated

  11. Direct calibration of PICKY-designed microarrays

    Directory of Open Access Journals (Sweden)

    Ronald Pamela C

    2009-10-01

    Full Text Available Abstract Background Few microarrays have been quantitatively calibrated to identify optimal hybridization conditions because it is difficult to precisely determine the hybridization characteristics of a microarray using biologically variable cDNA samples. Results Using synthesized samples with known concentrations of specific oligonucleotides, a series of microarray experiments was conducted to evaluate microarrays designed by PICKY, an oligo microarray design software tool, and to test a direct microarray calibration method based on the PICKY-predicted, thermodynamically closest nontarget information. The complete set of microarray experiment results is archived in the GEO database with series accession number GSE14717. Additional data files and Perl programs described in this paper can be obtained from the website http://www.complex.iastate.edu under the PICKY Download area. Conclusion PICKY-designed microarray probes are highly reliable over a wide range of hybridization temperatures and sample concentrations. The microarray calibration method reported here allows researchers to experimentally optimize their hybridization conditions. Because this method is straightforward, uses existing microarrays and relatively inexpensive synthesized samples, it can be used by any lab that uses microarrays designed by PICKY. In addition, other microarrays can be reanalyzed by PICKY to obtain the thermodynamically closest nontarget information for calibration.

  12. Current Knowledge on Microarray Technology - An Overview

    African Journals Online (AJOL)

    Erah

    This paper reviews basics and updates of each microarray technology and serves to .... through protein microarrays. Protein microarrays also known as protein chips are nothing but grids that ... conditioned media, patient sera, plasma and urine. Clontech ... based antibody arrays) is similar to membrane-based antibody ...

  13. Diagnostic and analytical applications of protein microarrays

    DEFF Research Database (Denmark)

    Dufva, Hans Martin; Christensen, C.B.V.

    2005-01-01

    DNA microarrays have changed the field of biomedical sciences over the past 10 years. For several reasons, antibody and other protein microarrays have not developed at the same rate. However, protein and antibody arrays have emerged as a powerful tool to complement DNA microarrays during the post...

  14. Experience with a fuel rod enrichment scanner

    International Nuclear Information System (INIS)

    Kubik, R.N.; Pettus, W.G.

    1975-01-01

    This enrichment scanner views all fuel rods produced at B and W's Commercial Nuclear Fuel Plant. The scanner design is derived from the PAPAS System reported by R. A. Forster, H. D. Menlove, and their associates at Los Alamos. The spatial resolution of the system and smoothing of the data are discussed in detail. The cost-effectiveness of multi-detector versus single detector scanners of this general design is also discussed

  15. Long-Range WindScanner System

    DEFF Research Database (Denmark)

    Vasiljevic, Nikola; Lea, Guillaume; Courtney, Michael

    2016-01-01

    The technical aspects of a multi-Doppler LiDAR instrument, the long-range WindScanner system, are presented, accompanied by an overview of the results from several field campaigns. The long-range WindScanner system consists of three spatially-separated, scanning coherent Doppler LiDARs and a remote... The long-range WindScanner system measures the wind field by emitting and directing three laser beams to intersect, and then scanning the beam intersection over a region of interest. The long-range WindScanner system was developed to tackle the need for high-quality observations of wind fields on scales of modern wind turbine...

  16. Robotic Prostate Biopsy in Closed MRI Scanner

    National Research Council Canada - National Science Library

    Fischer, Gregory

    2008-01-01

    .... This work enables prostate brachytherapy and biopsy procedures in standard high-field diagnostic MRI scanners through the development of a robotic needle placement device specifically designed...

  17. Distributed Parallel Endmember Extraction of Hyperspectral Data Based on Spark

    Directory of Open Access Journals (Sweden)

    Zebin Wu

    2016-01-01

    Full Text Available Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, the development of acceleration techniques for massive hyperspectral image analysis approaches is a very important challenge. Cloud computing offers many possibilities of distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that utilizes cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of technologies including the MapReduce programming model, the Hadoop Distributed File System (HDFS), and Apache Spark to realize a distributed parallel implementation of hyperspectral endmember extraction, which significantly accelerates the computation of hyperspectral processing and provides high-throughput access to large hyperspectral data. The experimental results, which are obtained by extracting endmembers of hyperspectral datasets on a cloud computing platform built on a cluster, demonstrate the effectiveness and computational efficiency of the proposed method.
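
    The core of iterative error analysis is easy to state even without the distributed machinery: unmix every pixel with the current endmember set, and take the worst-reconstructed pixel as the next endmember. The single-node Python sketch below shows only that core logic; the paper's contribution, distributing the per-pixel error computation with Spark, MapReduce and HDFS, is omitted here.

        import numpy as np

        def iterative_error_analysis(pixels, n_endmembers):
            # pixels: (n_pixels, n_bands) spectra; returns (n_endmembers, n_bands).
            endmembers = [pixels.mean(axis=0)]          # seed with the mean spectrum
            for _ in range(n_endmembers):
                E = np.stack(endmembers, axis=1)        # (bands, k)
                # Unconstrained least-squares abundances for every pixel.
                abundances, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
                residual = pixels - (E @ abundances).T
                errors = np.einsum('ij,ij->i', residual, residual)
                endmembers.append(pixels[np.argmax(errors)])  # worst-fit pixel
            return np.stack(endmembers[1:], axis=0)     # drop the mean seed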

  18. Manifold learning based feature extraction for classification of hyperspectral data

    CSIR Research Space (South Africa)

    Lunga, D

    2014-01-01

    Full Text Available in analysis of hyperspectral imagery. High spectral resolution and the typically continuous bands of hyperspectral image (HSI) data enable discrimination between spectrally similar targets of interest, provide capability to estimate within pixel abundances...

  19. Portable optical fiber probe-based spectroscopic scanner for rapid cancer diagnosis: a new tool for intraoperative margin assessment.

    Directory of Open Access Journals (Sweden)

    Niyom Lue

    Full Text Available There continues to be a significant clinical need for rapid and reliable intraoperative margin assessment during cancer surgery. Here we describe a portable, quantitative, optical fiber probe-based, spectroscopic tissue scanner designed for intraoperative diagnostic imaging of surgical margins, which we tested in a proof of concept study in human tissue for breast cancer diagnosis. The tissue scanner combines both diffuse reflectance spectroscopy (DRS) and intrinsic fluorescence spectroscopy (IFS), and has hyperspectral imaging capability, acquiring full DRS and IFS spectra for each scanned image pixel. Modeling of the DRS and IFS spectra yields quantitative parameters that reflect the metabolic, biochemical and morphological state of tissue, which are translated into disease diagnosis. The tissue scanner has high spatial resolution (0.25 mm) over a wide field of view (10 cm × 10 cm), and both high spectral resolution (2 nm) and high spectral contrast, readily distinguishing tissues with widely varying optical properties (bone, skeletal muscle, fat and connective tissue). Tissue-simulating phantom experiments confirm that the tissue scanner can quantitatively measure spectral parameters, such as hemoglobin concentration, in a physiologically relevant range with a high degree of accuracy (<5% error). Finally, studies using human breast tissues showed that the tissue scanner can detect small foci of breast cancer in a background of normal breast tissue. This tissue scanner is simpler in design, images a larger field of view at higher resolution and provides a more physically meaningful tissue diagnosis than other spectroscopic imaging systems currently reported in the literature. We believe this spectroscopic tissue scanner can provide real-time, comprehensive diagnostic imaging of surgical margins in excised tissues, overcoming the sampling limitation in current histopathology margin assessment. As such it is a significant step in the development of a

  20. Multi- and hyperspectral scene modeling

    Science.gov (United States)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer, POV-Ray (Persistence Of Vision Raytracer), to render multi- and hyperspectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
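
    Rendering a hyperspectral scene with POV-Ray amounts to rendering the same geometry once per band with band-specific reflectance values. The Python driver below sketches that loop by writing a per-band scene file and invoking the povray command line; the scene geometry, the reflectance values and the availability of povray on the PATH are all illustrative assumptions, not the paper's actual scripts.

        import subprocess

        SCENE_TEMPLATE = """
        global_settings { radiosity {} }  // default radiosity for multiple reflections
        camera { location <0, 2, -5> look_at <0, 0, 0> }
        light_source { <10, 10, -10> color rgb 1 }
        // Band reflectances are injected as grey diffuse colours so the same
        // geometry is rendered once for every spectral band.
        plane { y, 0 pigment { color rgb %(ground)f } finish { diffuse 1 } }
        box { <-1, 0, -1>, <1, 2, 1> pigment { color rgb %(target)f } finish { diffuse 1 } }
        """

        # Hypothetical per-band reflectances: (wavelength in um, ground, target).
        bands = [(0.45, 0.08, 0.05), (0.55, 0.10, 0.07), (0.85, 0.45, 0.60)]

        for wavelength, ground, target in bands:
            name = "scene_%dnm" % int(wavelength * 1000)
            with open(name + ".pov", "w") as f:
                f.write(SCENE_TEMPLATE % {"ground": ground, "target": target})
            # One grey-scale image per band; assumes 'povray' is installed.
            subprocess.run(["povray", "+I" + name + ".pov", "+O" + name + ".png",
                            "+W256", "+H256"], check=True)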

  1. MiMiR: a comprehensive solution for storage, annotation and exchange of microarray data

    Directory of Open Access Journals (Sweden)

    Rahman Fatimah

    2005-11-01

    Full Text Available Abstract Background The generation of large amounts of microarray data presents challenges for data collection, annotation, exchange and analysis. Although there are now widely accepted formats, minimum standards for data content and ontologies for microarray data, only a few groups are using them together to build and populate large-scale databases. Structured environments for data management are crucial for making full use of these data. Description The MiMiR database provides a comprehensive infrastructure for microarray data annotation, storage and exchange and is based on the MAGE format. MiMiR is MIAME-supportive, customised for use with data generated on the Affymetrix platform and includes a tool for data annotation using ontologies. Detailed information on the experiment, methods, reagents and signal intensity data can be captured in a systematic format. Report screens permit the user to query the database, to view annotation on individual experiments and to obtain summary statistics. MiMiR has tools for automatic upload of the data from the microarray scanner and export to databases using MAGE-ML. Conclusion MiMiR facilitates microarray data management, annotation and exchange, in line with international guidelines. The database is valuable for underpinning research activities and promotes a systematic approach to data handling. Copies of MiMiR are freely available to academic groups under licence.

  2. Multi-parameter CAMAC compatible ADC scanner

    Energy Technology Data Exchange (ETDEWEB)

    Midttun, G J; Ingebretsen, F [Oslo Univ. (Norway). Fysisk Inst.; Johnsen, P J [Norsk Data A.S., Box 163, Oekern, Oslo 5, Norway

    1979-02-15

    A fast ADC scanner for multi-parameter nuclear physics experiments is described. The scanner is based on a standard CAMAC crate, and data from several different experiments can be handled simultaneously through a direct memory access (DMA) channel. The implementation on a PDP-7 computer is outlined.

  3. 3D whole body scanners revisited

    NARCIS (Netherlands)

    Daanen, H.A.M.; Haar, F.B. ter

    2013-01-01

    An overview of whole body scanners in 1998 (H.A.M. Daanen, G.J. Van De Water. Whole body scanners, Displays 19 (1998) 111-120) shortly after they emerged on the market revealed that the systems were bulky, slow, expensive and low in resolution. This update shows that new developments in sensing and

  4. Hyperspectral imaging of colonic polyps in vivo (Conference Presentation)

    Science.gov (United States)

    Clancy, Neil T.; Elson, Daniel S.; Teare, Julian

    2017-02-01

    Standard endoscopic tools restrict clinicians to making subjective visual assessments of lesions detected in the bowel, with classification results depending strongly on experience level and training. Histological examination of resected tissue remains the diagnostic gold standard, meaning that all detected lesions are routinely removed. This subjects the patient to risk of polypectomy-related injury, and places significant workload and economic burdens on the hospital. An objective endoscopic classification method would allow hyperplastic polyps, with no malignant potential, to be left in situ, or low grade adenomas to be resected and discarded without histology. A miniature multimodal flexible endoscope is proposed to obtain hyperspectral reflectance and dual excitation autofluorescence information from polyps in vivo. This is placed inside the working channel of a conventional colonoscope, with the external scanning and detection optics on a bedside trolley. A blue and violet laser diode pair excite endogenous fluorophores in the respiration chain, while the colonoscope's xenon light source provides broadband white light for diffuse reflectance measurements. A push-broom HSI scanner collects the hypercube. System characterisation experiments are presented, defining resolution limits as well as acquisition settings for optimal spectral, spatial and temporal performance. The first in vivo results in human subjects are presented, demonstrating the clinical utility of the device. The optical properties (reflectance and autofluorescence) of imaged polyps are quantified and compared to the histologically-confirmed tissue type as well as the clinician's visual assessment. Further clinical studies will allow construction of a full robust training dataset for development of classification schemes.

  5. Design issues in toxicogenomics using DNA microarray experiment

    International Nuclear Information System (INIS)

    Lee, Kyoung-Mu; Kim, Ju-Han; Kang, Daehee

    2005-01-01

    The methods of toxicogenomics might be classified into omics studies (e.g., genomics, proteomics, and metabolomics) and population studies focusing on risk assessment and gene-environment interaction. In omics studies, the microarray is the most popular approach. Up to 20,000 genes falling into several categories (e.g., xenobiotic metabolism, cell cycle control, DNA repair, etc.) can be selected according to an a priori hypothesis. The appropriate type of samples and species should be selected in advance. Multiple doses and varied exposure durations are suggested to identify those genes clearly linked to toxic response. Microarray experiments can be affected by numerous nuisance variables including experimental designs, sample extraction, type of scanners, etc. The number of slides might be determined from the magnitude and variance of expression change, false-positive rate, and desired power. Instead, pooling samples is an alternative. Online databases on chemicals with known exposure-disease outcomes and genetic information can aid the interpretation of the normalized results. Gene function can be inferred from microarray data analyzed by bioinformatics methods such as cluster analysis. The population study often adopts a hospital-based or nested case-control design. Biases in subject selection and exposure assessment should be minimized, and confounding bias should also be controlled for in stratified or multiple regression analysis. Optimal sample sizes are dependent on the statistical test for gene-to-environment or gene-to-gene interaction. The design issues addressed in this mini-review are crucial in conducting a toxicogenomics study. In addition, an integrative approach combining exposure assessment, epidemiology, and clinical trials is required.

  6. Hyperspectral remote sensing of plant pigments.

    Science.gov (United States)

    Blackburn, George Alan

    2007-01-01

    The dynamics of pigment concentrations are diagnostic of a range of plant physiological properties and processes. This paper appraises the developing technologies and analytical methods for quantifying pigments non-destructively and repeatedly across a range of spatial scales using hyperspectral remote sensing. Progress in deriving predictive relationships between various characteristics and transforms of hyperspectral reflectance data is evaluated and the roles of leaf and canopy radiative transfer models are reviewed. Requirements are identified for more extensive intercomparisons of different approaches and for further work on the strategies for interpreting canopy scale data. The paper examines the prospects for extending research to the wider range of pigments in addition to chlorophyll, testing emerging methods of hyperspectral analysis and exploring the fusion of hyperspectral and LIDAR remote sensing. In spite of these opportunities for further development and the refinement of techniques, current evidence of an expanding range of applications in the ecophysiological, environmental, agricultural, and forestry sciences highlights the growing value of hyperspectral remote sensing of plant pigments.

  7. Design and Test of Portable Hyperspectral Imaging Spectrometer

    Directory of Open Access Journals (Sweden)

    Chunbo Zou

    2017-01-01

    Full Text Available We design and implement a portable hyperspectral imaging spectrometer, which has high spectral resolution, high spatial resolution, small volume, and low weight. The flight test has been conducted, and the hyperspectral images are acquired successfully. To achieve high performance, small volume, and regular appearance, an improved Dyson structure is designed and used in the hyperspectral imaging spectrometer. The hyperspectral imaging spectrometer is suitable for small platforms such as CubeSat and UAV (unmanned aerial vehicle), and it is also convenient to use for hyperspectral image acquisition in the laboratory and the field.

  8. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, having area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology for compressing remotely sensed hyperspectral images is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in the simulations performed corroborate the benefits of the proposed methodology.
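
    The two on-board degradation steps described above are straightforward array operations. The Python sketch below illustrates them with a block-averaging spatial degrade and a matrix of spectral response functions; the response matrix and array sizes are illustrative assumptions, and the (more complex) ground-side fusion step is not shown.

        import numpy as np

        def spatial_degrade(cube, factor):
            # Average spatial blocks to obtain the low-resolution hyperspectral image.
            r, c, b = cube.shape
            r2, c2 = r // factor, c // factor
            trimmed = cube[:r2 * factor, :c2 * factor, :]
            return trimmed.reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))

        def spectral_degrade(cube, srf):
            # Apply spectral response functions (bands x ms_bands) to obtain
            # the high-resolution multispectral image.
            return cube @ srf

        cube = np.random.rand(128, 128, 200)     # toy hyperspectral cube
        srf = np.random.rand(200, 4)
        srf /= srf.sum(axis=0)                   # normalised hypothetical responses
        hs_low = spatial_degrade(cube, 4)        # transmitted image 1
        ms_high = spectral_degrade(cube, srf)    # transmitted image 2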

  9. Hyperspectral imaging and its applications

    Science.gov (United States)

    Serranti, S.; Bonifazi, G.

    2016-04-01

    Hyperspectral imaging (HSI) is an emerging technique that combines the imaging properties of a digital camera with the spectroscopic properties of a spectrometer able to detect the spectral attributes of each pixel in an image. Thanks to these characteristics, HSI makes it possible to qualitatively and quantitatively evaluate the effects of the interactions of light with organic and/or inorganic materials. The results of this interaction are usually displayed as a spectral signature characterized by a sequence of energy values, in a pre-defined wavelength interval, for each of the investigated/collected wavelengths. Following this approach, it is thus possible to collect, in a fast and reliable way, spectral information that is strictly linked to the chemical-physical characteristics of the investigated materials and/or products. Considering that in a hyperspectral image the spectrum of each pixel can be analyzed, HSI can be considered one of the best nondestructive technologies for accurate and detailed information extraction. HSI can be applied in different wavelength regions, the most common being the visible (VIS: 400-700 nm), the near infrared (NIR: 1000-1700 nm) and the short wave infrared (SWIR: 1000-2500 nm). It can be applied for inspections from micro- to macro-scale, up to remote sensing. HSI produces a large amount of information due to the great number of continuously collected spectral bands. Such an approach is quite challenging but, when successful, is usually reliable, robust and characterized by lower costs than those usually associated with commonly applied off-line and/or on-line analytical approaches. More and more applications have thus been developed and tested in recent years, especially in food inspection, with a large range of investigated products, such as fruits and vegetables, meat, fish, eggs and cereals, but also in medicine and the pharmaceutical sector, in cultural heritage, in material characterization and in

  10. Three-dimensional rectilinear scanner

    International Nuclear Information System (INIS)

    O'Neill, W.J.; Strange, D.R.; Miller, A.

    1976-01-01

    A rectilinear scanner for detecting radiation in a plurality of channels utilizing a collimator is described. Each of the channels receives information from a different portion of the collimator. Information separately received is separately massaged and later collated to present a common image. The information is processed by apparatus in a data processing system. This system has means for massaging analog signals corresponding to gamma radiation counts and converting such analog signals to digital signals. This system has means interfacing the digital signals into an address register that communicates directly via data busses to core memory of a central processing unit by cycle stealing and deriving clinically significant information by computation on the resultant digital data. This system has means for storing, retrieving, and displaying the resultant digital data and the resultant derivations therefrom collectively. This is done in such a manner as to allow time sequencing of the aforementioned operations such that the aforementioned operations can be interleaved on a real time basis. 13 claims, 44 figures

  11. The hyperspectral imaging trade-off

    DEFF Research Database (Denmark)

    Carstensen, Jens Michael

    Although it has no clear-cut definition, hyperspectral imaging in the UV-Visible-NIR wavelength region seems to mean spectral image sampling in bands of 10 nm width or narrower that enables spectral reconstruction over some wavelength interval. For non-imaging spectral applications..., this will be the standard situation, and it enables the detection of small spectral features like peaks, valleys and shoulders for a wide range of chemistries. Everything else being equal this is what you would wish for, and hyperspectral imaging is often used in research and in remote sensing because of the needs and cost... structures in these projects. However, hyperspectral imaging is a sampling choice within spectral imaging that typically will impose some trade-offs, and these trade-offs will not be optimal for many applications. The purpose of this presentation is to point out and increase the awareness of these trade-offs.

  12. Hyperspectral analysis of clay minerals

    Science.gov (United States)

    Janaki Rama Suresh, G.; Sreenivas, K.; Sivasamy, R.

    2014-11-01

    A study was carried out by collecting soil samples from parts of Gwalior and Shivpuri district, Madhya Pradesh in order to assess the dominant clay mineral of these soils using hyperspectral data, as the 0.4 to 2.5 μm spectral range provides abundant and unique information about many important earth-surface minerals. Understanding the spectral response along with the soil chemical properties can provide important clues for retrieval of mineralogical soil properties. The soil samples were collected based on a stratified random sampling approach and dominant clay minerals were identified through XRD analysis. The absorption feature parameters like depth, width, area and asymmetry of the absorption peaks were derived from the spectral profiles of soil samples using the DISPEC tool. The derived absorption feature parameters were used as inputs for modelling the dominant soil clay mineral present in the unknown samples using a Random Forest approach, which resulted in a kappa accuracy of 0.795. Besides, an attempt was made to classify the Hyperion data using the Spectral Angle Mapper (SAM) algorithm with an overall accuracy of 68.43%. Results showed that kaolinite was the dominant mineral present in the soils followed by montmorillonite in the study area.
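
    The absorption feature parameters named above (depth, width, area and asymmetry) are commonly computed on a continuum-removed spectrum. The Python sketch below uses a straight-line continuum between the spectrum end points and simple definitions of the four parameters; it is a generic stand-in, not the DISPEC tool's exact definitions.

        import numpy as np

        def absorption_feature_parameters(wavelength, reflectance):
            # Straight-line continuum between the first and last samples.
            continuum = np.interp(wavelength,
                                  [wavelength[0], wavelength[-1]],
                                  [reflectance[0], reflectance[-1]])
            cr = reflectance / continuum                  # continuum-removed spectrum
            i_min = np.argmin(cr)
            depth = 1.0 - cr[i_min]
            area = np.trapz(1.0 - cr, wavelength)
            left_area = np.trapz(1.0 - cr[:i_min + 1], wavelength[:i_min + 1])
            width = area / depth if depth > 0 else 0.0    # equivalent width
            asymmetry = left_area / area if area > 0 else 0.0
            return depth, width, area, asymmetry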

  13. Manifold structure preservative for hyperspectral target detection

    Science.gov (United States)

    Imani, Maryam

    2018-05-01

    A nonparametric method termed manifold structure preservative (MSP) is proposed in this paper for hyperspectral target detection. MSP transforms the feature space of data to maximize the separation between target and background signals. Moreover, it minimizes the reconstruction error of targets and preserves the topological structure of data in the projected feature space. MSP does not need to consider any distribution for target and background data. It can therefore achieve accurate results in real scenarios because it avoids unreliable assumptions. The proposed MSP detector is compared to several popular detectors, and the experiments on synthetic data and two real hyperspectral images indicate its superior ability in target detection.

  14. "Harshlighting" small blemishes on microarrays

    Directory of Open Access Journals (Sweden)

    Wittkowski Knut M

    2005-03-01

    Full Text Available Abstract Background Microscopists are familiar with many blemishes that fluorescence images can have due to dust and debris, glass flaws, uneven distribution of fluids or surface coatings, etc. Microarray scans show similar artefacts, which affect the analysis, particularly when one tries to detect subtle changes. However, most blemishes are hard to find by the unaided eye, particularly in high-density oligonucleotide arrays (HDONAs). Results We present a method that harnesses the statistical power provided by having several HDONAs available, which are obtained under similar conditions except for the experimental factor. This method "harshlights" blemishes and renders them evident. We find empirically that about 25% of our chips are blemished, and we analyze the impact of masking them on screening for differentially expressed genes. Conclusion Experiments attempting to assess subtle expression changes should be carefully screened for blemishes on the chips. The proposed method provides investigators with a novel robust approach to improve the sensitivity of microarray analyses. By utilizing topological information to identify and mask blemishes prior to model-based analyses, the method prevents artefacts from confounding the process of background correction, normalization, and summarization.
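
    A simplified stand-in for the idea of pooling replicate chips to expose blemishes is to compare each chip with a robust per-pixel reference built from the others and flag large deviations. The Python sketch below does this with a median image and a robust z-score; it illustrates the principle only and is not the published Harshlight procedure.

        import numpy as np

        def flag_blemishes(chip_images, z_thresh=4.0):
            # chip_images: (n_chips, rows, cols) co-registered log-intensities.
            median = np.median(chip_images, axis=0)
            resid = chip_images - median
            mad = np.median(np.abs(resid), axis=0) + 1e-9
            z = 0.6745 * resid / mad                 # robust per-pixel z-score
            return np.abs(z) > z_thresh              # boolean blemish mask per chip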

  15. Product development of Indian cargo scanner

    International Nuclear Information System (INIS)

    2017-01-01

    A cargo scanner is required for the nonintrusive screening of suspected cargo containers in trade, using high-energy X-rays, to detect any mis-declarations, concealment of contraband goods, or hidden ammunition or explosives. Cargo scanners help authorities to process large numbers of suspected cargo containers with a high level of confidence, with the additional benefits of faster clearance, minimised intrusive inspection and a secure digital record of the process. BARC is in the process of developing an Indian cargo scanner with an indigenous X-ray source. Proof of concept and conformance of the results to international standards have been demonstrated in the laboratory. Full-scale equipment, named the Portal Scanner, is to be demonstrated at the Gamma Field, Trombay, in 2017. Subsequently, the technology may be transferred to a suitable Indian vendor.

  16. A Cross-Platform Smartphone Brain Scanner

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Stopczynski, Arkadiusz; Stahlhut, Carsten

    We describe a smartphone brain scanner with a low-cost wireless 14-channel Emotiv EEG neuroheadset interfacing with multiple mobile devices. This personal informatics system enables minimally invasive and continuous capturing of brain imaging data in natural settings. The system applies an inverse...

  17. Identification of invasive and expansive plant species based on airborne hyperspectral and ALS data

    Science.gov (United States)

    Szporak-Wasilewska, Sylwia; Kuc, Gabriela; Jóźwiak, Jacek; Demarchi, Luca; Chormański, Jarosław; Marcinkowska-Ochtyra, Adriana; Ochtyra, Adrian; Jarocińska, Anna; Sabat, Anita; Zagajewski, Bogdan; Tokarska-Guzik, Barbara; Bzdęga, Katarzyna; Pasierbiński, Andrzej; Fojcik, Barbara; Jędrzejczyk-Korycińska, Monika; Kopeć, Dominik; Wylazłowska, Justyna; Woziwoda, Beata; Michalska-Hejduk, Dorota; Halladin-Dąbrowska, Anna

    2017-04-01

    The aim of the Natura 2000 network is to ensure the long-term survival of the most valuable and threatened species and habitats in Europe. The encroachment of invasive alien and expansive native plant species is among the most essential threats that can cause significant damage to protected habitats and their biodiversity. The phenomenon requires comprehensive, efficient and repeatable solutions that can be applied to various areas in order to assess the impact on habitats. The aim of this study is to investigate the issue of invasive and expansive plant species as they affect protected areas at the larger scale of the Natura 2000 network in Poland. In order to determine the scale of the problem we have been developing methods for identifying invasive and expansive species and then detecting their occurrence and mapping their distribution in selected protected areas within the Natura 2000 network using airborne hyperspectral and airborne laser scanning data. The aerial platform used consists of a hyperspectral HySpex scanner (451 bands in VNIR and SWIR), an Airborne Laser Scanner (FWF) Riegl Lite Mapper and an RGB camera. It allowed simultaneous acquisition of 1 m resolution hyperspectral imagery, 0.1 m resolution orthophotomaps and point cloud data with 7 points/m2. Airborne images were acquired three times per year during the growing season to account for seasonal change in the vegetation (in May/June, July/August and September/October 2016). The hyperspectral images were radiometrically, geometrically and atmospherically corrected. Atmospheric correction was performed and validated using ASD FieldSpec 4 measurements. ALS point cloud data were used to generate several different topographic, vegetation and intensity products with 1 m spatial resolution. Acquired data (both hyperspectral and ALS) were used to test different classification methods including Mixture Tuned Matched Filtering (MTMF), Spectral Angle Mapper (SAM), Random Forest (RF), Support Vector Machines (SVM), among others
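
    Of the classifiers listed above, the Spectral Angle Mapper is the simplest to write down: each pixel is assigned to the reference spectrum with the smallest spectral angle. The Python sketch below shows that generic rule; it is not the project's processing chain, and the array shapes are assumptions.

        import numpy as np

        def spectral_angle_mapper(cube, references):
            # cube: (rows, cols, bands); references: (n_classes, bands).
            pixels = cube.reshape(-1, cube.shape[-1])
            p = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
            r = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
            angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))   # (n_pixels, n_classes)
            return np.argmin(angles, axis=1).reshape(cube.shape[:2])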

  18. Accurate detection of carcinoma cells by use of a cell microarray chip.

    Directory of Open Access Journals (Sweden)

    Shohei Yamamura

    Full Text Available BACKGROUND: Accurate detection and analysis of circulating tumor cells plays an important role in the diagnosis and treatment of metastatic cancer. METHODS AND FINDINGS: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene; and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; and then a microarray scanner was employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. CONCLUSION: The potential application of the cell microarray chip for the detection of CTCs was shown, thus demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level.

  19. Advanced microarray technologies for clinical diagnostics

    NARCIS (Netherlands)

    Pierik, Anke

    2011-01-01

    DNA microarrays are becoming increasingly important in the field of clinical diagnostics. These microarrays, also called DNA chips, are small solid substrates, typically having a maximum surface area of a few cm2, onto which many spots are arrayed in a pre-determined pattern. Each of these spots contains

  20. Hyperspectral remote sensing of postfire soil properties

    Science.gov (United States)

    Sarah A. Lewis; Peter R. Robichaud; William J. Elliot; Bruce E. Frazier; Joan Q. Wu

    2004-01-01

    Forest fires may induce changes in soil organic properties that often lead to water repellent conditions within the soil profile that decrease soil infiltration capacity. The remote detection of water repellent soils after forest fires would lead to quicker and more accurate assessment of erosion potential. An airborne hyperspectral image was acquired over the Hayman...

  1. Demystifying autofluorescence with excitation scanning hyperspectral imaging

    Science.gov (United States)

    Deal, Joshua; Harris, Bradley; Martin, Will; Lall, Malvika; Lopez, Carmen; Rider, Paul; Boudreaux, Carole; Rich, Thomas; Leavesley, Silas J.

    2018-02-01

    Autofluorescence has historically been considered a nuisance in medical imaging. Many endogenous fluorophores, specifically, collagen, elastin, NADH, and FAD, are found throughout the human body. Diagnostically, these signals can be prohibitive since they can outcompete signals introduced for diagnostic purposes. Recent advances in hyperspectral imaging have allowed the acquisition of significantly more data in a shorter time period by scanning the excitation spectra of fluorophores. The reduced acquisition time and increased signal-to-noise ratio allow for separation of significantly more fluorophores than previously possible. Here, we propose to utilize excitation-scanning of autofluorescence to examine tissues and diagnose pathologies. Spectra of autofluorescent molecules were obtained using a custom inverted microscope (TE-2000, Nikon Instruments) with a Xe arc lamp and thin film tunable filter array (VersaChrome, Semrock, Inc.). Scans utilized excitation wavelengths from 360 nm to 550 nm in 5 nm increments. The resultant spectra were used to examine hyperspectral image stacks from various collaborative studies, including an atherosclerotic rat model and a colon cancer study. Hyperspectral images were analyzed with ENVI and custom Matlab scripts including linear spectral unmixing (LSU) and principal component analysis (PCA). Initial results suggest the ability to separate the signals of endogenous fluorophores and measure the relative concentrations of fluorophores among healthy and diseased states of similar tissues. These results suggest pathology-specific changes to endogenous fluorophores can be detected using excitation-scanning hyperspectral imaging. Future work will expand the library of pure molecules and will examine more defined disease states.
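
    Linear spectral unmixing of an excitation-scan stack against a library of pure fluorophore spectra can be posed as a per-pixel non-negative least-squares problem. The Python sketch below shows that generic formulation; the study itself used ENVI and custom Matlab scripts, so the function and array names here are assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def unmix_autofluorescence(image_stack, library):
            # image_stack: (rows, cols, n_excitation) measured intensities
            # library:     (n_excitation, n_fluorophores), one column per fluorophore
            rows, cols, _ = image_stack.shape
            abundances = np.zeros((rows, cols, library.shape[1]))
            for i in range(rows):
                for j in range(cols):
                    abundances[i, j], _ = nnls(library, image_stack[i, j])
            return abundances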

  2. Hyperspectral data exploitation theory and applications

    CERN Document Server

    Chang, Chein-I

    2007-01-01

    Authored by a panel of experts in the field, this book focuses on hyperspectral image analysis, systems, and applications. With discussion of application-based projects and case studies, this professional reference will bring you up to date on this pervasive technology, whether you are working in the military and defense fields, or in remote sensing technology, geoscience, or agriculture.

  3. Airborne hyperspectral remote sensing in Italy

    Science.gov (United States)

    Bianchi, Remo; Marino, Carlo M.; Pignatti, Stefano

    1994-12-01

    The Italian National Research Council (CNR), in the framework of its `Strategic Project for Climate and Environment in Southern Italy', established a new laboratory for airborne hyperspectral imaging devoted to environmental problems. Since the end of June 1994, the LARA (Laboratorio Aereo per Ricerche Ambientali -- Airborne Laboratory for Environmental Studies) Project has been fully operative, providing hyperspectral data to the national and international scientific community by means of deployments of its CASA-212 aircraft carrying the Daedalus AA5000 MIVIS (multispectral infrared and visible imaging spectrometer) system. MIVIS is a modular instrument consisting of 102 spectral channels that use independent optical sensors, simultaneously sampled and recorded onto a compact computer-compatible magnetic tape medium with a data capacity of 10.2 Gbytes. To support the preprocessing and production pipeline for the large hyperspectral data sets, CNR established in Pomezia, a town close to Rome, a ground-based computer system with software designed to handle MIVIS data. The software (MIDAS -- Multispectral Interactive Data Analysis System), besides managing data production, gives users a powerful and highly extensible hyperspectral analysis system. The Pomezia ground station is designed to maintain and check MIVIS instrument performance through the evaluation of data quality (such as spectral accuracy, signal-to-noise performance, signal variations, etc.), and to produce, archive, and distribute MIVIS data in the form of geometrically and radiometrically corrected data sets on low-cost and easily accessible CC media.

  4. Novel hyperspectral prediction method and apparatus

    Science.gov (United States)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal(TM) software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine(TM) module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
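
    Purely as an illustration of how such a prediction stage can run at camera rate, the sketch below builds a matched-filter style regression vector of the general form b = C^-1 s / (s^T C^-1 s) and applies it to every pixel of a frame with a single dot product. The array names are hypothetical and the expression is a simplification, not the Hyper-Cal implementation:

      import numpy as np

      def matched_filter_vector(signal, noise_cov):
          # signal    : (n_bands,) expected analyte spectrum (the spectroscopic "signal")
          # noise_cov : (n_bands, n_bands) covariance of the spectral "noise"
          c_inv_s = np.linalg.solve(noise_cov, signal)
          return c_inv_s / (signal @ c_inv_s)

      def predict_frame(frame, b):
          # frame : (rows, cols, n_bands) hyperspectral frame; returns (rows, cols) predictions
          return frame @ b

    Because prediction collapses to one dot product per pixel, the same coefficients could plausibly be ported to a dedicated FPGA pipeline of the kind the record describes.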

  5. Hyperspectral signature analysis of skin parameters

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Garza, Luis; Kang, Sewon; Burlina, Philippe

    2013-02-01

    The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment, and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel Equations. We use this model to map the skin parameters to their corresponding hyperspectral signature. We then use machine learning based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012, and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. In addition, we also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology can have potential use in non-invasive skin anomaly detection and for developing minimally invasive pre-screening tools.
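
    As a hedged sketch of the inverse-mapping step, suppose a forward model has already generated paired skin parameters and reflectance spectra; a support-vector regressor can then be fit per parameter. The array shapes and the use of scikit-learn are illustrative assumptions, not the authors' implementation:

      import numpy as np
      from sklearn.multioutput import MultiOutputRegressor
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X_train = rng.random((500, 120))   # placeholder forward-model spectra (n_samples, n_bands)
      y_train = rng.random((500, 4))     # placeholder parameters: melanosome, collagen, blood volume, oxygenation

      # One RBF-kernel SVR per skin parameter
      inverse_map = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
      inverse_map.fit(X_train, y_train)

      measured = rng.random((1, 120))    # placeholder in vivo signature
      estimated_parameters = inverse_map.predict(measured)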

  6. Sparse-Based Modeling of Hyperspectral Data

    DEFF Research Database (Denmark)

    Calvini, Rosalba; Ulrici, Alessandro; Amigo Rubio, Jose Manuel

    2016-01-01

    One of the main issues of hyperspectral imaging data is to unravel the relevant, yet overlapped, huge amount of information contained in the spatial and spectral dimensions. When dealing with the application of multivariate models in such high-dimensional data, sparsity can improve...

  7. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Full Text Available Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on a robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, treating it simply as an outlier term with an underlying sparse property. The RCSR simultaneously takes into account the collaborative sparse property of the abundances and the sparsely distributed additive property of the outliers, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.
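
    In rough terms (the notation below is assumed for illustration, not copied from the paper), the robust LMM and the regression problem it motivates can be written as

      Y = MA + E + N,
      \min_{A \ge 0,\, E} \; \tfrac{1}{2}\|Y - MA - E\|_F^2 \;+\; \lambda_A \|A\|_{2,1} \;+\; \lambda_E \|E\|_{1},

    where Y holds the observed pixel spectra, M is the spectral library, A the abundance matrix (row-sparsity encouraged by the collaborative \|\cdot\|_{2,1} term), E the sparse outlier term standing in for nonlinear effects, and N dense noise; IALM then alternates updates of A and E.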

  8. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  9. HIGH RESOLUTION AIRBORNE LASER SCANNING AND HYPERSPECTRAL IMAGING WITH A SMALL UAV PLATFORM

    Directory of Open Access Journals (Sweden)

    M. Gallay

    2016-06-01

    Full Text Available The capabilities of unmanned airborne systems (UAS) have become diverse with the recent development of lightweight remote sensing instruments. In this paper, we demonstrate our custom integration of state-of-the-art technologies within an unmanned aerial platform capable of high-resolution and high-accuracy laser scanning, hyperspectral imaging, and photographic imaging. The technological solution comprises the latest development of a completely autonomous, unmanned helicopter by Aeroscout, the Scout B1-100 UAV helicopter. The helicopter is powered by a gasoline two-stroke engine and allows for integrating 18 kg of a customized payload unit. The whole system is modular, providing flexibility of payload options, which is the main advantage of the UAS. The UAS integrates two kinds of payloads, which can be interchanged. Both payloads integrate a GPS/IMU with a dual GPS antenna configuration provided by OXTS for accurate navigation and position measurements during data acquisition. The first payload comprises a VUX-1 laser scanner by RIEGL and a Sony A6000 E-Mount photo camera. The second payload, for hyperspectral scanning, integrates a push-broom imager AISA KESTREL 10 by SPECIM. The UAS was designed for research on various aspects of landscape dynamics (landslides, erosion, flooding, or phenology) in high spectral and spatial resolution.

  10. Carbohydrate Microarrays in Plant Science

    DEFF Research Database (Denmark)

    Fangel, Jonatan Ulrik; Pedersen, H.L.; Vidal-Melgosa, S.

    2012-01-01

    Almost all plant cells are surrounded by glycan-rich cell walls, which form much of the plant body and collectively are the largest source of biomass on earth. Plants use polysaccharides for support, defense, signaling, cell adhesion, and as energy storage, and many plant glycans are also important...... industrially and nutritionally. Understanding the biological roles of plant glycans and the effective exploitation of their useful properties requires a detailed understanding of their structures, occurrence, and molecular interactions. Microarray technology has revolutionized the massively high...... for plant research and can be used to map glycan populations across large numbers of samples to screen antibodies, carbohydrate binding proteins, and carbohydrate binding modules and to investigate enzyme activities....

  11. The EADGENE Microarray Data Analysis Workshop

    DEFF Research Database (Denmark)

    de Koning, Dirk-Jan; Jaffrézic, Florence; Lund, Mogens Sandø

    2007-01-01

    Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from...... 10 countries performed and discussed the statistical analyses of real and simulated 2-colour microarray data that were distributed among participants. The real data consisted of 48 microarrays from a disease challenge experiment in dairy cattle, while the simulated data consisted of 10 microarrays...... statistical weights, to omitting a large number of spots or omitting entire slides. Surprisingly, these very different approaches gave quite similar results when applied to the simulated data, although not all participating groups analysed both real and simulated data. The workshop was very successful...

  12. Mapping Soil Organic Matter with Hyperspectral Imaging

    Science.gov (United States)

    Moni, Christophe; Burud, Ingunn; Flø, Andreas; Rasse, Daniel

    2014-05-01

    Soil organic matter (SOM) plays a central role for both food security and the global environment. Soil organic matter is the 'glue' that binds soil particles together, leading to positive effects on soil water and nutrient availability for plant growth and helping to counteract the effects of erosion, runoff, compaction and crusting. Hyperspectral measurements of samples of soil profiles have been conducted with the aim of mapping soil organic matter on a macroscopic scale (millimeters and centimeters). Two soil profiles have been selected from the same experimental site, one from a plot amended with biochar and another one from a control plot, with the specific objective to quantify and map the distribution of biochar in the amended profile. The soil profiles were of size (30 x 10 x 10) cm3 and were scanned with two pushbroom-type hyperspectral cameras, one sensitive in the visible wavelength region (400 - 1000 nm) and one in the near infrared region (1000 - 2500 nm). The images from the two detectors were merged together into one full dataset covering the whole wavelength region. Layers of 15 mm were removed from the 10 cm high sample such that a total of 7 hyperspectral images were obtained from the samples. Each layer was analyzed with multivariate statistical techniques in order to map the different components in the soil profile. Moreover, a 3-dimensional visualization of the components through the depth of the sample was also obtained by combining the hyperspectral images from all the layers. Mid-infrared spectroscopy of selected samples of the measured soil profiles was conducted in order to correlate the chemical constituents with the hyperspectral results. The results show that hyperspectral imaging is a fast, non-destructive technique, well suited to characterize soil profiles on a macroscopic scale and hence to map elements and different organic matter quality present in a complete pedon. As such, we were able to map and quantify biochar in our

  13. Illumination compensation in ground based hyperspectral imaging

    Science.gov (United States)

    Wendel, Alexander; Underwood, James

    2017-07-01

    Hyperspectral imaging has emerged as an important tool for analysing vegetation data in agricultural applications. Recently, low altitude and ground based hyperspectral imaging solutions have come to the fore, providing very high resolution data for mapping and studying large areas of crops in detail. However, these platforms introduce a unique set of challenges that need to be overcome to ensure consistent, accurate and timely acquisition of data. One particular problem is dealing with changes in environmental illumination while operating with natural light under cloud cover, which can have considerable effects on spectral shape. In the past this has been commonly achieved by imaging known reference targets at the time of data acquisition, direct measurement of irradiance, or atmospheric modelling. While capturing a reference panel continuously or very frequently allows accurate compensation for illumination changes, this is often not practical with ground based platforms, and impossible in aerial applications. This paper examines the use of an autonomous unmanned ground vehicle (UGV) to gather high resolution hyperspectral imaging data of crops under natural illumination. A process of illumination compensation is performed to extract the inherent reflectance properties of the crops, despite variable illumination. This work adapts a previously developed subspace model approach to reflectance and illumination recovery. Though tested on a ground vehicle in this paper, it is applicable to low altitude unmanned aerial hyperspectral imagery also. The method uses occasional observations of reference panel training data from within the same or other datasets, which enables a practical field protocol that minimises in-field manual labour. This paper tests the new approach, comparing it against traditional methods. Several illumination compensation protocols for high volume ground based data collection are presented based on the results. The findings in this paper are
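
    For context, the simplest compensation divides each measured spectrum by the radiance of a reference panel observed under the same illumination; the subspace-model method in this record generalizes that idea to the case where the panel is only seen occasionally. A minimal sketch of the panel-based baseline, with illustrative array names:

      import numpy as np

      def reflectance_from_panel(radiance_cube, panel_radiance, panel_reflectance=0.99):
          # radiance_cube     : (rows, cols, n_bands) scene radiance
          # panel_radiance    : (n_bands,) panel radiance under the same illumination
          # panel_reflectance : known reflectance of the (near-Lambertian) panel
          return radiance_cube * panel_reflectance / panel_radiance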

  14. Investigation of Tree Spectral Reflectance Characteristics Using a Mobile Terrestrial Line Spectrometer and Laser Scanner

    Directory of Open Access Journals (Sweden)

    Eetu Puttonen

    2013-07-01

    Full Text Available In mobile terrestrial hyperspectral imaging, individual trees often present large variations in spectral reflectance that may impact the relevant applications, but related studies have seldom been reported. To fill this gap, this study investigated the spectral reflectance characteristics of individual trees with a Sensei mobile mapping system, which comprises a Specim line spectrometer and an Ibeo Lux laser scanner. The addition of the latter unit facilitates recording the structural characteristics of the target trees synchronously, which is beneficial for revealing how the spatial distribution of tree spectral reflectance varies at different levels. The parts of trees with relatively low-level variations can then be extracted. At the same time, since it is difficult to manipulate the whole spectrum, the traditional concept of vegetation indices (VI) based on particular spectral bands was taken into account here. Whether the assumed VIs were capable of behaving consistently over the whole crown of each tree was also checked. The specific analyses were deployed on four deciduous tree species and six kinds of VIs. The test showed that, with the help of the laser scanner data, the parts of individual trees with relatively low-level variations can be located. Based on these parts, relatively stable spectral reflectance characteristics for different tree species can be learnt.
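
    As an illustration of the band-ratio indices referred to above (the specific indices and band centres used in the study are not listed in this record), a normalized difference index such as NDVI can be computed from two reflectance bands:

      import numpy as np

      def ndvi(reflectance, wavelengths, red_nm=670.0, nir_nm=800.0):
          # reflectance : (n_points, n_bands) spectra; wavelengths : (n_bands,) band centres in nm
          red = reflectance[:, np.argmin(np.abs(wavelengths - red_nm))]
          nir = reflectance[:, np.argmin(np.abs(wavelengths - nir_nm))]
          return (nir - red) / (nir + red)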

  15. Neurosurgical operating computerized tomographic scanner system

    International Nuclear Information System (INIS)

    Okudera, Hiroshi; Sugita, Kenichiro; Kobayashi, Shigeaki; Kimishima, Sakae; Yoshida, Hisashi.

    1988-01-01

    A neurosurgical operating computerized tomography scanner system is presented. This system has been developed for obtaining intra- and postoperative CT images in the operating room. A TCT-300 scanner (manufactured by the Toshiba Co., Tokyo) is placed in the operating room. The realization of a true intraoperative CT image requires certain improvements in the CT scanner and operating table. To adjust the axis of the co-ordinates of the motor system of the MST-7000 microsurgical operating table (manufactured by the Mizuho Ika Co., Tokyo) to the CT scanner, we have designed an interface and a precise motor system so that the computer of the CT scanner can directly control the movement of the operating table. Furthermore, a new head-fixation system has been designed for producing artifact-free intraoperative CT images. The head-pins of the head-fixation system are made of carbon-fiber bars and titanium tips. A simulation study of the total system in the operating room with the CT scanner, operating table, and head holder using a skull model yielded a degree of error similar to that in the phantom testing of the original scanner. Three patients underwent resection of a glial tumor using this system. Intraoperative CT scans taken after dural opening showed a bulging of the cortex, a shift in the central structure, and a displacement of the cortical subarachnoid spaces under the influence of gravity. With a contrast medium the edge of the surrounding brain after resection was enhanced and the residual tumor mass was demonstrated clearly. This system makes it possible to obtain a noninvasive intraoperative image in a situation where structural shifts are taking place. (author)

  16. MEMS temperature scanner: principles, advances, and applications

    Science.gov (United States)

    Otto, Thomas; Saupe, Ray; Stock, Volker; Gessner, Thomas

    2010-02-01

    Contactless measurement of temperature has gained enormous significance in many application fields, ranging from climate protection and quality control to the recognition of objects in public places or of military objects. Measurement of linear or spatial temperature distributions is often necessary. For this purpose, mostly thermographic cameras or motor-driven temperature scanners are used today. Both are relatively expensive, and the motor-driven devices are additionally limited in their scanning rate. An economical alternative is a temperature scanner based on micro mirrors. The micro mirror, mounted in a simple optical setup, reflects the radiation emitted by the observed heat source onto an adapted detector. A line scan of the target object is obtained by periodic deflection of the micro scanner; a planar temperature distribution is achieved by moving the target object or the scanner device perpendicularly. Using Planck's radiation law, the temperature of the object is calculated. The device can be adapted to different temperature ranges and resolutions by using different detectors - cooled or uncooled - and by parameterizing the scanner settings. With the basic configuration, 40 spatially distributed measuring points can be determined for temperatures in the range from 350°C to 1000°C. The achieved miniaturization of such scanners permits their employment in complex plants with high building density or in direct proximity to the measuring point. The price advantage enables many applications, especially new ones in the low-price market segment. This paper shows the principle, setup and applications of a temperature measurement system based on micro scanners working in the near infrared range. Packaging issues and measurement results will be discussed as well.
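
    A hedged sketch of the final step mentioned above, recovering a brightness temperature from the detected spectral radiance at a single wavelength by inverting Planck's law (constants in SI units; the radiance is assumed calibrated and emissivity-corrected, and the single-wavelength form is a simplification of a real detector's band response):

      import numpy as np

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

      def brightness_temperature(spectral_radiance, wavelength_m):
          # spectral_radiance : W / (m^2 sr m); wavelength_m : e.g. 1.5e-6 for the near infrared
          a = 2.0 * H * C**2 / wavelength_m**5
          return H * C / (wavelength_m * KB * np.log(1.0 + a / spectral_radiance))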

  17. MARS: Microarray analysis, retrieval, and storage system

    Directory of Open Access Journals (Sweden)

    Scheideler Marcel

    2005-04-01

    Full Text Available Abstract Background Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy to use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multi-color microarray data. The system comprises a laboratory information management system (LIMS), quality control management, as well as a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enable the export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion We have developed an integrated system tailored to serve the specific needs of microarray based research projects using a unique fusion of Web based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.

  18. Annotating breast cancer microarray samples using ontologies

    Science.gov (United States)

    Liu, Hongfang; Li, Xin; Yoon, Victoria; Clarke, Robert

    2008-01-01

    As the most common cancer among women, breast cancer results from the accumulation of mutations in essential genes. Recent advances in high-throughput gene expression microarray technology have inspired researchers to use the technology to assist breast cancer diagnosis, prognosis, and treatment prediction. However, the high dimensionality of microarray experiments and public access to data from many experiments have caused inconsistencies, which initiated the development of controlled terminologies and ontologies for annotating microarray experiments, such as the standard Microarray Gene Expression Data (MGED) ontology (MO). In this paper, we developed BCM-CO, an ontology tailored specifically for indexing clinical annotations of breast cancer microarray samples from the NCI Thesaurus. Our research showed that the coverage of the NCI Thesaurus is very limited with respect to i) terms used by researchers to describe breast cancer histology (covering 22 out of 48 histology terms); ii) breast cancer cell lines (covering one out of 12 cell lines); and iii) classes corresponding to breast cancer grading and staging. By incorporating a wider range of those terms into BCM-CO, we were able to index breast cancer microarray samples from GEO using BCM-CO and the MGED ontology, and developed a prototype system with a web interface that allows the retrieval of microarray data based on the ontology annotations. PMID:18999108

  19. Simulation of microarray data with realistic characteristics

    Directory of Open Access Journals (Sweden)

    Lehmussola Antti

    2006-07-01

    Full Text Available Abstract Background Microarray technologies have become common tools in biological research. As a result, a need for effective computational methods for data analysis has emerged. Numerous different algorithms have been proposed for analyzing the data. However, an objective evaluation of the proposed algorithms is not possible due to the lack of biological ground truth information. To overcome this fundamental problem, the use of simulated microarray data for algorithm validation has been proposed. Results We present a microarray simulation model which can be used to validate different kinds of data analysis algorithms. The proposed model is unique in the sense that it includes all the steps that affect the quality of real microarray data. These steps include the simulation of biological ground truth data, applying biological and measurement technology specific error models, and finally simulating the microarray slide manufacturing and hybridization. After all these steps are taken into account, the simulated data has realistic biological and statistical characteristics. The applicability of the proposed model is demonstrated by several examples. Conclusion The proposed microarray simulation model is modular and can be used in different kinds of applications. It includes several error models that have been proposed earlier and it can be used with different types of input data. The model can be used to simulate both spotted two-channel and oligonucleotide based single-channel microarrays. All this makes the model a valuable tool for example in validation of data analysis algorithms.
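
    A minimal sketch in the spirit of such a simulator, for a spotted two-channel array: ground-truth log-ratios are drawn first, then multiplicative spot variation and additive background are layered on top. The specific error model and parameter values here are simplifications chosen for illustration, not the model of the record:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_two_channel(n_genes=1000, frac_regulated=0.1, fold_change=2.0):
          true_ratio = np.zeros(n_genes)                      # ground truth: most genes unchanged
          idx = rng.choice(n_genes, int(frac_regulated * n_genes), replace=False)
          true_ratio[idx] = rng.choice([-1.0, 1.0], idx.size) * np.log2(fold_change)

          base = rng.lognormal(7.0, 1.0, n_genes)             # spot intensity level
          cy5 = base * 2.0 ** (true_ratio / 2) * rng.lognormal(0.0, 0.1, n_genes) + rng.normal(50.0, 10.0, n_genes)
          cy3 = base * 2.0 ** (-true_ratio / 2) * rng.lognormal(0.0, 0.1, n_genes) + rng.normal(50.0, 10.0, n_genes)
          observed = np.log2(np.clip(cy5, 1.0, None) / np.clip(cy3, 1.0, None))
          return observed, true_ratio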

  20. A Gimbal-Stabilized Compact Hyperspectral Imaging System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The Gimbal-stabilized Compact Hyperspectral Imaging System (GCHIS) fully integrates multi-sensor spectral imaging, stereovision, GPS and inertial measurement,...

  1. Radioactive cDNA microarray in neuropsychiatry

    International Nuclear Information System (INIS)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon

    2003-01-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at a lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal as compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  2. Radioactive cDNA microarray in neuropsychiatry

    Energy Technology Data Exchange (ETDEWEB)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon [Korea University Medical School, Seoul (Korea, Republic of)

    2003-02-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at a lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal as compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  3. Metric learning for DNA microarray data analysis

    International Nuclear Information System (INIS)

    Takeuchi, Ichiro; Nakagawa, Masao; Seto, Masao

    2009-01-01

    In many microarray studies, gene set selection is an important preliminary step for subsequent main tasks such as tumor classification, cancer subtype identification, etc. In this paper, we investigate the possibility of using metric learning as an alternative to gene set selection. We develop a simple metric learning algorithm aiming to use it for microarray data analysis. Exploiting a property of the algorithm, we introduce a novel approach for extending the metric learning to be adaptive. We apply the algorithm to previously studied microarray data on malignant lymphoma subtype identification.

  4. A flexible and wearable terahertz scanner

    Science.gov (United States)

    Suzuki, D.; Oda, S.; Kawano, Y.

    2016-12-01

    Imaging technologies based on terahertz (THz) waves have great potential for use in powerful non-invasive inspection methods. However, most real objects have various three-dimensional curvatures and existing THz technologies often encounter difficulties in imaging such configurations, which limits the useful range of THz imaging applications. Here, we report the development of a flexible and wearable THz scanner based on carbon nanotubes. We achieved room-temperature THz detection over a broad frequency band ranging from 0.14 to 39 THz and developed a portable THz scanner. Using this scanner, we performed THz imaging of samples concealed behind opaque objects, breakages and metal impurities of a bent film and multi-view scans of a syringe. We demonstrated a passive biometric THz scan of a human hand. Our results are expected to have considerable implications for non-destructive and non-contact inspections, such as medical examinations for the continuous monitoring of health conditions.

  5. Quality assurance of computed tomography (CT) scanners

    International Nuclear Information System (INIS)

    Sankaran, A.; Sanu, K.K. . Email : a_sankaran@vsnl.com

    2004-01-01

    This article reviews the present status of research work and development of various test objects, phantoms and detector/instrumentation systems for quality assurance (QA) of computed tomography (CT) scanners, carried out in advanced countries, with emphasis on similar work done in this research centre. The CT scanner is a complex piece of equipment, and routine quality control procedures are essential to the maintenance of image quality with optimum patient dose. Image quality can be ensured only through correlation between prospective monitoring of system components and tests of overall performance with standard phantoms. CT examinations contribute a large share to the population dose in advanced countries. The unique dosimetry problems in CT necessitate special techniques. This article describes a comprehensive kit developed indigenously for the following QA and type approval tests as well as for research studies on image quality/dosimetry on CT scanners

  6. Manually operated small envelope scanner system

    Energy Technology Data Exchange (ETDEWEB)

    Sword, Charles Keith

    2017-04-18

    A scanner system and method for acquisition of position-based ultrasonic inspection data are described. The scanner system includes an inspection probe and a first non-contact linear encoder having a first sensor and a first scale to track inspection probe position. The first sensor is positioned to maintain a continuous non-contact interface between the first sensor and the first scale and to maintain a continuous alignment of the first sensor with the inspection probe. The scanner system may be used to acquire two-dimensional inspection probe position data by including a second non-contact linear encoder having a second sensor and a second scale, the second sensor positioned to maintain a continuous non-contact interface between the second sensor and the second scale and to maintain a continuous alignment of the second sensor with the first sensor.

  7. Multimodality Registration without a Dedicated Multimodality Scanner

    Directory of Open Access Journals (Sweden)

    Bradley J. Beattie

    2007-03-01

    Full Text Available Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT]) at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying) of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.

  8. How flatbed scanners upset accurate film dosimetry

    Science.gov (United States)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film.

  9. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film. (paper)

  10. A simple scanner for Compton tomography

    CERN Document Server

    Cesareo, R; Brunetti, A; Golosio, B; Castellano, A

    2002-01-01

    A first generation CT-scanner was designed and constructed to acquire Compton images. This CT-scanner is composed of an 80 kV, 5 mA X-ray tube and a NaI(Tl) X-ray detector; the tube is strongly collimated, generating an X-ray beam of 2 mm diameter, whilst the detector is left uncollimated so as to collect Compton photons from the whole irradiated cylinder. The performance of the equipment was tested by acquiring simultaneous transmission and Compton images.

  11. Advances in hyperspectral remote sensing I: The visible Fourier transform hyperspectral imager

    Directory of Open Access Journals (Sweden)

    J. Bruce Rafert

    2015-05-01

    Full Text Available We discuss early hyperspectral research and development activities during the 1990s that led to the deployment of aircraft and satellite payloads whose heritage was based on the use of visible, spatially modulated, imaging Fourier transform spectrometers, beginning with early experiments at the Florida Institute of Technology, through successful launch and deployment of the Visible Fourier Transform Hyperspectral Imager on MightySat II.1 on 19 July 2000. In addition to a brief chronological overview, we also discuss several of the most interesting optical engineering challenges that were addressed over this timeframe, present some as-yet un-exploited features of field-widened (slit-less) SMIFTS instruments, and present some images from ground-based, aircraft-based and satellite-based instruments that helped provide the impetus for the proliferation and development of entire new families of instruments and countless new applications for hyperspectral imaging.

  12. Gene Expression and Microarray Investigation of Dendrobium ...

    African Journals Online (AJOL)

    blood glucose > 16.7 mmol/L were used as the model group and treated with Dendrobium mixture. (DEN ... Keywords: Diabetes, Gene expression, Dendrobium mixture, Microarray testing ..... homeostasis in airway smooth muscle. Am J.

  13. SLIMarray: Lightweight software for microarray facility management

    Directory of Open Access Journals (Sweden)

    Marzolf Bruz

    2006-10-01

    Full Text Available Abstract Background Microarray core facilities are commonplace in biological research organizations, and need systems for accurately tracking various logistical aspects of their operation. Although these different needs could be handled separately, an integrated management system provides benefits in organization, automation and reduction in errors. Results We present SLIMarray (System for Lab Information Management of Microarrays, an open source, modular database web application capable of managing microarray inventories, sample processing and usage charges. The software allows modular configuration and is well suited for further development, providing users the flexibility to adapt it to their needs. SLIMarray Lite, a version of the software that is especially easy to install and run, is also available. Conclusion SLIMarray addresses the previously unmet need for free and open source software for managing the logistics of a microarray core facility.

  14. Classification of High Spatial Resolution, Hyperspectral ...

    Science.gov (United States)

    EPA announced the availability of the final report, Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA. This report and associated land use/land cover (LULC) coverage is the result of a collaborative effort among an interdisciplinary team of scientists with the U.S. Environmental Protection Agency's (U.S. EPA's) Office of Research and Development in Cincinnati, Ohio. A primary goal of this project is to enhance the use of geography and spatial analytic tools in risk assessment, and to improve the scientific basis for risk management decisions affecting drinking water and water quality. The land use/land cover classification is derived from 82 flight lines of Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery acquired from July 24 through August 9, 2002 via fixed-wing aircraft.

  15. Software for Simulation of Hyperspectral Images

    Science.gov (United States)

    Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2002-01-01

    A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.

  16. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several......Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour intensive procedures relying on wet chemistry or subjective human...... extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality presented in two research directions. Initially...

  17. Estimating physiological skin parameters from hyperspectral signatures

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.

  18. Atmospheric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Sterckx Sindy

    2016-03-01

    Full Text Available Atmospheric correction plays a crucial role among the processing steps applied to remotely sensed hyperspectral data. Atmospheric correction comprises a group of procedures needed to remove atmospheric effects from observed spectra, i.e. the transformation from at-sensor radiances to at-surface radiances or reflectances. In this paper we present the different steps in the atmospheric correction process for APEX hyperspectral data as applied by the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). The MODerate resolution atmospheric TRANsmission program (MODTRAN) is used to determine the source of radiation and for applying the actual atmospheric correction. As part of the overall correction process, supporting algorithms are provided in order to derive MODTRAN configuration parameters and to account for specific effects, e.g. correction for adjacency effects, haze and shadow correction, and topographic BRDF correction. The methods and theory underlying these corrections and an example of an application are presented.
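
    The core transformation, once MODTRAN has supplied path radiance, ground-to-sensor transmittance and downwelling irradiance for the scene geometry, reduces in a simplified flat-terrain, no-adjacency form (assumed here purely for illustration) to rho = pi * (L_sensor - L_path) / (tau * E_down):

      import numpy as np

      def at_surface_reflectance(l_sensor, l_path, transmittance, e_down):
          # l_sensor      : (..., n_bands) at-sensor radiance
          # l_path        : (n_bands,) atmospheric path radiance (e.g. from MODTRAN)
          # transmittance : (n_bands,) total ground-to-sensor transmittance
          # e_down        : (n_bands,) total downwelling irradiance at the surface
          return np.pi * (l_sensor - l_path) / (transmittance * e_down)

    The supporting corrections named in the record (adjacency, haze, shadow, topographic BRDF) would each modify terms of this basic expression.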

  19. LIFTIRS-hyperspectral imaging at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Fields, D. [Lawrence Livermore National Lab., CA (United States); Bennett, C.; Carter, M.

    1994-11-15

    LIFTIRS, the Livermore Imaging Fourier Transform InfraRed Spectrometer, recently developed at LLNL, is an instrument which enables extremely efficient collection and analysis of hyperspectral imaging data. LIFTIRS produces a spatial format of 128x128 pixels, with spectral resolution arbitrarily variable up to a maximum of 0.25 inverse centimeters. Time resolution and spectral resolution can be traded off for each other with great flexibility. We will discuss recent measurements made with this instrument, and present typical images and spectra.

  20. Ore minerals textural characterization by hyperspectral imaging

    Science.gov (United States)

    Bonifazi, Giuseppe; Picone, Nicoletta; Serranti, Silvia

    2013-02-01

    The utilization of hyperspectral detection devices for natural resources mapping/exploitation through remote sensing techniques dates back to the early 1970s. From the first devices utilizing a one-dimensional profile spectrometer, HyperSpectral Imaging (HSI) devices have been developed. Thus, from specific customized devices, originally developed by governmental agencies (e.g. NASA, specialized research labs, etc.), a lot of HSI based equipment is today available at the commercial level. Parallel to this huge increase in hyperspectral systems development/manufacturing addressed to airborne applications, a strong increase also occurred in developing HSI based devices for "ground" utilization, that is, sensing units able to operate inside a laboratory, a processing plant and/or in an open field. Thanks to this diffusion, more and more applications have been developed and tested in recent years, also in the materials sector. Such an approach, although challenging, is usually reliable, robust and characterised by lower costs compared with those usually associated with commonly applied off-line and/or on-line analytical approaches. In this paper such an approach is presented with reference to ore minerals characterization. According to the different phases and stages of ore minerals and products characterization, and starting from the analysis of the detected hyperspectral signatures, it is possible to derive useful information about mineral flow stream properties and their physical-chemical attributes. This last aspect can be utilized to define innovative process mineralogy strategies and to implement on-line procedures at the processing level. The present study discusses the effects related to the adoption of different hardware configurations, the utilization of different logics to perform the analysis and the selection of different algorithms according to the different characterization, inspection and quality control actions to apply.

  1. PATMA: parser of archival tissue microarray

    Directory of Open Access Journals (Sweden)

    Lukasz Roszkowiak

    2016-12-01

    Full Text Available Tissue microarrays are commonly used in modern pathology for cancer tissue evaluation, as it is a very potent technique. Tissue microarray slides are often scanned to perform computer-aided histopathological analysis of the tissue cores. For processing the image, splitting the whole virtual slide into images of individual cores is required. The only way to distinguish cores corresponding to specimens in the tissue microarray is through their arrangement. Unfortunately, distinguishing the correct order of cores is not a trivial task as they are not labelled directly on the slide. The main aim of this study was to create a procedure capable of automatically finding and extracting cores from archival images of the tissue microarrays. This software supports the work of scientists who want to perform further image processing on single cores. The proposed method is an efficient and fast procedure, working in fully automatic or semi-automatic mode. A total of 89% of punches were correctly extracted with automatic selection. With the addition of manual correction, it is possible to fully prepare the whole slide image for extraction in 2 min per tissue microarray. The proposed technique requires minimal skill and time to parse a big array of cores from a tissue microarray whole slide image into individual core images.
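
    A hedged sketch of the core-detection idea: threshold a downsampled grayscale overview of the whole-slide image, label connected blobs, and sort their bounding boxes into array order. The scikit-image calls and the row-binning constant are illustrative assumptions; the record does not state which libraries PATMA uses:

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops

      def find_cores(gray_overview, min_area=500, row_bin=200):
          # gray_overview : 2-D grayscale overview image; tissue assumed darker than background
          mask = gray_overview < threshold_otsu(gray_overview)
          boxes = [r.bbox for r in regionprops(label(mask)) if r.area >= min_area]
          # Sorting by (coarse row, column) recovers the arrangement of the cores
          return sorted(boxes, key=lambda b: (b[0] // row_bin, b[1]))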

  2. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    Science.gov (United States)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One of the constraints added to NMF is a sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, relative to distributed unmixing without the sparsity constraint at SNR = 25 dB.
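
    Stripped of the network and diffusion aspects, the per-node cost being minimized is of the familiar L1/2-regularized NMF form (notation assumed here for illustration):

      \min_{S \ge 0,\, A \ge 0} \; \tfrac{1}{2}\|X - SA\|_F^2 \;+\; \lambda \sum_{i,j} A_{ij}^{1/2},

    where X holds the pixel spectra, S the endmember signatures, A the fractional abundances, and the L1/2 term promotes sparse abundances; the diffusion LMS strategy then lets neighbouring nodes share their running estimates while each updates its own abundances.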

  3. Hyperspectral image classification using Support Vector Machine

    International Nuclear Information System (INIS)

    Moughal, T A

    2013-01-01

    Classification of land-cover hyperspectral images is a very challenging task due to the unfavourable ratio between the number of spectral bands and the number of training samples. The focus in many applications is to find a classifier that is effective in terms of accuracy. Conventional multiclass classifiers can map the class of interest, but considerable effort and large training sets are required to describe the classes spectrally in full. The Support Vector Machine (SVM) is proposed in this paper to deal with the multiclass problem of hyperspectral imagery. The attraction of this method is that it locates the optimal hyperplane between the class of interest and the rest of the classes, separating them in a new high-dimensional feature space while taking into account only the training samples that lie on the edge of the class distributions, known as support vectors; the use of kernel functions makes the classifier more flexible by making it robust against outliers. A comparative study was undertaken to identify an effective classifier by comparing the SVM with two other well-known classifiers, Maximum Likelihood (ML) and the Spectral Angle Mapper (SAM). First, the Minimum Noise Fraction (MNF) transform was applied to extract the best possible features from the hyperspectral imagery, and the resulting subset of features was then fed to the classifiers. Experimental results illustrate that the combination of MNF and SVM significantly reduced the classification complexity and improved the classification accuracy.
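
    As a minimal sketch of the pipeline described above (dimensionality reduction followed by an SVM), the example below uses scikit-learn, with PCA standing in for the MNF transform; the file names, number of components and SVM parameters are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical inputs: X holds (n_pixels, n_bands) training spectra, y the class labels.
    X = np.load("training_spectra.npy")
    y = np.load("training_labels.npy")

    # PCA is used here as a stand-in for the MNF feature extraction; an RBF-kernel
    # SVM then separates the classes in the reduced feature space.
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=15),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    predicted = clf.predict(X)   # in practice, predict on held-out pixels
    ```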

  4. PET and PVC Separation with Hyperspectral Imagery

    Science.gov (United States)

    Moroni, Monica; Mei, Alessandro; Leonardi, Alessandra; Lupo, Emanuela; La Marca, Floriana

    2015-01-01

    Traditional plants for plastic separation in homogeneous products employ material physical properties (for instance density). Due to the small intervals of variability of different polymer properties, the output quality may not be adequate. Sensing technologies based on hyperspectral imaging have been introduced in order to classify materials and to increase the quality of recycled products, which have to comply with specific standards determined by industrial applications. This paper presents the results of the characterization of two different plastic polymers—polyethylene terephthalate (PET) and polyvinyl chloride (PVC)—in different phases of their life cycle (primary raw materials, urban and urban-assimilated waste and secondary raw materials) to show the contribution of hyperspectral sensors in the field of material recycling. This is accomplished via near-infrared (900–1700 nm) reflectance spectra extracted from hyperspectral images acquired with a two-linear-spectrometer apparatus. Results have shown that a rapid and reliable identification of PET and PVC can be achieved by using a simple two near-infrared wavelength operator coupled to an analysis of reflectance spectra. This resulted in 100% classification accuracy. A sensor based on this identification method appears suitable and inexpensive to build and provides the necessary speed and performance required by the recycling industry. PMID:25609050
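
    The abstract does not give the exact form of the two-wavelength operator; the sketch below shows one plausible version, comparing reflectance at two NIR bands where the two polymers absorb differently. The band positions near 1660 nm and 1716 nm and the decision rule are illustrative assumptions, not the authors' calibrated values.

    ```python
    import numpy as np

    def classify_pet_pvc(cube, wavelengths, band_a=1660.0, band_b=1716.0):
        """Toy two-wavelength operator on a reflectance cube (rows, cols, bands).

        band_a and band_b are assumed NIR positions where PET and PVC spectra
        differ; a real operator would use bands calibrated on reference spectra.
        """
        ia = int(np.argmin(np.abs(wavelengths - band_a)))
        ib = int(np.argmin(np.abs(wavelengths - band_b)))
        ratio = cube[:, :, ia] / (cube[:, :, ib] + 1e-9)
        # pixels whose reflectance dips more strongly at band_a are labelled as
        # one polymer, the others as the second polymer
        return np.where(ratio < 1.0, "PET", "PVC")
    ```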

  5. PET and PVC separation with hyperspectral imagery.

    Science.gov (United States)

    Moroni, Monica; Mei, Alessandro; Leonardi, Alessandra; Lupo, Emanuela; Marca, Floriana La

    2015-01-20

    Traditional plants for plastic separation in homogeneous products employ material physical properties (for instance density). Due to the small intervals of variability of different polymer properties, the output quality may not be adequate. Sensing technologies based on hyperspectral imaging have been introduced in order to classify materials and to increase the quality of recycled products, which have to comply with specific standards determined by industrial applications. This paper presents the results of the characterization of two different plastic polymers--polyethylene terephthalate (PET) and polyvinyl chloride (PVC)--in different phases of their life cycle (primary raw materials, urban and urban-assimilated waste and secondary raw materials) to show the contribution of hyperspectral sensors in the field of material recycling. This is accomplished via near-infrared (900-1700 nm) reflectance spectra extracted from hyperspectral images acquired with a two-linear-spectrometer apparatus. Results have shown that a rapid and reliable identification of PET and PVC can be achieved by using a simple two near-infrared wavelength operator coupled to an analysis of reflectance spectra. This resulted in 100% classification accuracy. A sensor based on this identification method appears suitable and inexpensive to build and provides the necessary speed and performance required by the recycling industry.

  6. Evaluation of camouflage effectiveness using hyperspectral images

    Science.gov (United States)

    Zavvartorbati, Ahmad; Dehghani, Hamid; Rashidi, Ali Jabar

    2017-10-01

    Recent advances in camouflage engineering have made targets more difficult to detect. Assessing the effectiveness of camouflage against different target detection methods identifies the strengths and weaknesses of camouflage designs. One such detection method is to analyze the content of the scene using remote-sensing hyperspectral images. The process of evaluating camouflage designs requires comprehensive and efficient evaluation criteria. Three parameters were considered as the main factors affecting target detection, and camouflage effectiveness assessment criteria were proposed based on these factors. To combine the criteria into a single equation, the equation used in target visual-search models was employed, and a model based on the structure of computational visual attention systems was presented for determining the criteria. In software experiments on a HyMap hyperspectral image, a variety of camouflage levels were created for the real targets in the image. Assessing these camouflage levels with the proposed criteria and comparing and analyzing the results show that the provided criteria and model are effective for evaluating camouflage designs using hyperspectral images.

  7. PET and PVC Separation with Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Monica Moroni

    2015-01-01

    Full Text Available Traditional plants for plastic separation in homogeneous products employ material physical properties (for instance density). Due to the small intervals of variability of different polymer properties, the output quality may not be adequate. Sensing technologies based on hyperspectral imaging have been introduced in order to classify materials and to increase the quality of recycled products, which have to comply with specific standards determined by industrial applications. This paper presents the results of the characterization of two different plastic polymers—polyethylene terephthalate (PET) and polyvinyl chloride (PVC)—in different phases of their life cycle (primary raw materials, urban and urban-assimilated waste and secondary raw materials) to show the contribution of hyperspectral sensors in the field of material recycling. This is accomplished via near-infrared (900–1700 nm) reflectance spectra extracted from hyperspectral images acquired with a two-linear-spectrometer apparatus. Results have shown that a rapid and reliable identification of PET and PVC can be achieved by using a simple two near-infrared wavelength operator coupled to an analysis of reflectance spectra. This resulted in 100% classification accuracy. A sensor based on this identification method appears suitable and inexpensive to build and provides the necessary speed and performance required by the recycling industry.

  8. Inter laboratory comparison of industrial CT scanners

    DEFF Research Database (Denmark)

    Angel, Jais Andreas Breusch; Cantatore, Angela; De Chiffre, Leonardo

    2012-01-01

    In this report results from an intercomparison of industrial CT scanners are presented. Three audit items, similar to common industrial parts, were selected for circulation: a single polymer part with complex geometry (Item 1), a simple geometry part made of two polymers (Item 2) and a miniature...

  9. Developments in holographic-based scanner designs

    Science.gov (United States)

    Rowe, David M.

    1997-07-01

    Holographic-based scanning systems have been used for years in the high resolution prepress markets where monochromatic lasers are generally utilized. However, until recently, due to the dispersive properties of holographic optical elements (HOEs), along with the high cost associated with recording 'master' HOEs, holographic scanners have not been able to penetrate major scanning markets such as the laser printer and digital copier markets, low to mid-range imagesetter markets, and the non-contact inspection scanner market. Each of these markets has developed cost effective laser diode based solutions using conventional scanning approaches such as polygon/f-theta lens combinations. In order to penetrate these markets, holographic-based systems must exhibit low cost and immunity to wavelength shifts associated with laser diodes. This paper describes recent developments in the design of holographic scanners in which multiple HOEs, each possessing optical power, are used in conjunction with one curved mirror to passively correct focal plane position errors and spot size changes caused by the wavelength instability of laser diodes. This paper also describes recent advancements in low cost production of high quality HOEs and curved mirrors. Together these developments allow holographic scanners to be economically competitive alternatives to conventional devices in every segment of the laser scanning industry.

  10. A PET scanner developed by CERN

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This image shows a Positron Emission Tomography (PET) scanner at the Hopital Cantonal Universitaire de Genève. The multiwire proportional chamber, developed at CERN in the mid-1970s, was soon recognized as a potential device for medical imaging. It is much more sensitive than previous devices and greatly reduces the dose of radiation received by the patient.

  11. Wire scanner software and firmware issues

    International Nuclear Information System (INIS)

    Gilpatrick, John Doug

    2008-01-01

    The Los Alamos Neutron Science Center facility presently has 110 slow wire-scanning profile measurement instruments located along its various beam lines. These wire scanners were developed and have been operating for at least 30 years. While the wire scanners solved many operational problems and have served the facility well, they increasingly suffer from several limitations, such as maintenance and reliability problems, antiquated components, and slow data acquisition. In order to refurbish these devices, the wire scanners will be replaced with newer versions. The replacement will consist of a completely new beam-line actuator, new cables, new electronics and brand-new software and firmware. This note describes the functions and modes of operation that the LabVIEW VI software on the real-time controller and the FPGA LabVIEW firmware will be required to provide. It will be especially interesting to understand the overall architecture of these LabVIEW VIs. While this note endeavors to describe all of the requirements and issues for the wire scanners, undoubtedly there are missing details that will be added as time progresses.

  12. Learning and Teaching with a Computer Scanner

    Science.gov (United States)

    Planinsic, G.; Gregorcic, B.; Etkina, E.

    2014-01-01

    This paper introduces the readers to simple inquiry-based activities (experiments with supporting questions) that one can do with a computer scanner to help students learn and apply the concepts of relative motion in 1 and 2D, vibrational motion and the Doppler effect. We also show how to use these activities to help students think like…

  13. Current segmented gamma-ray scanner technology

    International Nuclear Information System (INIS)

    Bjork, C.W.

    1987-01-01

    A new generation of segmented gamma-ray scanners has been developed at Los Alamos for scrap and waste measurements at the Savannah River Plant and the Los Alamos Plutonium Facility. The new designs are highly automated and exhibit special features such as good segmentation and thorough shielding to improve performance

  14. Get Mobile – The Smartphone Brain Scanner

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Stopczynski, Arkadiusz; Petersen, Michael Kai

    This demonstration will provide live-interaction with a smartphone brain scanner consisting of a low-cost wireless 14-channel EEG headset (Emotiv Epoc) and a mobile device. With our system it is possible to perform real-time functional brain imaging on a smartphone device, including stimulus...

  15. Scanner and irradiation: optimization of protocols

    International Nuclear Information System (INIS)

    Duchemin, J.; Martine-Rollet, B.; Lienart, S.; Mobailly, M.; Florin, J.P.; Beregi, J.P.; Puech, N.

    2006-01-01

    The irradiation of patients and personnel has increased with the arrival of multi-detector CT scanners. The objective of this work is to produce a didactic poster to inform and raise awareness about the irradiation associated with CT scanning and to propose protective solutions. (N.C.)

  16. submitter Dynamical Models of a Wire Scanner

    CERN Document Server

    Barjau, Ana; Dehning, Bernd

    2016-01-01

    The accuracy of the beam profile measurements achievable by the current wire scanners at CERN is limited by the vibrations of their mechanical parts. In particular, the vibrations of the carbon wire represent the major source of wire position uncertainty which limits the beam profile measurement accuracy. In the coming years, due to the Large Hadron Collider (LHC) luminosity upgrade, a wire traveling speed up to 20 $m s^{−1}$ and a position measurement accuracy of the order of 1 μm will be required. A new wire scanner design based on the understanding of the wire vibration origin is therefore needed. We present the models developed to understand the main causes of the wire vibrations observed in an existing wire scanner. The development and tuning of those models are based on measurements and tests performed on that CERN proton synchrotron (PS) scanner. The final model for the (wire + fork) system has six degrees-of-freedom (DOF). The wire equations contain three different excitation terms: inertia...

  17. Hyperspectral microscopy to identify foodborne bacteria with optimum lighting source

    Science.gov (United States)

    Hyperspectral microscopy is an emerging technology for rapid detection of foodborne pathogenic bacteria. Since scattering spectral signatures from hyperspectral microscopic images (HMI) vary with lighting sources, it is important to select optimal lights. The objective of this study is to compare t...

  18. 3D Reconstruction from UAV-Based Hyperspectral Images

    Science.gov (United States)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing a 3D profile from a set of UAV-based images yields hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is thus an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing hyperspectral information and the 3D position of each point. First, we adopt the free and open-source software Visual SFM, which is based on the structure-from-motion (SFM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The resulting product can be used to support further research and applications.
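
    The step of attaching spectral information to each 3D point is described only in general terms; the sketch below shows one straightforward way to do it, by projecting the SFM point cloud into a hyperspectral frame with a known camera matrix and sampling the nearest pixel spectrum. The projection model and variable names are assumptions, not the authors' MATLAB implementation.

    ```python
    import numpy as np

    def attach_spectra(points_xyz, cube, P):
        """Sample a spectrum for each 3D point from one hyperspectral frame.

        points_xyz : (N, 3) point cloud recovered by structure from motion
        cube       : (rows, cols, bands) hyperspectral image
        P          : (3, 4) camera projection matrix for that frame (assumed known)
        """
        homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
        uvw = homog @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # image column
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)   # image row
        rows, cols, bands = cube.shape
        valid = (u >= 0) & (u < cols) & (v >= 0) & (v < rows)
        spectra = np.full((points_xyz.shape[0], bands), np.nan)
        spectra[valid] = cube[v[valid], u[valid], :]
        return spectra   # one spectrum per point, NaN where the point is off-frame
    ```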

  19. Postfire soil burn severity mapping with hyperspectral image unmixing

    Science.gov (United States)

    Peter R. Robichaud; Sarah A. Lewis; Denise Y. M. Laes; Andrew T. Hudak; Raymond F. Kokaly; Joseph A. Zamudio

    2007-01-01

    Burn severity is mapped after wildfires to evaluate immediate and long-term fire effects on the landscape. Remotely sensed hyperspectral imagery has the potential to provide important information about fine-scale ground cover components that are indicative of burn severity after large wildland fires. Airborne hyperspectral imagery and ground data were collected after...

  20. Evaluation of Handheld Scanners for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wadea Ameen

    2018-01-01

    Full Text Available The process of generating a computerized geometric model for an existing part is known as Reverse Engineering (RE). It is a very useful technique in product development and plays a significant role in the automotive, aerospace, and medical industries. In fact, it has been receiving remarkable attention in manufacturing industries owing to its advanced data-acquisition technologies. The process of RE is based on two primary steps: data acquisition (also known as scanning) and data processing. To facilitate point-data acquisition, a variety of scanning systems is available with different capabilities and limitations. Although the optical control of 3D scanners is fully developed, several factors can still affect the quality of the scanned data. As a result, the proper selection of scanning parameters, such as resolution, laser power, shutter time, etc., becomes very crucial. This kind of investigation can be very helpful and provide users with guidelines to identify the appropriate factors. Moreover, it is worth noting that no single system is ideal for all applications. Accordingly, this work has compared two portable (handheld) systems, based on laser scanning and white-light optical scanning, for automotive applications. A car door containing a free-form surface has been used to achieve the above-mentioned goal. The design of experiments has been employed to determine the effects of different scanning parameters and optimize them. The capabilities and limitations have been identified by comparing the two scanners in terms of accuracy, scanning time, triangle numbers, ease of use, and portability. Then, the relationships between the system capabilities and the application requirements have been established. The results revealed that the laser scanner performed better than the white-light scanner in terms of accuracy, while the white-light scanner performed better in terms of acquisition speed and triangle numbers.

  1. Blind estimation of blur in hyperspectral images

    Science.gov (United States)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean that no knowledge is assumed of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur point spread function (PSF) in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur point spread function effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges through adequate thresholding of the cumulative distribution of their gradient magnitudes is introduced. Besides, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic spatial
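
    One ingredient named in the abstract, the selection of salient edges by thresholding the cumulative distribution of gradient magnitudes, can be sketched as follows; the quantile level is an assumed value and the PSF estimation and regularization stages themselves are omitted.

    ```python
    import numpy as np

    def select_salient_edges(channel, keep_fraction=0.02):
        """Keep only the strongest-gradient pixels of one hyperspectral channel.

        keep_fraction is an assumed value; the paper tunes the threshold from the
        cumulative distribution of gradient magnitudes quasi-automatically.
        """
        gy, gx = np.gradient(channel.astype(float))
        gmag = np.hypot(gx, gy)
        threshold = np.quantile(gmag, 1.0 - keep_fraction)
        return gmag >= threshold   # boolean mask of candidate salient edges
    ```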

  2. Body scanners: are they dangerous for health?; Scanners corporels: dangereux pour la sante?

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-07-01

    As there is a debate about the risks of cancer and congenital malformation associated with the use of body scanners, notably in airports, this document recalls and comments on the IAEA statement on this issue. According to a study performed by this international agency, the radiation dose is very low. The French IRSN is more cautious, however, and recommends not using X-ray scanners but looking instead for technologies that do not use ionizing radiation

  3. Characterization of Bovine Serum Albumin Blocking Efficiency on Epoxy-Functionalized Substrates for Microarray Applications.

    Science.gov (United States)

    Sun, Yung-Shin; Zhu, Xiangdong

    2016-10-01

    Microarrays provide a platform for high-throughput characterization of biomolecular interactions. To increase the sensitivity and specificity of microarrays, surface blocking is required to minimize the nonspecific interactions between analytes and unprinted yet functionalized surfaces. To block amine- or epoxy-functionalized substrates, bovine serum albumin (BSA) is one of the most commonly used blocking reagents because it is cheap and easy to use. Based on standard protocols from microarray manufacturers, a BSA concentration of 1% (10 mg/mL or 200 μM) and a reaction time of at least 30 min are required to efficiently block epoxy-coated slides. In this paper, we used both fluorescent and label-free methods to characterize the BSA blocking efficiency on epoxy-functionalized substrates. The blocking efficiency of BSA was characterized using a fluorescent scanner and a label-free oblique-incidence reflectivity difference (OI-RD) microscope. We found that (1) a BSA concentration of 0.05% (0.5 mg/mL or 10 μM) could give a blocking efficiency of 98%, and (2) the BSA blocking step took only about 5 min to complete. Also, from real-time and in situ measurements, we were able to calculate the conformational properties (thickness, mass density, and number density) of the BSA molecules deposited on the epoxy surface. © 2015 Society for Laboratory Automation and Screening.

  4. Occurrence and characteristics of mutual interference between LIDAR scanners

    Science.gov (United States)

    Kim, Gunzung; Eom, Jeongsook; Park, Seonghyeon; Park, Yongwan

    2015-05-01

    The LIDAR scanner is at the heart of object detection in self-driving cars. Mutual interference between LIDAR scanners has not been regarded as a problem because the percentage of vehicles equipped with them was very small. With a growing number of autonomous vehicles equipped with LIDAR scanners operating close to each other at the same time, a LIDAR scanner may receive laser pulses from other LIDAR scanners. In this paper, three types of experiments and their results are presented, according to the arrangement of two LIDAR scanners. We show the probability that any LIDAR scanner will suffer mutual interference by considering spatial and temporal overlaps. The paper presents some typical mutual interference scenarios and reports an analysis of the interference mechanism.

  5. Generalization of DNA microarray dispersion properties: microarray equivalent of t-distribution

    DEFF Research Database (Denmark)

    Novak, Jaroslav P; Kim, Seon-Young; Xu, Jun

    2006-01-01

    BACKGROUND: DNA microarrays are a powerful technology that can provide a wealth of gene expression data for disease studies, drug development, and a wide scope of other investigations. Because of the large volume and inherent variability of DNA microarray data, many new statistical methods have...

  6. Nanotechnology: moving from microarrays toward nanoarrays.

    Science.gov (United States)

    Chen, Hua; Li, Jun

    2007-01-01

    Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. A few limitations of microarrays are the requirement for relatively large sample volumes and long incubation times, as well as the limit of detection. In addition, traditional microarrays make use of bulky instrumentation for detection, and sample amplification and labeling are quite laborious, which increases analysis cost and delays results. These problems keep microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. In this chapter, we introduce the concept and advantages of nanotechnology and then describe current methods and protocols for novel nanoarrays in three aspects: (1) label-free nucleic acid analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarrays for enzyme-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.

  7. Integrative missing value estimation for microarray data.

    Science.gov (United States)

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
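
    As a stripped-down illustration of the neighbor-gene idea behind LLS-style imputation (not the iMISS multi-dataset integration itself), the sketch below fills each missing entry by regressing the gene on its most correlated complete genes; the number of neighbors and all other choices are illustrative assumptions.

    ```python
    import numpy as np

    def impute_missing(expr, n_neighbors=10):
        """Fill NaN entries of a genes-by-samples expression matrix using a
        least-squares fit on the most similar genes that have no missing values."""
        expr = expr.copy()
        complete = expr[~np.isnan(expr).any(axis=1)]
        for g in np.where(np.isnan(expr).any(axis=1))[0]:
            row = expr[g]
            obs = ~np.isnan(row)
            # rank complete genes by absolute correlation over the observed samples
            sims = np.array([abs(np.corrcoef(row[obs], c[obs])[0, 1]) for c in complete])
            neighbors = complete[np.argsort(-sims)[:n_neighbors]]
            # regress the target gene on its neighbors using the observed samples,
            # then predict the missing samples from the fitted coefficients
            coef, *_ = np.linalg.lstsq(neighbors[:, obs].T, row[obs], rcond=None)
            row[~obs] = coef @ neighbors[:, ~obs]
            expr[g] = row
        return expr
    ```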

  8. Integrative missing value estimation for microarray data

    Directory of Open Access Journals (Sweden)

    Zhou Xianghong

    2006-10-01

    Full Text Available Abstract Background Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. Results We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. Conclusion We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  9. 21 CFR 882.1925 - Ultrasonic scanner calibration test block.

    Science.gov (United States)

    2010-04-01

    Section 882.1925 Ultrasonic scanner calibration test block. (a) Identification. An ultrasonic scanner calibration test block is a block of material with known properties used to calibrate ultrasonic scanning devices (e.g., the...

  10. Laser scanner 3D terrestri e mobile

    Directory of Open Access Journals (Sweden)

    Mario Ciamba

    2013-08-01

    Full Text Available A demonstration event was recently held in Rome to inform professionals and researchers working on instrumental surveying about recent innovations concerning 3D laser scanners. The market for instrumentation dedicated to architectural and environmental surveying offers many possibilities of choice. Today the major brands produce instruments that are increasingly efficient and designed for specific areas of application, allowing professionals to make the right choice in terms of performance and economy.

  12. Detector Position Estimation for PET Scanners.

    Science.gov (United States)

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  13. Detector position estimation for PET scanners

    International Nuclear Information System (INIS)

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-01-01

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  14. Ghost signals in Allison emittance scanners

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Leitner, M.; Moehs, D.P.; Keller, R.; Welton, R.F.

    2004-01-01

    For over 20 years, Allison scanners have been used to measure emittances of low-energy ion beams. We show that scanning large trajectory angles produces ghost signals caused by the sampled beamlet impacting on an electric deflection plate. The ghost signal strength is proportional to the amount of beam entering the scanner. Depending on the ions, and their velocity, the ghost signals can have the opposite or the same polarity as the main beam signals. The ghost signals cause significant errors in the emittance estimates because they appear at large trajectory angles. These ghost signals often go undetected because they partly overlap with the real signals, are mostly below the 1% level, and often hide in the noise. A simple deflection plate modification is shown to reduce the ghost signal strength by over 99%

  15. Ghost Signals In Allison Emittance Scanners

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Leitner, M.; Keller, R.; Moehs, D.P.; Welton, R. F.

    2005-01-01

    For over 20 years, Allison scanners have been used to measure emittances of low-energy ion beams. We show that scanning large trajectory angles produces ghost signals caused by the sampled beamlet impacting on an electric deflection plate. The ghost signal strength is proportional to the amount of beam entering the scanner. Depending on the ions, and their velocity, the ghost signals can have the opposite or the same polarity as the main beam signals. The ghost signals cause significant errors in the emittance estimates because they appear at large trajectory angles. These ghost signals often go undetected because they partly overlap with the real signals, are mostly below the 1% level, and often hide in the noise. A simple deflection plate modification is shown to reduce the ghost signal strength by over 99%

  16. Development of high pressure pipe scanners

    International Nuclear Information System (INIS)

    Kim, Jae H.; Lee, Jae C.; Moon, Soon S.; Eom, Heung S.; Choi, Yu R.

    1998-12-01

    This report describes an automatic ultrasonic scanning system for pressure pipe welds, developed in this project using recent advances in mobile robotics and computing. The system consists of two modules: a robot scanner module which navigates and manipulates the scanning devices, and a data acquisition module which generates the ultrasonic signal and processes the data from the scanner. The robot has four magnetic wheels and a 2-axis manipulator to which the ultrasonic transducer is attached. The wheeled robot can navigate curved surfaces such as the outer wall of circular pipes. The magnetic wheels were optimally designed through magnetic field analysis. Free-surface sensing and line-tracking control algorithms were developed and implemented, and the control devices and software can be used in practical inspection work. We expect our system to contribute to reduced inspection time, enhanced performance, and effective management of inspection results

  17. Imaging Scanner Usage in Radiochemical Purity Test

    International Nuclear Information System (INIS)

    Norhafizah Othman; Yahaya Talib; Wan Hamirul Bahrin Wan Kamal

    2011-01-01

    An Imaging Scanner, model BIOSCAN AR-2000, has been used in the radiochemical purity test of the product of the Mo-99/Tc-99m generator. The result of this test is obtained directly, with the percentage of pertechnetate calculated from the peak area obtained by thin-layer chromatography. This paper explains the function, procedure and calibration of the instrument and discusses its advantages compared with the previous method. (author)

  18. Neurosurgical operating computerized tomographic scanner system. The CT scanner in the operating theater

    Energy Technology Data Exchange (ETDEWEB)

    Okudera, Hiroshi; Sugita, Kenichiro; Kobayashi, Shigeaki; Kimishima, Sakae; Yoshida, Hisashi

    1988-12-01

    A neurosurgical operating computerized tomography scanner system is presented. This system has been developed for obtaining intra- and postoperative CT images in the operating room. A TCT-300 scanner (manufactured by the Toshiba Co., Tokyo) is placed in the operating room. The realization of a true intraoperative CT image requires certain improvements in the CT scanner and operating table. To adjust the axis of the co-ordinates of the motor system of the MST-7000 microsurgical operating table (manufactured by the Mizuho Ika Co., Tokyo) to the CT scanner, we have designed an interface and a precise motor system so that the computer of the CT scanner can directly control the movement of the operating table. Furthermore, a new head-fixation system has been designed for producing artifact-free intraoperative CT images. The head-pins of the head-fixation system are made of carbon-fiber bars and titanium tips. A simulation study of the total system in the operating room with the CT scanner, operating table, and head holder using a skull model yielded a degree of error similar to that in the phantom testing of the original scanner. Three patients underwent resection of a glial tumor using this system. Intraoperative CT scans taken after dural opening showed a bulging of the cortex, a shift in the central structure, and a displacement of the cortical subarachnoid spaces under the influence of gravity. With a contrast medium the edge of the surrounding brain after resection was enhanced and the residual tumor mass was demonstrated clearly. This system makes it possible to obtain a noninvasive intraoperative image in a situation where structural shifts are taking place.

  19. Laser measuring scanners and their accuracy limits

    Science.gov (United States)

    Jablonski, Ryszard

    1993-09-01

    Scanning methods have gained increasing importance in recent years due to their short measuring time and wide range of application in flexible manufacturing processes. This paper is a summing up of the author's creative scientific work in the field of measuring scanners. The research conducted allowed the optimal configurations of measuring systems based on the scanning method to be elaborated. An important part of the work was the analysis of a measuring scanner as a transducer of an angular rotation into a linear displacement, which resulted in obtaining much higher accuracy and finally in working out a measuring scanner that eliminates the use of an additional reference standard. The work concludes with an attempt to determine the attainable accuracy limit of scanning measurement of both length and angle. Using a high-stability deflector and a corrected scanning lens, one can obtain angle determination over 30 (or 2 mm) to an accuracy of 0 (or 0 µm) when the measuring rate is 1000 Hz, or over the range d60 (4 mm) with accuracy 0 " (0 µm) and a measurement frequency of 6 Hz.

  20. The CT scanner as a therapy machine

    International Nuclear Information System (INIS)

    Iwamoto, K.S.; Norman, A.

    1990-01-01

    Many tumors in the brain and in other tissues can be delineated precisely in images obtained with a CT scanner. After the scan is obtained the patient is taken to another room for radiation therapy and is positioned in the beam with the aid of external markers, simulators or stereotactic devices. This procedure is time consuming and subject to error when precise localization of the beam is desired. The CT scanner itself, with the addition of the collimator, is capable of delivering radiation therapy with great precision without the need for external markers. The patient can be scanned and treated on the same table, the isocenter of the beam can be placed precisely in the center of the lesion, the beam can be restricted to just those planes in which the lesion appears, several arcs can be obtained by simply tilting the gantry, and the position of the patient in the beam can be monitored continuously during therapy. The authors describe the properties of the CTX, the CT scanner modified for therapy. (author). 6 refs.; 6 figs

  1. A scanner for single photon emission tomography

    International Nuclear Information System (INIS)

    Smith, D.B.; Cumpstey, D.E.; Evans, N.T.S.; Coleman, J.D.; Ettinger, K.V.; Mallard, J.R.

    1982-01-01

    The technique of single photon ECT has now been available for some eighteen years, but has yet still to be exploited fully. The difficulties of doing this lie in the need for gathering data of sufficiently good statistical accuracy in a reasonable counting time, in the uniformity of detector sensitivity, and in the means for correcting the image satisfactorily for photon attenuation within the body. The relative ease with which a general purpose gamma camera can be adapted to give rotation around the patient makes this an attractive practical approach to the problem. However, the sensitivity of gamma cameras over their field of view is by no means uniform, and their sensitivity is less good than that of purpose-designed scanners when no more than about ten sections through the body are required. There is therefore a need to assess the clinical usefulness of a whole body tomographic scanner of high sensitivity and uniformity. Such a machine is the Aberdeen Section Scanner Mark II described

  2. A near-infrared confocal scanner

    International Nuclear Information System (INIS)

    Lee, Seungwoo; Yoo, Hongki

    2014-01-01

    In the semiconductor industry, manufacturing of three-dimensional (3D) packages or 3D integrated circuits is a high-performance technique that requires combining several functions in a small volume. Through-silicon vias, which are vertical electrical connections extending through a wafer, can be used to direct signals between stacked chips, thus increasing areal density by stacking and connecting multiple patterned chips. While defect detection is essential in the semiconductor manufacturing process, it is difficult to identify defects within a wafer or to monitor the bonding results between bonded surfaces because silicon and many other semiconductor materials are opaque to visible wavelengths. In this context, near-infrared (NIR) imaging is a promising non-destructive method to detect defects within silicon chips, to inspect bonding between chips and to monitor the chip alignment since NIR transmits through silicon. In addition, a confocal scanner provides high-contrast, optically-sectioned images of the specimen due to its ability to reject out-of-focus noise. In this study, we report an NIR confocal scanner that rapidly acquires high-resolution images with a large field of view through silicon. Two orthogonal line-scanning images can be acquired without rotating the system or the specimen by utilizing two orthogonally configured resonant scanning mirrors. This NIR confocal scanner can be efficiently used as an in-line inspection system when manufacturing semiconductor devices by rapidly detecting defects on and beneath the surface. (paper)

  3. Impedance Characterisation of the SPS Wire Scanner

    CERN Document Server

    AUTHOR|(CDS)2091911; Prof. Sillanpää, Mika

    As a beam diagnostic tool, the SPS wire scanner interacts with the proton bunches traversing the vacuum pipes of the Super Proton Synchrotron particle accelerator. Following the interaction, the bunches decelerate or experience momentum kicks off-axis and couple energy to the cavity walls, resonances and to the diagnostic tool, the scanning wire. The beam coupling impedance and, in particular, the beam induced heating of the wire motivate the characterisation and redesign of the SPS wire scanner. In this thesis, we characterise RF-wise the low frequency modes of the SPS wire scanner. These have the highest contribution to the impedance. We measure the cavity modes in terms of resonance frequency and quality factor by traditional measurement techniques and data analysis. We carry out a 4-port measurement to evaluate the beam coupling to the scanning wire, that yields the spectral heating power. If combined with the simulations, one is able to extract the beam coupling impedance and deduce the spectral dissipa...

  4. Scanner-based macroscopic color variation estimation

    Science.gov (United States)

    Kuo, Chunghui; Lai, Di; Zeise, Eric

    2006-01-01

    Flatbed scanners have been adopted successfully for the measurement of microscopic image artifacts, such as granularity and mottle, in print samples because of their capability of providing full-color, high-resolution images. Accurate macroscopic color measurement relies on the use of colorimeters or spectrophotometers to provide a surrogate for human vision. The color response characteristics of flatbed scanners differ considerably from any standard colorimetric response, which limits the utility of a flatbed scanner as a macroscopic color measuring device. This metamerism constraint can be significantly relaxed if our objective is mainly to quantify the color variations within a printed page or between pages, where a small bias in measured colors can be tolerated as long as the color distributions relative to the individual mean values are similar. Two scenarios for converting color from the device RGB color space to a standardized color space such as CIELab are studied in this paper, blind and semi-blind color transformation, depending on the availability of the black channel information. We show that both approaches offer satisfactory results in quantifying macroscopic color variation across pages, while the semi-blind color transformation further provides fairly accurate color prediction capability.
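
    A minimal version of the semi-blind idea, learning a mapping from scanner RGB to CIELab from patches with known Lab values, can be sketched as a polynomial least-squares fit; the calibration data, polynomial order and variable names are assumptions for illustration only.

    ```python
    import numpy as np

    def _poly_features(rgb):
        # second-order polynomial expansion of scanner RGB values scaled to 0..1
        r, g, b = rgb.T
        return np.column_stack([np.ones_like(r), r, g, b,
                                r * g, r * b, g * b, r * r, g * g, b * b])

    def fit_rgb_to_lab(rgb_patches, lab_patches):
        """Fit a matrix mapping polynomial RGB features to measured CIELab values.

        rgb_patches : (N, 3) mean scanner RGB of calibration patches
        lab_patches : (N, 3) CIELab values of the same patches from a colorimeter
        """
        M, *_ = np.linalg.lstsq(_poly_features(rgb_patches), lab_patches, rcond=None)
        return M

    def rgb_to_lab(rgb, M):
        return _poly_features(rgb) @ M   # estimated L*, a*, b* per input patch
    ```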

  5. Discovering biological progression underlying microarray samples.

    Directory of Open Access Journals (Sweden)

    Peng Qiu

    2011-04-01

    Full Text Available In biological systems that undergo processes such as differentiation, a clear concept of progression exists. We present a novel computational approach, called Sample Progression Discovery (SPD), to discover patterns of biological progression underlying microarray gene expression data. SPD assumes that individual samples of a microarray dataset are related by an unknown biological process (i.e., differentiation, development, cell cycle, disease progression), and that each sample represents one unknown point along the progression of that process. SPD aims to organize the samples in a manner that reveals the underlying progression and to simultaneously identify subsets of genes that are responsible for that progression. We demonstrate the performance of SPD on a variety of microarray datasets that were generated by sampling a biological process at different points along its progression, without providing SPD any information about the underlying process. When applied to a cell cycle time series microarray dataset, SPD was not provided any prior knowledge of the samples' time order or of which genes are cell-cycle regulated, yet SPD recovered the correct time order and identified many genes that have been associated with the cell cycle. When applied to B-cell differentiation data, SPD recovered the correct order of stages of normal B-cell differentiation and the linkage of preB-ALL tumor cells with their cell of origin, preB. When applied to mouse embryonic stem cell differentiation data, SPD uncovered a landscape of ESC differentiation into various lineages and genes that represent both generic and lineage-specific processes. When applied to a prostate cancer microarray dataset, SPD identified gene modules that reflect a progression consistent with disease stages. SPD may be best viewed as a novel tool for synthesizing biological hypotheses because it provides a likely biological progression underlying a microarray dataset and, perhaps more importantly, the

  6. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models with different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes run in real time and are highly realistic, with frame rates of up to 100 Hz. By saving all of the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared bands or the visible band, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.

  7. Using hyperspectral imaging technology to identify diseased tomato leaves

    Science.gov (United States)

    Li, Cuiling; Wang, Xiu; Zhao, Xueguan; Meng, Zhijun; Zou, Wei

    2016-11-01

    During the growth of tomato plants, genetic factors, a poor environment, or the action of parasites can produce a series of unusual symptoms in the plants' physiology, tissue structure and external form; as a result, the plants cannot grow normally, which in turn affects tomato yield and economic benefit. Hyperspectral images usually have high spectral resolution and contain both spectral and image information, so this study adopted hyperspectral imaging technology to identify diseased tomato leaves and developed a simple hyperspectral imaging system, including a halogen-lamp light source unit, a hyperspectral image acquisition unit and a data processing unit. The spectrometer detection wavelength ranged from 400 nm to 1000 nm. After the hyperspectral images of tomato leaves were captured, they had to be calibrated. This research used a spectral angle matching method and a spectral red-edge parameter discriminant method, respectively, to identify diseased tomato leaves. The spectral red-edge parameter discriminant method produced higher recognition accuracy, above 90%. The results show that using hyperspectral imaging technology to identify diseased tomato leaves is feasible and provides a discriminant basis for subsequent disease control of tomato plants.
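
    The spectral angle matching mentioned above compares each pixel spectrum with a reference spectrum (for example, that of a healthy leaf) by the angle between them; a minimal sketch follows, with the angle threshold an illustrative assumption.

    ```python
    import numpy as np

    def spectral_angle(pixel_spectra, reference):
        """Spectral angle (in radians) between each pixel spectrum and a reference.

        pixel_spectra : (n_pixels, n_bands) array, reference : (n_bands,) array.
        """
        num = pixel_spectra @ reference
        den = np.linalg.norm(pixel_spectra, axis=1) * np.linalg.norm(reference)
        return np.arccos(np.clip(num / (den + 1e-12), -1.0, 1.0))

    # Pixels whose angle to an assumed healthy-leaf reference exceeds a chosen
    # threshold (here 0.1 rad, purely illustrative) would be flagged as diseased:
    # diseased = spectral_angle(leaf_spectra, healthy_reference) > 0.1
    ```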

  8. Particle discrimination by an automatic scanner for nuclear emulsion plates

    International Nuclear Information System (INIS)

    Heinecke, W.; Fischer, B.E.

    1976-01-01

    An automatic scanner for nuclear emulsion plates has been improved by adding particle discrimination. By determining the mean luminosity of tracks under darkfield illumination in addition to the track length, a clear discrimination has been obtained, at least for lighter particles. The scanning speed of the original automatic scanner has not been reduced. The scanner works up to 200 times faster than a human scanner. Besides enabling particle discrimination, the determination of the mean track luminosity led to a lower sensitivity to perturbations such as a high background of accidentally developed silver grains, scratches in the emulsion, etc. The reproducibility of the results obtained by the automatic scanner is better than 5%. (Auth.)

  9. Scanners for analytic print measurement: the devil in the details

    Science.gov (United States)

    Zeise, Eric K.; Williams, Don; Burns, Peter D.; Kress, William C.

    2007-01-01

    Inexpensive and easy-to-use linear and area-array scanners have frequently substituted as colorimeters and densitometers for low-frequency (i.e., large area) hard copy image measurement. Increasingly, scanners are also being used for high spatial frequency, image microstructure measurements, which were previously reserved for high performance microdensitometers. In this paper we address characteristics of flatbed reflection scanners in the evaluation of print uniformity, geometric distortion, geometric repeatability and the influence of scanner MTF and noise on analytic measurements. Suggestions are made for the specification and evaluation of scanners to be used in print image quality standards that are being developed.

  10. The use of microarrays in microbial ecology

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, G.L.; He, Z.; DeSantis, T.Z.; Brodie, E.L.; Zhou, J.

    2009-09-15

    Microarrays have proven to be a useful and high-throughput method to provide targeted DNA sequence information for up to many thousands of specific genetic regions in a single test. A microarray consists of multiple DNA oligonucleotide probes that, under high stringency conditions, hybridize only to specific complementary nucleic acid sequences (targets). A fluorescent signal indicates the presence and, in many cases, the abundance of genetic regions of interest. In this chapter we will look at how microarrays are used in microbial ecology, especially with the recent increase in microbial community DNA sequence data. Of particular interest to microbial ecologists, phylogenetic microarrays are used for the analysis of phylotypes in a community and functional gene arrays are used for the analysis of functional genes, and, by inference, phylotypes in environmental samples. A phylogenetic microarray that has been developed by the Andersen laboratory, the PhyloChip, will be discussed as an example of a microarray that targets the known diversity within the 16S rRNA gene to determine microbial community composition. Using multiple, confirmatory probes to increase the confidence of detection and a mismatch probe for every perfect match probe to minimize the effect of cross-hybridization by non-target regions, the PhyloChip is able to simultaneously identify any of thousands of taxa present in an environmental sample. The PhyloChip is shown to reveal greater diversity within a community than rRNA gene sequencing due to the placement of the entire gene product on the microarray compared with the analysis of up to thousands of individual molecules by traditional sequencing methods. A functional gene array that has been developed by the Zhou laboratory, the GeoChip, will be discussed as an example of a microarray that dynamically identifies functional activities of multiple members within a community. The recent version of GeoChip contains more than 24,000 50mer

  11. 3D Biomaterial Microarrays for Regenerative Medicine

    DEFF Research Database (Denmark)

    Gaharwar, Akhilesh K.; Arpanaei, Ayyoob; Andresen, Thomas Lars

    2015-01-01

    Three dimensional (3D) biomaterial microarrays hold enormous promise for regenerative medicine because of their ability to accelerate the design and fabrication of biomimetic materials. Such tissue-like biomaterials can provide an appropriate microenvironment for stimulating and controlling stem cell differentiation into tissue-specific lineages. The use of 3D biomaterial microarrays can, if optimized correctly, result in a more than 1000-fold reduction in biomaterials and cells consumption when engineering optimal materials combinations, which makes these miniaturized systems very attractive for tissue engineering and drug screening applications.

  12. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  13. Inland excess water mapping using hyperspectral imagery

    Directory of Open Access Journals (Sweden)

    Csendes Bálint

    2016-01-01

    Full Text Available Hyperspectral imaging combined with the potentials of airborne scanning is a powerful tool to monitor environmental processes. The aim of this research was to use high resolution remotely sensed data to map the spatial extent of inland excess water patches in a Hungarian study area that is known for its oil and gas production facilities. Periodic floodings show high spatial and temporal variability, nevertheless, former studies have proven that the affected soil surfaces can be accurately identified. Besides separability measurements, we performed spectral angle classification, which gave a result of 85% overall accuracy and we also compared the generated land cover map with LIDAR elevation data.

  14. Snapshot hyperspectral imaging and practical applications

    International Nuclear Information System (INIS)

    Wong, G

    2009-01-01

    Traditional broadband imaging involves the digital representation of a remote scene within a reduced colour space. Hyperspectral imaging exploits the full spectral dimension, which better reflects the continuous nature of actual spectra. Conventional techniques are all time-delayed whereby spatial or spectral scanning is required for hypercube generation. An innovative and patented technique developed at Heriot-Watt University offers significant potential as a snapshot sensor, to enable benefits for the wider public beyond aerospace imaging. This student-authored paper seeks to promote awareness of this field within the photonic community and its potential advantages for real-time practical applications.

  15. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral bands; then a wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
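
    A minimal sketch of the subspace-partitioning idea is shown below, assuming a simple rule that starts a new subspace whenever the correlation between neighbouring bands drops below a threshold; the threshold value and the exact grouping rule are assumptions, not the paper's.

      import numpy as np

      def partition_bands_by_correlation(cube, threshold=0.95):
          """Group adjacent spectral bands into subspaces using the band-to-band correlation matrix.

          cube: (rows, cols, bands) array. A new subspace is started whenever the
          correlation between neighbouring bands drops below `threshold`.
          """
          rows, cols, bands = cube.shape
          flat = cube.reshape(-1, bands).astype(float)
          corr = np.corrcoef(flat, rowvar=False)      # (bands, bands) correlation matrix
          groups, current = [], [0]
          for b in range(1, bands):
              if corr[b - 1, b] >= threshold:
                  current.append(b)
              else:
                  groups.append(current)
                  current = [b]
          groups.append(current)
          return groups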

  16. Development and application of a microarray meter tool to optimize microarray experiments

    Directory of Open Access Journals (Sweden)

    Rouse Richard JD

    2008-07-01

    Full Text Available Abstract Background Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large scale projects. Findings The microarray meter consists of nucleic acid targets (reference and dynamic range control) and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization) using three robots equipped with capillary printing pins. Discussion The generation of microarray data with minimal variation requires consistent quality control of the (DNA) microarray manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics: (a) a measure of variability in the signal intensities, (b) a measure of the signal dynamic range and (c) a measure of variability of the spot morphologies.
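
    The three probe-performance metrics can be sketched in Python as follows; the exact definitions used by the microarray meter are not given above, so the formulas below (coefficients of variation and a log2 dynamic range) are illustrative assumptions only.

      import numpy as np

      def probe_performance(signal_intensities, spot_areas):
          """Score probe performance with three simple metrics analogous to those described above:
          (a) variability of signal intensities, (b) signal dynamic range, (c) spot morphology variability.

          signal_intensities: background-corrected intensities of replicate spots;
          spot_areas: measured areas (in pixels) of the same spots.
          """
          x = np.asarray(signal_intensities, dtype=float)
          a = np.asarray(spot_areas, dtype=float)
          signal_cv = x.std() / x.mean()                            # (a) coefficient of variation
          dynamic_range = np.log2(x.max() / max(x.min(), 1e-9))     # (b) usable range in log2 units
          morphology_cv = a.std() / a.mean()                        # (c) spot-size variability
          return {"signal_cv": signal_cv,
                  "log2_dynamic_range": dynamic_range,
                  "morphology_cv": morphology_cv}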

  17. The number and distribution of computerised tomography scanners in Turkey

    International Nuclear Information System (INIS)

    Semin, S.; Amato, Z.

    1999-01-01

    The goal of this study was to investigate the number and distribution of CT scanners in Turkey. Our results show 173 CT scanners in Turkey in 1994, which equals 2.9 scanners per million people. All of the scanners are located in 45 cities, where 81 % of the population resides. The other 31 cities in Turkey have no scanners. Of the 173 scanners, 103 (59.6 %) are owned by the private sector and the other 70 are owned by the public sector. Of Turkey's CT scanners, 49.2 % are located in private health centres, 21.9 % in university hospitals, 16.7 % in Ministry of Health (MOH) hospitals, 10.4 % in private hospitals and 1.8 % in social security hospitals. (orig.)

  18. Microarray Я US: a user-friendly graphical interface to Bioconductor tools that enables accurate microarray data analysis and expedites comprehensive functional analysis of microarray results.

    Science.gov (United States)

    Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu

    2012-06-08

    Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issue of microarray data analysis due to the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting-edge Bioconductor packages for researchers with no knowledge of the R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.

  19. Principles of gene microarray data analysis.

    Science.gov (United States)

    Mocellin, Simone; Rossi, Carlo Riccardo

    2007-01-01

    The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected with tumor. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.

  20. Detection of selected plant viruses by microarrays

    OpenAIRE

    HRABÁKOVÁ, Lenka

    2013-01-01

    The main aim of this master thesis was the simultaneous detection of four selected plant viruses - Apple mosaic virus, Plum pox virus, Prunus necrotic ringspot virus and Prune dwarf virus - by microarrays. The intermediate step in the detection process was the optimization of a multiplex polymerase chain reaction (PCR).

  1. LNA-modified isothermal oligonucleotide microarray for ...

    Indian Academy of Sciences (India)

    2014-10-20


  2. Gene Expression Analysis Using Agilent DNA Microarrays

    DEFF Research Database (Denmark)

    Stangegaard, Michael

    2009-01-01

    Hybridization of labeled cDNA to microarrays is an intuitively simple and a vastly underestimated process. If it is not performed, optimized, and standardized with the same attention to detail as e.g., RNA amplification, information may be overlooked or even lost. Careful balancing of the amount ...

  3. Microarrays (DNA Chips) for the Classroom Laboratory

    Science.gov (United States)

    Barnard, Betsy; Sussman, Michael; BonDurant, Sandra Splinter; Nienhuis, James; Krysan, Patrick

    2006-01-01

    We have developed and optimized the necessary laboratory materials to make DNA microarray technology accessible to all high school students at a fraction of both cost and data size. The primary component is a DNA chip/array that students "print" by hand and then analyze using research tools that have been adapted for classroom use. The…

  4. Comparing transformation methods for DNA microarray data

    NARCIS (Netherlands)

    Thygesen, Helene H.; Zwinderman, Aeilko H.

    2004-01-01

    Background: When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include

  5. Dried fruits quality assessment by hyperspectral imaging

    Science.gov (United States)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of their relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting and selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or when aiming at an "early detection" of the pathogen agents responsible for future mould and decay development. The surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on an HSI approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired with a laboratory device equipped with two HSI systems working in two different spectral ranges: visible-near infrared (400-1000 nm) and near infrared (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
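
    The simple band-ratio approach can be sketched as below; the two wavelengths being ratioed and the decision threshold are purely illustrative assumptions, not the bands identified in the study.

      import numpy as np

      def band_ratio_index(cube, wavelengths_nm, band_a_nm, band_b_nm):
          """Two-band ratio image from a (rows, cols, bands) reflectance cube.

          wavelengths_nm: 1-D array of band centres; the nearest available bands
          to the requested wavelengths are used.
          """
          ia = int(np.argmin(np.abs(wavelengths_nm - band_a_nm)))
          ib = int(np.argmin(np.abs(wavelengths_nm - band_b_nm)))
          return cube[:, :, ia] / (cube[:, :, ib] + 1e-9)

      # Hypothetical usage (wavelengths and threshold are assumptions):
      # ratio = band_ratio_index(cube, wl, 1200, 1450); defect_mask = ratio > 1.1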

  6. Hyperspectral monitoring of chemically sensitive plant sentinels

    Science.gov (United States)

    Simmons, Danielle A.; Kerekes, John P.; Raqueno, Nina G.

    2009-08-01

    Automated detection of chemical threats is essential for an early warning of a potential attack. Harnessing plants as bio-sensors allows for distributed sensing without a power supply. Monitoring the bio-sensors requires a specifically tailored hyperspectral system. Tobacco plants have been genetically engineered to de-green when a material of interest (e.g. zinc, TNT) is introduced to their immediate vicinity. The reflectance spectra of the bio-sensors must be accurately characterized during the de-greening process for them to play a role in an effective warning system. Hyperspectral data have been collected under laboratory conditions to determine the key regions in the reflectance spectra associated with the de-greening phenomenon. Bio-sensor plants and control (non-genetically engineered) plants were exposed to TNT over the course of two days and their spectra were measured every six hours. Rochester Institute of Technology's Digital Imaging and Remote Sensing Image Generation Model (DIRSIG) was used to simulate detection of de-greened plants in the field. The simulated scene contains a brick school building, sidewalks, trees and the bio-sensors placed at the entrances to the buildings. Trade studies of the bio-sensor monitoring system were also conducted using DIRSIG simulations. System performance was studied as a function of field of view, pixel size, illumination conditions, radiometric noise, spectral waveband dependence and spectral resolution. Preliminary results show that the most significant change in reflectance during the de-greening period occurs in the near infrared region.

  7. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of the available training data. To tackle the problems associated with the training data, researchers have put effort into extending large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address two-class classification and so cannot be used directly for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm gives acceptable results for hyperspectral data clustering.
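
    A toy two-class sketch of the alternating-optimization idea is given below: an SVM is repeatedly fitted to the current label guess and the samples are relabelled from its decision function. The k-means initialization, the median-based balancing step and the use of scikit-learn are assumptions for illustration and do not reproduce the multi-class algorithm evaluated in the paper.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import LinearSVC

      def mmc_two_class(X, n_iter=10, random_state=0):
          """Toy two-class maximum-margin clustering by alternating optimization."""
          # Initial label guess from k-means, so both classes are populated.
          labels = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit_predict(X)
          for _ in range(n_iter):
              svm = LinearSVC(C=1.0).fit(X, labels)
              scores = svm.decision_function(X)
              # Relabel by thresholding at the median score to keep the clusters
              # roughly balanced and avoid the trivial one-cluster solution.
              new_labels = (scores > np.median(scores)).astype(int)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels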

  8. Geometric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Vreys Kristin

    2016-03-01

    Full Text Available Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of the land surface. The correct mapping of the pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.

  9. APEX - the Hyperspectral ESA Airborne Prism Experiment

    Directory of Open Access Journals (Sweden)

    Koen Meuleman

    2008-10-01

    Full Text Available The airborne ESA-APEX (Airborne Prism Experiment) hyperspectral mission simulator is described with its distinct specifications to provide high quality remote sensing data. The concept of an automatic calibration, performed in the Calibration Home Base (CHB) by using the Control Test Master (CTM), the In-Flight Calibration facility (IFC), quality flagging (QF) and specific processing in a dedicated Processing and Archiving Facility (PAF), and vicarious calibration experiments are presented. A preview of major applications and the corresponding development efforts to provide scientific data products up to level 2/3 to the user is presented for limnology, vegetation, aerosols, general classification routines and rapid mapping tasks. BRDF (Bidirectional Reflectance Distribution Function) issues are discussed and the spectral database SPECCHIO (Spectral Input/Output) is introduced. The optical performance as well as the dedicated software utilities make APEX a state-of-the-art hyperspectral sensor, capable of (a) satisfying the needs of several research communities and (b) helping the understanding of the Earth’s complex mechanisms.

  10. Identifying Fishes through DNA Barcodes and Microarrays.

    Directory of Open Access Journals (Sweden)

    Marc Kochzius

    2010-09-01

    Full Text Available International fish trade reached an import value of 62.8 billion Euro in 2006, of which 44.6% are covered by the European Union. Species identification is a key problem throughout the life cycle of fishes: from eggs and larvae to adults in fisheries research and control, as well as processed fish products in consumer protection. This study aims to evaluate the applicability of the three mitochondrial genes 16S rRNA (16S), cytochrome b (cyt b), and cytochrome oxidase subunit I (COI) for the identification of 50 European marine fish species by combining techniques of "DNA barcoding" and microarrays. In a DNA barcoding approach, Neighbour Joining (NJ) phylogenetic trees of 369 16S, 212 cyt b, and 447 COI sequences indicated that cyt b and COI are suitable for unambiguous identification, whereas 16S failed to discriminate closely related flatfish and gurnard species. In the course of probe design for DNA microarray development, each of the markers yielded a high number of potentially species-specific probes in silico, although many of them were rejected based on microarray hybridisation experiments. None of the markers provided probes to discriminate the sibling flatfish and gurnard species. However, since 16S-probes were less negatively influenced by the "position of label" effect and showed the lowest rejection rate and the highest mean signal intensity, 16S is more suitable for DNA microarray probe design than cyt b and COI. The large portion of rejected COI-probes after hybridisation experiments (>90%) renders the DNA barcoding marker as rather unsuitable for this high-throughput technology. Based on these data, a DNA microarray containing 64 functional oligonucleotide probes for the identification of 30 out of the 50 fish species investigated was developed. It represents the next step towards an automated and easy-to-handle method to identify fish, ichthyoplankton, and fish products.

  11. Facilitating functional annotation of chicken microarray data

    Directory of Open Access Journals (Sweden)

    Gresham Cathy R

    2009-10-01

    Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the little functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the biggest arrays that serve as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to Gene Ontology (GO). However, the GO annotation data presented by Affymetrix is incomplete; for example, it does not show references linked to manually annotated functions. In addition, there is no tool that enables microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a significant amount of time in searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their dataset. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray dataset into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies. The GO annotation data generated will be available for public use via AgBase website and

  12. Hyperspectral stimulated emission depletion microscopy and methods of use thereof

    Science.gov (United States)

    Timlin, Jerilyn A; Aaron, Jesse S

    2014-04-01

    A hyperspectral stimulated emission depletion ("STED") microscope system for high-resolution imaging of samples labeled with multiple fluorophores (e.g., two to ten fluorophores). The hyperspectral STED microscope includes a light source, optical systems configured for generating an excitation light beam and a depletion light beam, optical systems configured for focusing the excitation and depletion light beams on a sample, and systems for collecting and processing data generated by interaction of the excitation and depletion light beams with the sample. Hyperspectral STED data may be analyzed using multivariate curve resolution analysis techniques to deconvolute emission from the multiple fluorophores. The hyperspectral STED microscope described herein can be used for multi-color, subdiffraction imaging of samples (e.g., materials and biological materials) and for analyzing a tissue by Forster Resonance Energy Transfer ("FRET").

  13. Hyperspectral imaging for non-contact analysis of forensic traces

    NARCIS (Netherlands)

    Edelman, G. J.; Gaston, E.; van Leeuwen, T. G.; Cullen, P. J.; Aalders, M. C. G.

    2012-01-01

    Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy, to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers

  14. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    Science.gov (United States)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information about the other. By selecting a few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
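
    A compact sketch of the FKT construction described above: the summed class covariances are whitened, after which the two classes share eigenvectors with complementary eigenvalues and the leading target-class directions are kept. The variable names and the sample-covariance estimates are assumptions for illustration.

      import numpy as np

      def fkt_basis(target_samples, clutter_samples, n_components=10):
          """Fukunaga-Koontz transform: directions that best represent the desired
          class while carrying the least clutter energy.

          target_samples, clutter_samples: (n_samples, n_bands) arrays of spectra.
          Returns a (n_bands, n_components) projection matrix for the target class.
          """
          R1 = np.cov(target_samples, rowvar=False)
          R2 = np.cov(clutter_samples, rowvar=False)
          # Whiten the sum of the two covariance matrices.
          evals, evecs = np.linalg.eigh(R1 + R2)
          keep = evals > 1e-10                         # guard against singular directions
          P = evecs[:, keep] / np.sqrt(evals[keep])
          # In the whitened space the two classes share eigenvectors whose
          # eigenvalues are complementary (d for the target, 1 - d for the clutter).
          S1 = P.T @ R1 @ P
          d, V = np.linalg.eigh(S1)
          order = np.argsort(d)[::-1]                  # largest target energy first
          return P @ V[:, order[:n_components]]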

  15. Upconversion applied for mid-IR hyperspectral image acquisition

    DEFF Research Database (Denmark)

    Tidemand-Lichtenberg, Peter; Kehlet, Louis Martinus; Sanders, Nicolai Højer

    2015-01-01

    Different schemes for upconversion mid-IR hyperspectral imaging are implemented and compared in terms of spectral coverage, spectral resolution, speed and noise. Phase-match scanning and scanning of the object within the field of view are considered.

  16. A survey of landmine detection using hyperspectral imaging

    Science.gov (United States)

    Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo

    2017-02-01

    Hyperspectral imaging is a trending technique in remote sensing that finds its application in many different areas, such as agriculture, mapping, target detection, food quality monitoring, etc. This technique gives the ability to remotely identify the composition of each pixel of the image. Therefore, it is a natural candidate for the purpose of landmine detection, thanks to its inherent safety and fast response time. In this paper, we present the results of several studies that employed hyperspectral imaging for the purpose of landmine detection, discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection. Our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible perspectives for future work, in order to achieve better detection in real-time operation mode.

  17. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into high-dimensional vectors. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
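
    The texture-feature step can be sketched with scikit-image's uniform LBP; the band selection by linear prediction error and the PCANet stage are not reproduced here, and the neighbourhood parameters are illustrative assumptions.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(band_image, n_points=8, radius=1):
          """Uniform LBP histogram for one informative band of a hyperspectral cube.

          In the framework described above, local LBP histograms would be stacked
          with the spectral values before being passed on to PCANet; this sketch
          covers only the texture-feature step.
          """
          lbp = local_binary_pattern(band_image, n_points, radius, method="uniform")
          n_bins = n_points + 2                  # "uniform" LBP yields P + 2 distinct codes
          hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
          return hist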

  18. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of

  19. Automated Feature Extraction from Hyperspectral Imagery, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  20. Automated Feature Extraction from Hyperspectral Imagery, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award winning Feature AnalystREG...

  1. Dental caries imaging using hyperspectral stimulated Raman scattering microscopy

    Science.gov (United States)

    Wang, Zi; Zheng, Wei; Jian, Lin; Huang, Zhiwei

    2016-03-01

    We report the development of a polarization-resolved hyperspectral stimulated Raman scattering (SRS) imaging technique based on a picosecond (ps) laser-pumped optical parametric oscillator system for label-free imaging of dental caries. In our imaging system, hyperspectral SRS images (512×512 pixels) in both the fingerprint region (800-1800 cm-1) and the high-wavenumber region (2800-3600 cm-1) are acquired in minutes by scanning the wavelength of the OPO output, which is a thousand times faster than conventional confocal micro-Raman imaging. SRS spectral variations from normal enamel to caries, obtained from the hyperspectral SRS images, show the loss of phosphate and carbonate in the carious region, while polarization-resolved SRS images at 959 cm-1 demonstrate that the carious region has a higher depolarization ratio. Our results demonstrate that the polarization-resolved hyperspectral SRS imaging technique developed allows for rapid identification of the biochemical and structural changes of dental caries.

  2. Recent micro-CT scanner developments at UGCT

    Energy Technology Data Exchange (ETDEWEB)

    Dierick, Manuel, E-mail: Manuel.Dierick@UGent.be [UGCT-Department of Physics and Astronomy, Faculty of Sciences, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium); XRE, X-Ray Engineering bvba, De Pintelaan 111, 9000 Ghent (Belgium); Van Loo, Denis, E-mail: info@XRE.be [XRE, X-Ray Engineering bvba, De Pintelaan 111, 9000 Ghent (Belgium); Masschaele, Bert [UGCT-Department of Physics and Astronomy, Faculty of Sciences, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium); XRE, X-Ray Engineering bvba, De Pintelaan 111, 9000 Ghent (Belgium); Van den Bulcke, Jan [UGCT-Woodlab-UGent, Department of Forest and Water Management, Faculty of Bioscience Engineering, Ghent University, Coupure Links 653, 9000 Ghent (Belgium); Van Acker, Joris, E-mail: Joris.VanAcker@UGent.be [UGCT-Woodlab-UGent, Department of Forest and Water Management, Faculty of Bioscience Engineering, Ghent University, Coupure Links 653, 9000 Ghent (Belgium); Cnudde, Veerle, E-mail: Veerle.Cnudde@UGent.be [UGCT-SGIG, Department of Geology and Soil Science, Faculty of Sciences, Ghent University, Krijgslaan 281, S8, 9000 Ghent (Belgium); Van Hoorebeke, Luc, E-mail: Luc.VanHoorebeke@UGent.be [UGCT-Department of Physics and Astronomy, Faculty of Sciences, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium)

    2014-04-01

    This paper describes two X-ray micro-CT scanners which were recently developed to extend the experimental possibilities of microtomography research at the Centre for X-ray Tomography (www.ugct.ugent.be) of the Ghent University (Belgium). The first scanner, called Nanowood, is a wide-range CT scanner with two X-ray sources (160 kV max) and two detectors, resolving features down to 0.4 μm in small samples, but allowing samples up to 35 cm to be scanned. This is a sample size range of 3 orders of magnitude, making this scanner well suited for imaging multi-scale materials such as wood, stone, etc. Besides the traditional cone-beam acquisition, Nanowood supports helical acquisition, and it can generate images with significant phase-contrast contributions. The second scanner, known as the Environmental micro-CT scanner (EMCT), is a gantry based micro-CT scanner with variable magnification for scanning objects which are not easy to rotate in a standard micro-CT scanner, for example because they are physically connected to external experimental hardware such as sensor wiring, tubing or others. This scanner resolves 5 μm features, covers a field-of-view of about 12 cm wide with an 80 cm vertical travel range. Both scanners will be extensively described and characterized, and their potential will be demonstrated with some key application results.

  3. Recent micro-CT scanner developments at UGCT

    International Nuclear Information System (INIS)

    Dierick, Manuel; Van Loo, Denis; Masschaele, Bert; Van den Bulcke, Jan; Van Acker, Joris; Cnudde, Veerle; Van Hoorebeke, Luc

    2014-01-01

    This paper describes two X-ray micro-CT scanners which were recently developed to extend the experimental possibilities of microtomography research at the Centre for X-ray Tomography (www.ugct.ugent.be) of the Ghent University (Belgium). The first scanner, called Nanowood, is a wide-range CT scanner with two X-ray sources (160 kV max) and two detectors, resolving features down to 0.4 μm in small samples, but allowing samples up to 35 cm to be scanned. This is a sample size range of 3 orders of magnitude, making this scanner well suited for imaging multi-scale materials such as wood, stone, etc. Besides the traditional cone-beam acquisition, Nanowood supports helical acquisition, and it can generate images with significant phase-contrast contributions. The second scanner, known as the Environmental micro-CT scanner (EMCT), is a gantry based micro-CT scanner with variable magnification for scanning objects which are not easy to rotate in a standard micro-CT scanner, for example because they are physically connected to external experimental hardware such as sensor wiring, tubing or others. This scanner resolves 5 μm features, covers a field-of-view of about 12 cm wide with an 80 cm vertical travel range. Both scanners will be extensively described and characterized, and their potential will be demonstrated with some key application results

  4. Recent advances in segmented gamma scanner analysis

    International Nuclear Information System (INIS)

    Sprinkle, J.K. Jr.; Hsue, S.T.

    1987-01-01

    The segmented gamma scanner (SGS) is used in many facilities to assay low-density scrap and waste generated in the facilities. The procedures for using the SGS can cause a negative bias if the sample does not satisfy the assumptions made in the method. Some process samples do not comply with the assumptions. This paper discusses the effect of the presence of lumps on the SGS assay results, describes a method to detect the presence of lumps, and describes an approach to correct for the lumps. Other recent advances in SGS analysis are also discussed

  5. Fast wire scanner for intense electron beams

    Directory of Open Access Journals (Sweden)

    T. Moore

    2014-02-01

    Full Text Available We have developed a cost-effective, fast rotating wire scanner for use in accelerators where high beam currents would otherwise melt even carbon wires. This new design uses a simple planetary gear setup to rotate a carbon wire, fixed at one end, through the beam at speeds in excess of 20  m/s. We present results from bench tests, as well as transverse beam profile measurements taken at Cornell’s high-brightness energy recovery linac photoinjector, for beam currents up to 35 mA.

  6. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at bit rates as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
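
    The Mann-Whitney test-based segmentation idea can be sketched as below; taking the darkest and brightest quantiles of a spot window as background and candidate foreground samples is a simplification for illustration and is not BASICA's exact procedure.

      import numpy as np
      from scipy.stats import mannwhitneyu

      def spot_is_present(spot_window, bg_quantile=0.25, alpha=0.01):
          """Decide whether a candidate spot region differs from its local background.

          spot_window: 2-D array of pixel intensities around one printed spot.
          """
          pixels = np.sort(spot_window.ravel())
          n_sample = max(int(bg_quantile * pixels.size), 8)
          background = pixels[:n_sample]      # darkest pixels taken as the background sample
          foreground = pixels[-n_sample:]     # brightest pixels as the candidate foreground
          stat, p_value = mannwhitneyu(foreground, background, alternative="greater")
          return p_value < alpha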

  7. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  8. HYPERSPECTRAL HYPERION IMAGERY ANALYSIS AND ITS APPLICATION USING SPECTRAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    W. Pervez

    2015-03-01

    Full Text Available Rapid advancement in remote sensing opens new avenues for exploring hyperspectral Hyperion imagery pre-processing techniques, analysis and applications for land use mapping. The hyperspectral data consist of 242 bands, of which 196 calibrated/useful bands are available for hyperspectral applications. Atmospheric correction applied to the calibrated bands makes the data more useful for further processing and application. Principal component (PC) analysis applied to the calibrated bands reduced the dimensionality of the data, and it was found that 99% of the information is held in the first 10 PCs. Feature extraction is one of the important applications, performed here using vegetation delineation and the normalized difference vegetation index. Machine learning classifiers identify the pixels showing significant differences in spectral signature, which is very useful for the classification of an image. A supervised machine learning classifier has been used for the classification of the hyperspectral image, resulting in an overall accuracy of 86.67% and a Kappa coefficient of 0.7998.
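
    The dimensionality-reduction step can be sketched with scikit-learn's PCA, asking for enough components to retain 99% of the variance (roughly the first 10 PCs reported above); the cube reshaping and the solver choice are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      def reduce_hyperspectral_cube(cube, variance_kept=0.99):
          """Project a (rows, cols, bands) cube onto the principal components that
          together explain the requested fraction of the variance."""
          rows, cols, bands = cube.shape
          flat = cube.reshape(-1, bands).astype(float)
          pca = PCA(n_components=variance_kept, svd_solver="full")
          scores = pca.fit_transform(flat)
          return scores.reshape(rows, cols, -1), pca.explained_variance_ratio_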

  9. Circumference estimation using 3D-whole body scanners and shadow scanner

    NARCIS (Netherlands)

    Daanen, H.A.M.

    1998-01-01

    Clothing designers and manufacturers use traditional body dimensions as their basis. When 3D-whole body scanners are introduced to determine the body dimensions, a conversion has to be made, since scan determined circumference measures are slightly larger than the traditional values. This pilot

  10. 3D Laser Scanner for Underwater Manipulation

    Directory of Open Access Journals (Sweden)

    Albert Palomer

    2018-04-01

    Full Text Available Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS is used to autonomously grasp an object from the bottom of a water tank.

  11. High-picture quality industrial CT scanner

    International Nuclear Information System (INIS)

    Shoji, Takao; Nishide, Akihiko; Fujii, Masashi.

    1989-01-01

    Industrial X-ray CT scanners, which provide cross-sectional images of a tested sample without destroying it, are attracting attention as new nondestructive inspection devices. In 1982, Toshiba commenced the development of industrial CT scanners and introduced the 'TOSCANER'-3000 and -4000 series. Now, the state-of-the-art 'TOSCANER'-20000 series of CT systems has been developed, incorporating the latest computed tomography and image processing technology, such as the T9506 image processor. One of the advantages of this system is its applicability to a wide range of X-ray energies. The 'TOSCANER'-20000 series can be utilized for inspecting castings and other materials with relatively low transparency to X-rays, as well as ceramics, composite materials and other materials with high X-ray transparency. A further feature of the new system is its high picture quality, with a high spatial resolution resulting from a pixel size of 0.2 x 0.2 mm. (author)

  12. 3D Laser Scanner for Underwater Manipulation.

    Science.gov (United States)

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  13. Interferometric Laser Scanner for Direction Determination

    Directory of Open Access Journals (Sweden)

    Gennady Kaloshin

    2016-01-01

    Full Text Available In this paper, we explore the potential capabilities of new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case when interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The performed theoretical analysis identified the conditions under which stable pattern may form on extended paths of 0.5–10 km in length. We describe a method for selecting laser scanner parameters, ensuring the necessary operability range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of interference pattern, formed by two partially coherent sources of optical radiation. Visibility of interference pattern is estimated as a function of propagation pathlength, structure parameter of atmospheric turbulence, and spacing of radiation sources, producing the interference pattern. It is shown that, when atmospheric turbulences are moderately strong, the contrast of interference pattern of laser scanner may ensure its applicability at ranges up to 10 km.

  14. A 3D airborne ultrasound scanner

    Science.gov (United States)

    Capineri, L.; Masotti, L.; Rocchi, S.

    1998-06-01

    This work investigates the feasibility of an ultrasound scanner designed to reconstruct three-dimensional profiles of objects in air. There are many industrial applications in which it is important to obtain quickly and accurately the digital reconstruction of solid objects with contactless methods. The final aim of this project was the profile reconstruction of shoe lasts in order to eliminate the mechanical tracers from the reproduction process of shoe prototypes. The feasibility of an ultrasonic scanner was investigated in laboratory conditions on wooden test objects with axial symmetry. A bistatic system based on five airborne polyvinylidenedifluoride (PVDF) transducers was mechanically moved to emulate a cylindrical array transducer that can host objects of maximum width and height 20 cm and 40 cm respectively. The object reconstruction was based on a simplified version of the synthetic aperture focusing technique (SAFT): the time of flight (TOF) of the first-arriving echo for each receiving transducer was taken into account, a coarse spatial sampling of the ultrasonic field reflected onto the array transducer was performed, and the reconstruction algorithm was based on ellipsoidal backprojection. Measurements on a wooden cone section provided submillimetre accuracy in a controlled environment.
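
    The ellipsoidal backprojection step can be sketched as below: for one bistatic time-of-flight measurement, the reflector must lie on the ellipsoid whose foci are the transmitter and receiver positions, and accumulating such masks over many transducer positions localizes the surface. The speed of sound, tolerance and grid representation are illustrative assumptions.

      import numpy as np

      SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degC (assumed)

      def backproject_tof(grid_points, tx_pos, rx_pos, tof_s, tolerance=1e-3):
          """Mark the candidate positions consistent with one time-of-flight measurement.

          grid_points: (n_voxels, 3) array of candidate positions; tx_pos, rx_pos:
          transmitter and receiver coordinates; tof_s: measured time of flight in seconds.
          """
          path = (np.linalg.norm(grid_points - tx_pos, axis=1)
                  + np.linalg.norm(grid_points - rx_pos, axis=1))
          return np.abs(path - SPEED_OF_SOUND * tof_s) < tolerance

      # Summing these boolean masks over all transmit/receive positions yields a
      # likelihood volume whose maxima trace the reflecting surface.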

  15. Interferometric Laser Scanner for Direction Determination

    Science.gov (United States)

    Kaloshin, Gennady; Lukin, Igor

    2016-01-01

    In this paper, we explore the potential capabilities of new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case when interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The performed theoretical analysis identified the conditions under which stable pattern may form on extended paths of 0.5–10 km in length. We describe a method for selecting laser scanner parameters, ensuring the necessary operability range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of interference pattern, formed by two partially coherent sources of optical radiation. Visibility of interference pattern is estimated as a function of propagation pathlength, structure parameter of atmospheric turbulence, and spacing of radiation sources, producing the interference pattern. It is shown that, when atmospheric turbulences are moderately strong, the contrast of interference pattern of laser scanner may ensure its applicability at ranges up to 10 km. PMID:26805841

  16. Infrared hyperspectral imaging miniaturized for UAV applications

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough to serve as a payload on a mini-UAV or commercial quadcopter. An example is also given of how this technology can easily be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4

  17. Hyperspectral Longwave Infrared Focal Plane Array and Camera Based on Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a hyperspectral focal plane array and camera imaging in a large number of sharp hyperspectral bands in the thermal infrared. The camera is...

  18. Manifold learning based feature extraction for classification of hyper-spectral data

    CSIR Research Space (South Africa)

    Lunga, D

    2013-08-01

    Full Text Available Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. Hyperspectral image data...

  19. Detection of environmental change using hyperspectral remote sensing at Olkiluoto repository site

    International Nuclear Information System (INIS)

    Tuominen, J.; Lipping, T.

    2011-03-01

    In this report methods related to hyperspectral monitoring of the Olkiluoto repository site are described. A short introduction to environmental remote sensing is presented, followed by a more detailed description of hyperspectral imaging and a review of applications of hyperspectral remote sensing presented in the literature. The trends of future hyperspectral imaging are discussed, exploring the possibilities of long-wave infrared hyperspectral imaging. A detailed description of the HYPE08 hyperspectral flight campaign over the Olkiluoto region in 2008 is presented. In addition, the related pre-processing and atmospheric correction methods, necessary in monitoring use, and the quality control methods applied are described. Various change detection methods presented in the literature are also described. Finally, a system for hyperspectral monitoring is proposed. The system is based on continued hyperspectral airborne flight campaigns and a precisely defined data processing procedure. (orig.)

  20. A lightweight hyperspectral mapping system and photogrammetric processing chain for unmanned aerial vehicles

    NARCIS (Netherlands)

    Suomalainen, J.M.; Anders, N.S.; Iqbal, S.; Roerink, G.J.; Franke, G.J.; Wenting, P.F.M.; Hünniger, D.; Bartholomeus, H.; Becker, R.; Kooistra, L.

    2014-01-01

    During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However, commercial hyperspectral systems currently still require a minimum payload capacity of approximately 3 kg, forcing usage of

  1. Technical Report: Unmanned Helicopter Solution for Survey-Grade Lidar and Hyperspectral Mapping

    Science.gov (United States)

    Kaňuk, Ján; Gallay, Michal; Eck, Christoph; Zgraggen, Carlo; Dvorný, Eduard

    2018-05-01

    Recent development of light-weight unmanned airborne vehicles (UAV) and miniaturization of sensors provide new possibilities for remote sensing and high-resolution mapping. Mini-UAV platforms are emerging, but powerful UAV platforms of higher payload capacity are required to carry the sensors for survey-grade mapping. In this paper, we demonstrate a technological solution and the application of two different payloads for highly accurate and detailed mapping. The unmanned airborne system (UAS) comprises a Scout B1-100 autonomously operating UAV helicopter powered by a gasoline two-stroke engine with a maximum take-off weight of 75 kg. The UAV allows for integration of up to 18 kg of customized payload. Our technological solution comprises two types of payload completely independent of the platform. The first payload contains a VUX-1 laser scanner (Riegl, Austria) and a Sony A6000 E-Mount photo camera. The second payload integrates a hyperspectral push-broom scanner AISA Kestrel 10 (Specim, Finland). The two payloads need to be alternated if mapping with both is required. Both payloads include an inertial navigation system xNAV550 (Oxford Technical Solutions Ltd., United Kingdom), a separate data link, and a power supply unit. Such a constellation allowed for achieving high accuracy of the flight line post-processing in two test missions. The standard deviations were 0.02 m (XY) and 0.025 m (Z), respectively. The intended application of the UAS was for high-resolution mapping and monitoring of landscape dynamics (landslides, erosion, flooding, or crop growth). The legal regulations for such UAV applications in Switzerland and Slovakia are also discussed.

  2. SCT-4800T whole body X-ray CT scanner

    International Nuclear Information System (INIS)

    Okumura, Yoshitaka; Sato, Yukio; Kuwahara, Hiroshi

    1994-01-01

    A whole body X-ray CT scanner, the SCT-4800T (trade name: INTELLECT series), has been developed. This system is the first CT scanner that is combined with general radiographic functions. The general radiographic functions include a patient couch with film cassette and several tube support systems along with the CT scanner. This newly designed CT scanner also features a compact and light-weight gantry with a 700 mm diameter aperture and a user-friendly operator's console. The SCT-4800T brings a new level of patient and operator comfort to the emergency radiology examination site. (author)

  3. Input Scanners: A Growing Impact In A Diverse Marketplace

    Science.gov (United States)

    Marks, Kevin E.

    1989-08-01

    Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditional mechanical means of information handling, computer-based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered: - Historical overview of input scanners - New applications for scanners - Impact of scanning technology on select markets - Scanning systems issues

  4. Band Subset Selection for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Chunyan Yu

    2018-01-01

    Full Text Available This paper develops a new approach to band subset selection (BSS) for hyperspectral image classification (HSIC) which selects multiple bands simultaneously as a band subset, referred to as simultaneous multiple band selection (SMMBS), rather than one band at a time sequentially, referred to as sequential multiple band selection (SQMBS), as most traditional band selection methods do. In doing so, a criterion is particularly developed for BSS that can be used for HSIC. It is a linearly constrained minimum variance (LCMV) derived from adaptive beamforming in array signal processing which can be used to model misclassification errors as the minimum variance. To avoid an exhaustive search for all possible band subsets, two numerical algorithms, referred to as sequential (SQ) and successive (SC) algorithms, are also developed for LCMV-based SMMBS, called SQ LCMV-BSS and SC LCMV-BSS. Experimental results demonstrate that LCMV-based BSS has advantages over SQMBS.
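
    For orientation, the LCMV criterion named above comes from adaptive beamforming: it minimizes output variance subject to linear constraints, w = argmin w^T R w s.t. C^T w = f, with closed form w = R^{-1}C (C^T R^{-1} C)^{-1} f. A minimal numpy sketch follows (the band correlation matrix R, constraint matrix C, and response vector f are assumed inputs, not taken from the paper):

        import numpy as np

        def lcmv_weights(R, C, f):
            """Linearly constrained minimum variance solution:
            minimize w^T R w  subject to  C^T w = f.
            R : (L, L) band correlation/covariance matrix
            C : (L, k) constraint matrix (e.g. class signatures)
            f : (k,)   desired constraint responses
            """
            Rinv_C = np.linalg.solve(R, C)                 # R^{-1} C
            w = Rinv_C @ np.linalg.solve(C.T @ Rinv_C, f)  # R^{-1}C (C^T R^{-1} C)^{-1} f
            return w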

  5. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and even belong to the same materials (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary and, incorporating the majority voting scheme, generally achieves improved prediction performance.

  6. Recent applications of hyperspectral imaging in microbiology.

    Science.gov (United States)

    Gowen, Aoife A; Feng, Yaoze; Gaston, Edurne; Valdramidis, Vasilis

    2015-05-01

    Hyperspectral chemical imaging (HSI) is a broad term encompassing spatially resolved spectral data obtained through a variety of modalities (e.g. Raman scattering, Fourier transform infrared microscopy, fluorescence and near-infrared chemical imaging). It goes beyond the capabilities of conventional imaging and spectroscopy by obtaining spatially resolved spectra from objects at spatial resolutions varying from the level of single cells up to macroscopic objects (e.g. foods). In tandem with recent developments in instrumentation and sampling protocols, applications of HSI in microbiology have increased rapidly. This article gives a brief overview of the fundamentals of HSI and a comprehensive review of applications of HSI in microbiology over the past 10 years. Technical challenges and future perspectives for these techniques are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
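
    A minimal sketch of this idea (an illustration only, using scikit-learn's LDA as the multiclass discriminant transform; the graph-based segmentation itself is omitted and all names are assumptions): distances between pixel spectra are computed after projection with the learned transform.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def learn_metric(train_spectra, train_labels):
            """Fit a multiclass LDA transform on labeled training spectra
            (train_spectra: (samples, bands), train_labels: (samples,))."""
            return LinearDiscriminantAnalysis().fit(train_spectra, train_labels)

        def learned_distance(lda, x, y):
            """Euclidean distance between two spectra in the LDA-projected space,
            i.e. the learned task-specific metric used for segmentation edges."""
            xt, yt = lda.transform(np.vstack([x, y]))
            return np.linalg.norm(xt - yt)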

  8. DNA Microarrays in Comparative Genomics and Transcriptomics

    DEFF Research Database (Denmark)

    Willenbrock, Hanni

    2007-01-01

    During the past few years, innovations in DNA sequencing technology have led to an explosion in available DNA sequence information. This has revolutionized biological research and promoted the development of high throughput analysis methods that can take advantage of the vast amount of sequence ... data. For this, the DNA microarray technology has gained enormous popularity due to its ability to measure the presence or the activity of thousands of genes simultaneously. Microarrays for high throughput data analyses are not limited to a few organisms but may be applied to everything from bacteria ... at identifying the exact breakpoints where DNA has been gained or lost. In this thesis, three popular methods are compared and a realistic simulation model is presented for generating artificial data with known breakpoints and known DNA copy number. By using simulated data, we obtain a realistic evaluation ...

  9. Immobilization Techniques for Microarray: Challenges and Applications

    Directory of Open Access Journals (Sweden)

    Satish Balasaheb Nimse

    2014-11-01

    Full Text Available The highly programmable positioning of molecules (biomolecules, nanoparticles, nanobeads, nanocomposite materials) on surfaces has potential applications in the fields of biosensors, biomolecular electronics, and nanodevices. However, the conventional techniques, including self-assembled monolayers, fail to position the molecules on the nanometer scale to produce highly organized monolayers on the surface. The present article elaborates on different techniques for the immobilization of biomolecules on the surface to produce microarrays and their diagnostic applications. The advantages and the drawbacks of various methods are compared. This article also sheds light on the applications of the different technologies for the detection and discrimination of viral/bacterial genotypes and the detection of biomarkers. A brief survey with 115 references covering the last 10 years on the biological applications of microarrays in various fields is also provided.

  10. Mining meiosis and gametogenesis with DNA microarrays.

    Science.gov (United States)

    Schlecht, Ulrich; Primig, Michael

    2003-04-01

    Gametogenesis is a key developmental process that involves complex transcriptional regulation of numerous genes including many that are conserved between unicellular eukaryotes and mammals. Recent expression-profiling experiments using microarrays have provided insight into the co-ordinated transcription of several hundred genes during mitotic growth and meiotic development in budding and fission yeast. Furthermore, microarray-based studies have identified numerous loci that are regulated during the cell cycle or expressed in a germ-cell specific manner in eukaryotic model systems like Caenorhabditis elegans, Mus musculus as well as Homo sapiens. The unprecedented amount of information produced by post-genome biology has spawned novel approaches to organizing biological knowledge using currently available information technology. This review outlines experiments that contribute to an emerging comprehensive picture of the molecular machinery governing sexual reproduction in eukaryotes.

  11. Facilitating RNA structure prediction with microarrays.

    Science.gov (United States)

    Kierzek, Elzbieta; Kierzek, Ryszard; Turner, Douglas H; Catrina, Irina E

    2006-01-17

    Determining RNA secondary structure is important for understanding structure-function relationships and identifying potential drug targets. This paper reports the use of microarrays with heptamer 2'-O-methyl oligoribonucleotides to probe the secondary structure of an RNA and thereby improve the prediction of that secondary structure. When experimental constraints from hybridization results are added to a free-energy minimization algorithm, the prediction of the secondary structure of Escherichia coli 5S rRNA improves from 27 to 92% of the known canonical base pairs. Optimization of buffer conditions for hybridization and application of 2'-O-methyl-2-thiouridine to enhance binding and improve discrimination between AU and GU pairs are also described. The results suggest that probing RNA with oligonucleotide microarrays can facilitate determination of secondary structure.

  12. Interest point detection for hyperspectral imagery

    Science.gov (United States)

    Dorado-Muñoz, Leidy P.; Vélez-Reyes, Miguel; Roysam, Badrinath; Mukherjee, Amit

    2009-05-01

    This paper presents an algorithm for automated extraction of interest points (IPs) in multispectral and hyperspectral images. Interest points are features of the image that capture information from their neighbourhoods; they are distinctive and stable under transformations such as translation and rotation. Interest-point operators for monochromatic images were proposed more than a decade ago and have since been studied extensively. IPs have been applied to diverse problems in computer vision, including image matching, recognition, registration, 3D reconstruction, change detection, and content-based image retrieval. Interest points are helpful in data reduction, and reduce the computational burden of various algorithms (such as registration, object detection, and 3D reconstruction) by replacing an exhaustive search over the entire image domain with a probe into a concise set of highly informative points. An interest operator seeks out points in an image that are structurally distinct, invariant to imaging conditions, stable under geometric transformation, and interpretable, which makes them good candidates for interest points. Our approach extends ideas from Lowe's keypoint operator, which uses local extrema of the Difference of Gaussians (DoG) operator at multiple scales to detect interest points in gray-level images. The proposed approach extends Lowe's method by directly converting scalar operations, such as scale-space generation and extremum detection, into operations that take the vector nature of the image into consideration. Experimental results with RGB and hyperspectral images demonstrate the potential of the method for this application and the potential improvements of a fully vectorial approach over the band-by-band approaches described in the literature.
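
    One way to picture the vectorial extension (an illustrative sketch only, not the authors' code; scale values, thresholds, and names are assumptions): build a Gaussian scale space per band, take the per-pixel vector norm of the difference between adjacent scales, and keep local maxima of that norm.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        def vector_dog_keypoints(cube, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.02):
            """cube: (rows, cols, bands) image with values scaled to [0, 1]."""
            # Gaussian scale space, smoothing the spatial dimensions only
            scales = [gaussian_filter(cube, sigma=(s, s, 0)) for s in sigmas]
            keypoints = []
            for a, b in zip(scales[:-1], scales[1:]):
                dog_mag = np.linalg.norm(b - a, axis=2)          # vector DoG magnitude
                local_max = maximum_filter(dog_mag, size=3) == dog_mag
                rows, cols = np.nonzero(local_max & (dog_mag > thresh))
                keypoints.append(np.column_stack([rows, cols]))
            return np.vstack(keypoints)                          # candidate interest points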

  13. Tissue Microarray Analysis Applied to Bone Diagenesis

    OpenAIRE

    Barrios Mello, Rafael; Regis Silva, Maria Regina; Seixas Alves, Maria Teresa; Evison, Martin; Guimarães, Marco Aurélio; Francisco, Rafaella Arrabaça; Dias Astolphi, Rafael; Miazato Iwamura, Edna Sadayo

    2017-01-01

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens....

  14. Geiger mode avalanche photodiodes for microarray systems

    Science.gov (United States)

    Phelan, Don; Jackson, Carl; Redfern, R. Michael; Morrison, Alan P.; Mathewson, Alan

    2002-06-01

    New Geiger Mode Avalanche Photodiodes (GM-APD) have been designed and characterized specifically for use in microarray systems. Critical parameters such as excess reverse bias voltage, hold-off time and optimum operating temperature have been experimentally determined for these photon-counting devices. The photon detection probability, dark count rate and afterpulsing probability have been measured under different operating conditions. An active-quench circuit (AQC) is presented for operating these GM-APDs. This circuit is relatively simple, robust and has such benefits as reducing average power dissipation and afterpulsing. Arrays of these GM-APDs have already been designed and, together with AQCs, open up the possibility of having a solid-state microarray detector that enables parallel analysis on a single chip. Another advantage of these GM-APDs over current technology is their low voltage CMOS compatibility, which could allow for the fabrication of an AQC on the same device. Small-area detectors have already been employed in the time-resolved detection of fluorescence from labeled proteins. It is envisaged that operating these new GM-APDs with this active-quench circuit will have numerous applications for the detection of fluorescence in microarray systems.

  15. A High-Throughput, Precipitating Colorimetric Sandwich ELISA Microarray for Shiga Toxins

    Directory of Open Access Journals (Sweden)

    Andrew Gehring

    2014-06-01

    Full Text Available Shiga toxins 1 and 2 (Stx1 and Stx2) from Shiga toxin-producing E. coli (STEC) bacteria were simultaneously detected with a newly developed, high-throughput antibody microarray platform. The proteinaceous toxins were immobilized and sandwiched between biorecognition elements (monoclonal antibodies) and pooled horseradish peroxidase (HRP)-conjugated monoclonal antibodies. Following the reaction of HRP with the precipitating chromogenic substrate (metal-enhanced 3,3'-diaminobenzidine tetrahydrochloride, or DAB), the formation of a colored product was quantitatively measured with an inexpensive flatbed page scanner. The colorimetric ELISA microarray was demonstrated to detect Stx1 and Stx2 at levels as low as ~4.5 ng/mL within ~2 h of total assay time, with a narrow linear dynamic range of ~1–2 orders of magnitude and saturation levels well above background. Stx1 and/or Stx2 produced by various strains of STEC were also detected following the treatment of cultured cells with mitomycin C (a toxin-inducing antibiotic) and/or B-PER (a cell-disrupting, protein extraction reagent). Semi-quantitative detection of Shiga toxins was demonstrated to be sporadic among various STEC strains following incubation with mitomycin C; however, further reaction with B-PER generally resulted in the detection of, or increased detection of, Stx1, relative to Stx2, produced by STECs inoculated into either axenic broth culture or culture broth containing ground beef.

  16. Antibody Microarray for E. coli O157:H7 and Shiga Toxin in Microtiter Plates

    Directory of Open Access Journals (Sweden)

    Andrew G. Gehring

    2015-12-01

    Full Text Available Antibody microarray is a powerful analytical technique because of its inherent ability to simultaneously discriminate and measure numerous analytes, therefore making the technique conducive to both the multiplexed detection and identification of bacterial analytes (i.e., whole cells, as well as associated metabolites and/or toxins). We developed a sandwich fluorescent immunoassay combined with a high-throughput, multiwell plate microarray detection format. Inexpensive polystyrene plates were employed containing passively adsorbed, array-printed capture antibodies. During sample reaction, centrifugation was the only strategy found to significantly improve capture, and hence detection, of bacteria (pathogenic Escherichia coli O157:H7) to planar capture surfaces containing printed antibodies, whereas several other sample incubation techniques (e.g., static vs. agitation) had minimal effect. Immobilized bacteria were labeled with a red-orange-fluorescent dye (Alexa Fluor 555) conjugated antibody to allow for quantitative detection of the captured bacteria with a laser scanner. Shiga toxin 1 (Stx1) could be simultaneously detected along with the cells, but none of the agitation techniques employed during incubation improved detection of the relatively small biomolecule. Under optimal conditions, the assay had demonstrated limits of detection of ~5.8 × 10⁵ cells/mL and 110 ng/mL for E. coli O157:H7 and Stx1, respectively, in a ~75 min total assay time.

  17. Antibody Microarray for E. coli O157:H7 and Shiga Toxin in Microtiter Plates.

    Science.gov (United States)

    Gehring, Andrew G; Brewster, Jeffrey D; He, Yiping; Irwin, Peter L; Paoli, George C; Simons, Tawana; Tu, Shu-I; Uknalis, Joseph

    2015-12-04

    Antibody microarray is a powerful analytical technique because of its inherent ability to simultaneously discriminate and measure numerous analytes, therefore making the technique conducive to both the multiplexed detection and identification of bacterial analytes (i.e., whole cells, as well as associated metabolites and/or toxins). We developed a sandwich fluorescent immunoassay combined with a high-throughput, multiwell plate microarray detection format. Inexpensive polystyrene plates were employed containing passively adsorbed, array-printed capture antibodies. During sample reaction, centrifugation was the only strategy found to significantly improve capture, and hence detection, of bacteria (pathogenic Escherichia coli O157:H7) to planar capture surfaces containing printed antibodies, whereas several other sample incubation techniques (e.g., static vs. agitation) had minimal effect. Immobilized bacteria were labeled with a red-orange-fluorescent dye (Alexa Fluor 555) conjugated antibody to allow for quantitative detection of the captured bacteria with a laser scanner. Shiga toxin 1 (Stx1) could be simultaneously detected along with the cells, but none of the agitation techniques employed during incubation improved detection of the relatively small biomolecule. Under optimal conditions, the assay had demonstrated limits of detection of ~5.8 × 10⁵ cells/mL and 110 ng/mL for E. coli O157:H7 and Stx1, respectively, in a ~75 min total assay time.

  18. Scanner OPC signatures: automatic vendor-to-vendor OPE matching

    Science.gov (United States)

    Renwick, Stephen P.

    2009-03-01

    As 193nm lithography continues to be stretched and the k1 factor decreases, optical proximity correction (OPC) has become a vital part of the lithographer's tool kit. Unfortunately, as is now well known, the design variations of lithographic scanners from different vendors cause them to have slightly different optical-proximity effect (OPE) behavior, meaning that they print features through pitch in distinct ways. This in turn means that their response to OPC is not the same, and that an OPC solution designed for a scanner from Company 1 may or may not work properly on a scanner from Company 2. Since OPC is not inexpensive, that causes trouble for chipmakers using more than one brand of scanner. Clearly a scanner-matching procedure is needed to meet this challenge. Previously, automatic matching has only been reported for scanners of different tool generations from the same manufacturer. In contrast, scanners from different companies have been matched using expert tuning and adjustment techniques, frequently requiring laborious test exposures. Automatic matching between scanners from Company 1 and Company 2 has remained an unsettled problem. We have recently solved this problem and introduce a novel method to perform the automatic matching. The success in meeting this challenge required three enabling factors. First, we recognized the strongest drivers of OPE mismatch and are thereby able to reduce the information needed about a tool from another supplier to that information readily available from all modern scanners. Second, we developed a means of reliably identifying the scanners' optical signatures, minimizing dependence on process parameters that can cloud the issue. Third, we carefully employed standard statistical techniques, checking for robustness of the algorithms used and maximizing efficiency. The result is an automatic software system that can predict an OPC matching solution for scanners from different suppliers without requiring expert intervention.

  19. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Full Text Available Abstract Background The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In
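
    For reference, the k-top-scoring-pairs idea compared above relies only on within-sample rank comparisons between gene pairs, which is what makes it comparatively robust to platform and preprocessing differences; a minimal scoring sketch (illustrative only, with hypothetical variable names, suitable for a modest pre-filtered gene set) is:

        import numpy as np

        def tsp_scores(X, y):
            """Score every gene pair (i, j) by |P(x_i < x_j | class 0) - P(x_i < x_j | class 1)|.
            X : (samples, genes) expression matrix, y : binary labels (0/1)."""
            X0, X1 = X[y == 0], X[y == 1]
            # Pairwise "gene i below gene j" frequencies within each class
            p0 = (X0[:, :, None] < X0[:, None, :]).mean(axis=0)
            p1 = (X1[:, :, None] < X1[:, None, :]).mean(axis=0)
            return np.abs(p0 - p1)   # the k pairs with the largest scores form the classifier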

  20. Computerized tomographic scanner with shaped radiation filter

    International Nuclear Information System (INIS)

    Carlson, R.W.; Walters, R.G.

    1981-01-01

    The invention comprises a shaped filter and filter correction circuitry for computerized tomographic scanners. The shaped filter is a generally u-shaped block of filter material which is adapted to be mounted between the source of radiation and the scan circle. The u-shaped block has a parabolic recess. The filter material may be beryllium, aluminum, sulphur, calcium, titanium, erbium, copper, and compounds including oxides and alloys thereof. The filter correction circuit comprises a first filter correction profile adding circuit for adding a first scalar value to each intensity value in a data line. The data line is operated on by a beam hardness correction polynomial. After the beam hardness polynomial correction operation, a second filter correction circuit adds a second filter correction profile consisting of a table of scalar values, one corresponding to each intensity reading in the data line
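
    A minimal sketch of that correction chain in software terms (purely illustrative; the profile values and polynomial coefficients are placeholders, not values from the original description):

        import numpy as np

        def correct_data_line(intensities, profile1, poly_coeffs, profile2):
            """Shaped-filter correction: add the first correction profile, apply the
            beam-hardness polynomial, then add the second profile (one scalar per reading)."""
            line = np.asarray(intensities, dtype=float) + profile1   # first filter profile
            line = np.polyval(poly_coeffs, line)                     # beam hardness correction
            return line + profile2                                   # second filter profile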

  1. Development of scintillation materials for PET scanners

    CERN Document Server

    Korzhik, Mikhail; Annenkov, Alexander N; Borissevitch, Andrei; Dossovitski, Alexei; Missevitch, Oleg; Lecoq, Paul

    2007-01-01

    The growing demand for PET methodology in a variety of applications ranging from clinical use to fundamental studies triggers research and development of PET scanners providing better spatial resolution and sensitivity. These efforts are primarily focused on the development of advanced PET detector solutions and on the development of new scintillation materials as well. However, Lu-containing scintillation materials introduced in the last century, such as LSO, LYSO, LuAP and LuYAP crystals, still remain the best PET materials in spite of the recent development of bright, fast but relatively low-density lanthanum bromide scintillators. At the same time, Lu-based materials have several drawbacks, namely a high crystallization temperature and a relatively high cost compared to alkali-halide scintillation materials. Here we describe recent results in the development of new scintillation materials for PET application.

  2. Compact beamforming in medical ultrasound scanners

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev

    2003-01-01

    ... channels, and even more channels are necessary for 3-dimensional (3D) diagnostic imaging. On the other hand, there is a demand for inexpensive portable devices for use outside hospitals, in field conditions, where power consumption and compactness are important factors. The thesis starts ... The amount of focusing data needed for high-quality imaging is large, and compressing it leads to better compactness of the beamformers. The existing methods for compressing and recursive generation of focusing data, along with original work in the area, are presented in Chapter 4. The principles and the performance limitations ... The image quality is comparable to that of the very good scanners currently on the market. The performance results have been achieved with the use of a simple oversampled converter of second order. The use of a higher order oversampled converter will allow a higher pulse frequency to be used while the high dynamic ...

  3. Upgraded airborne scanner for commercial remote sensing

    Science.gov (United States)

    Chang, Sheng-Huei; Rubin, Tod D.

    1994-06-01

    Traditional commercial remote sensing has focused on the geologic market, with primary focus on mineral identification and mapping in the visible through short-wave infrared spectral regions (0.4 to 2.4 microns). Commercial remote sensing users now demand airborne scanning capabilities spanning the entire wavelength range from ultraviolet through thermal infrared (0.3 to 12 microns). This spectral range enables detection, identification, and mapping of objects and liquids on the earth's surface and gases in the air. Applications requiring this range of wavelengths include detection and mapping of oil spills, soil and water contamination, stressed vegetation, and renewable and non-renewable natural resources, and also change detection, natural hazard mitigation, emergency response, agricultural management, and urban planning. GER has designed and built a configurable scanner that acquires high resolution images in 63 selected wave bands in this broad wavelength range.

  4. A megavoltage CT scanner for radiotherapy verification

    International Nuclear Information System (INIS)

    Lewis, D.G.; Swindell, W.; Morton, E.J.; Evans, P.M.; Xiao, Z.R.

    1992-01-01

    The authors have developed a system for generating megavoltage CT images immediately prior to the administration of external beam radiotherapy. The detector is based on the scanner of Simpson (Simpson et al 1982) - the major differences being a significant reduction in dose required for image formation, faster image formation and greater convenience of use in the clinical setting. Attention has been paid to the problem of ring artefacts in the images. Specifically, a Fourier-space filter has been applied to the sinogram data. After suitable detector calibration, it has been shown that the device operates close to its theoretical specification of 3 mm spatial resolution and a few percent contrast resolution. Ring artefacts continue to be a major source of image degradation. A number of clinical images are presented. (author)

  5. A clinical molecular scanner: the Melanie project.

    Science.gov (United States)

    Hochstrasser, D F; Appel, R D; Vargas, R; Perrier, R; Vurlod, J F; Ravier, F; Pasquali, C; Funk, M; Pellegrini, C; Muller, A F

    1991-01-01

    We developed an expert system to analyze and interpret protein maps. This system, Melanie (medical electrophoresis analysis interactive expert), can distinguish between normal and cirrhotic liver and identify various types of cancer on the basis of protein patterns in biopsy specimens. Our findings suggest that some diseases associated with toxic compounds or modifications of the human genome can be diagnosed by expert systems that analyze protein maps. The combination of protein mapping and computer analysis could result in a clinically useful "molecular scanner". The massive amount of information analyzed and stored in such studies requires new strategies, including centralized databases and image transmission over networks. Increased understanding of protein expression and regulation will enhance the importance of the human genome project in medicine and biology.

  6. Evaluation of toxicity of the mycotoxin citrinin using yeast ORF DNA microarray and Oligo DNA microarray

    Directory of Open Access Journals (Sweden)

    Nobumasa Hitoshi

    2007-04-01

    Full Text Available Abstract Background Mycotoxins are fungal secondary metabolites commonly present in feed and food, and are widely regarded as hazardous contaminants. Citrinin, one of the very well known mycotoxins that was first isolated from Penicillium citrinum, is produced by more than 10 kinds of fungi, and is possibly spread all over the world. However, the information on the action mechanism of the toxin is limited. Thus, we investigated the citrinin-induced genomic response for evaluating its toxicity. Results Citrinin inhibited growth of yeast cells at a concentration higher than 100 ppm. We monitored the citrinin-induced mRNA expression profiles in yeast using the ORF DNA microarray and Oligo DNA microarray, and the expression profiles were compared with those of other stress-inducing agents. Results obtained from both microarray experiments clustered together, but were different from those of the mycotoxin patulin. The oxidative stress response genes – AADs, FLR1, OYE3, GRE2, and MET17 – were significantly induced. In the functional category, expression of genes involved in "metabolism", "cell rescue, defense and virulence", and "energy" were significantly activated. In the category of "metabolism", genes involved in the glutathione synthesis pathway were activated, and in the category of "cell rescue, defense and virulence", the ABC transporter genes were induced. To alleviate the induced stress, these cells might pump out the citrinin after modification with glutathione. In contrast, citrinin treatment did not induce the genes involved in DNA repair. Conclusion Results from both microarray studies suggest that citrinin treatment induced oxidative stress in yeast cells. The genotoxicity was less severe than that of patulin, suggesting that citrinin is less toxic than patulin. The reproducibility of the expression profiles was much better with the Oligo DNA microarray. However, the Oligo DNA microarray did not completely overcome cross

  7. Verification of a CT scanner using a miniature step gauge

    DEFF Research Database (Denmark)

    Cantatore, Angela; Andreasen, J.L.; Carmignato, S.

    2011-01-01

    The work deals with performance verification of a CT scanner using a 42 mm miniature replica step gauge developed for optical scanner verification. Error quantification and optimization of the CT system set-up in terms of resolution and measurement accuracy are fundamental for use of CT scanning...

  8. Quantitative Assay for Starch by Colorimetry Using a Desktop Scanner

    Science.gov (United States)

    Matthews, Kurt R.; Landmark, James D.; Stickle, Douglas F.

    2004-01-01

    The procedure to produce a standard curve for starch concentration measurement by image analysis, using a color scanner and a computer for data acquisition and color analysis, is described. Color analysis is performed by a Visual Basic program that measures red, green, and blue (RGB) color intensities for pixels within the scanner image.
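
    The underlying measurement is simple to illustrate (sketched here in Python rather than the paper's Visual Basic; the file name, spot coordinates, and channel choice are assumptions): average the RGB intensities of the pixels inside each sample spot, then read concentration off the fitted standard curve.

        import numpy as np
        from PIL import Image

        def mean_rgb(image_path, box):
            """Mean red, green, blue intensities inside box = (left, top, right, bottom)."""
            img = Image.open(image_path).convert("RGB")
            pixels = np.asarray(img.crop(box), dtype=float)
            return pixels.reshape(-1, 3).mean(axis=0)          # (R, G, B)

        # Usage sketch: mean channel intensity vs. known starch concentrations gives the
        # standard curve (e.g. np.polyfit); np.polyval then converts the intensity of an
        # unknown sample into an estimated concentration.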

  9. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  10. Radiation dosimetry of computed tomography x-ray scanners

    International Nuclear Information System (INIS)

    Poletti, J.L.; Williamson, B.D.P.; Le Heron, J.C.

    1983-01-01

    This report describes the development and application of the methods employed in National Radiation Laboratory (NRL) surveys of computed tomography x-ray scanners (CT scanners). It includes descriptions of the phantoms and equipment used, discussion of the various dose parameters measured, the principles of the various dosimetry systems employed and some indication of the doses to occupationally exposed personnel

  11. INFORMATION EXTRACTION IN TOMB PIT USING HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    X. Yang

    2018-04-01

    Full Text Available Hyperspectral data is characterized by a large number of continuous bands, a large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the images on the bottom of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be some meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest. For convenience of computation and storage, the original reflectance values are multiplied by 10000. Secondly, this article uses three methods to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, and fifteen reference images are given. The results show that information extraction based on hyperspectral data can provide a better visual result, which is beneficial to the study of ancient tombs by researchers and provides some references for archaeological research findings.

  12. Information Extraction in Tomb Pit Using Hyperspectral Data

    Science.gov (United States)

    Yang, X.; Hou, M.; Lyu, S.; Ma, S.; Gao, Z.; Bai, S.; Gu, M.; Liu, Y.

    2018-04-01

    Hyperspectral data is characterized by a large number of continuous bands, a large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the images on the bottom of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be some meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest. For convenience of computation and storage, the original reflectance values are multiplied by 10000. Secondly, this article uses three methods to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, and fifteen reference images are given. The results show that information extraction based on hyperspectral data can provide a better visual result, which is beneficial to the study of ancient tombs by researchers and provides some references for archaeological research findings.
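
    A sketch of the final thresholding-plus-morphology step (scikit-image based; the dark threshold, structuring-element size, and minimum object size are assumptions, not values from the paper): dark symbol pixels are segmented from a reflectance band and joined by closing.

        import numpy as np
        from skimage.morphology import binary_closing, disk, remove_small_objects

        def connect_symbols(reflectance_band, dark_threshold=3000, radius=3):
            """reflectance_band: 2-D array of reflectance x 10000, as in the paper."""
            symbols = reflectance_band < dark_threshold          # dark remains = candidate symbols
            symbols = binary_closing(symbols, disk(radius))      # bridge small gaps between strokes
            return remove_small_objects(symbols, min_size=20)    # drop isolated noise pixels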

  13. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    Science.gov (United States)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrains. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology because of the capability of this kind of images to correctly identify various types of underground minerals by analysing the reflected spectrums, but their usage has spread in other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data obtained by the hyperspectral sensors, the fact that these images are acquired at a high cost by air-borne sensors and that they are generally transmitted to a base, makes it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images, by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-art predictive-based lossless compression algorithm.

  14. Computer Tomography Scanners in Portugal (1990-2011)

    Directory of Open Access Journals (Sweden)

    Ricardo Crispim

    2014-06-01

    Full Text Available The use of Computed Tomography (CT) has increased every year since its introduction into medicine in 1972. Technological developments have made CT one of the most important imaging modalities in modern medicine. This importance is evidenced in the increasing demand and number of CT scanners installed in Portugal and worldwide. This review compiles the most recent national statistics from official publications on the number of CT scanners installed in Portugal and compares them with data available in international publications. We conclude that the number of CT scanners installed in Portugal exceeded the EU27 average by 61.5 % and the OECD average by 78.2 %, and that in 2011 there were 203 CT scanners installed in hospitals in Portugal, which equated to 19.23 CT scanners per million inhabitants.

  15. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  16. Design of a covalently bonded glycosphingolipid microarray

    DEFF Research Database (Denmark)

    Arigi, Emma; Blixt, Klas Ola; Buschard, Karsten

    2012-01-01

    , the major classes of plant and fungal GSLs. In this work, a prototype "universal" GSL-based covalent microarray has been designed, and preliminary evaluation of its potential utility in assaying protein-GSL binding interactions investigated. An essential step in development involved the enzymatic release...... of the fatty acyl moiety of the ceramide aglycone of selected mammalian GSLs with sphingolipid N-deacylase (SCDase). Derivatization of the free amino group of a typical lyso-GSL, lyso-G(M1), with a prototype linker assembled from succinimidyl-[(N-maleimidopropionamido)-diethyleneglycol] ester and 2...

  17. Linking probe thermodynamics to microarray quantification

    International Nuclear Information System (INIS)

    Li, Shuzhao; Pozhitkov, Alexander; Brouwer, Marius

    2010-01-01

    Understanding the difference in probe properties holds the key to absolute quantification of DNA microarrays. So far, Langmuir-like models have failed to link sequence-specific properties to hybridization signals in the presence of a complex hybridization background. Data from washing experiments indicate that the post-hybridization washing has no major effect on the specifically bound targets, which give the final signals. Thus, the amount of specific targets bound to probes is likely determined before washing, by the competition against nonspecific binding. Our competitive hybridization model is a viable alternative to Langmuir-like models. (comment)

  18. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

    The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Recently, two new concepts of real-time hyperspectral image processing have been introduced: Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI). Both of these can be used to design algorithms and also form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but are generally not encountered in multispectral imaging. This book is written to particularly address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer 2016) can be considered its companion volume. The book includes preliminary background which is essential to those who work in hyperspectral ima...

  19. Comparison of Epson scanner quality for radiochromic film evaluation.

    Science.gov (United States)

    Alnawaf, Hani; Yu, Peter K N; Butson, Martin

    2012-09-06

    Epson Desktop scanners have been quoted as devices which match the characteristics required for the evaluation of radiation dose exposure by radiochromic films. Specifically, models such as the 10000XL have been used successfully for image analysis and are recommended by ISP for dosimetry purposes. This note investigates and compares the scanner characteristics of three Epson desktop scanner models including the Epson 10000XL, V700, and V330. Both of the latter are substantially cheaper models capable of A4 scanning. As the price variation between the V330 and the 10000XL is 20-fold (based on Australian recommended retail price), cost savings by using the cheaper scanners may be warranted based on results. By a direct comparison of scanner uniformity and reproducibility we can evaluate the accuracy of these scanners for radiochromic film dosimetry. Results have shown that all three scanners can produce adequate scanner uniformity and reproducibility, with the inexpensive V330 producing a standard deviation variation across its landscape direction of 0.7% and 1.2% in the portrait direction (reflection mode). This is compared to the V700 in reflection mode of 0.25% and 0.5% for landscape and portrait directions, respectively, and 0.5% and 0.8% for the 10000XL. In transmission mode, the V700 is comparable in reproducibility to the 10000XL for portrait and landscape mode, whilst the V330 is only capable of scanning in the landscape direction and produces a standard deviation in this direction of 1.0% compared to 0.6% (V700) and 0.25% (10000XL). Results have shown that the V700 and 10000XL are comparable scanners in quality and accuracy with the 10000XL obviously capable of imaging over an A3 area as opposed to an A4 area for the V700. The V330 scanner produced slightly lower accuracy and quality with uncertainties approximately twice as much as the other scanners. However, the results show that the V330 is still an adequate scanner and could be used for radiation

  20. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Full Text Available Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, which can be expressed in the form of linear combinations of a number of pure spectral signatures that are known in advance. However, the limitations of Gaussian random measurement matrices in computational complexity and sparsity affect the efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed by pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach can lead to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
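
    As a minimal illustration of a Toeplitz-structured chaotic construction (a sketch under stated assumptions: a logistic map is used here as a stand-in for the paper's chaotic generator, and all names and parameters are hypothetical):

        import numpy as np
        from scipy.linalg import toeplitz

        def chaotic_toeplitz_matrix(m, n, x0=0.3, mu=3.99):
            """Build an m x n Toeplitz measurement matrix from a logistic-map sequence."""
            seq = np.empty(m + n - 1)
            x = x0
            for k in range(m + n - 1):
                x = mu * x * (1.0 - x)                      # logistic map iteration
                seq[k] = x
            seq = np.sign(seq - seq.mean())                 # map chaotic samples to +/-1
            col = seq[:m]                                   # first column
            row = np.concatenate(([seq[0]], seq[m:]))       # first row (shares the corner entry)
            return toeplitz(col, row) / np.sqrt(m)          # normalized measurement matrix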

  1. Parallel computation for blood cell classification in medical hyperspectral imagery

    International Nuclear Information System (INIS)

    Li, Wei; Wu, Lucheng; Qiu, Xianbo; Ran, Qiong; Xie, Xiaoming

    2016-01-01

    With the advantage of fine spectral resolution, hyperspectral imagery provides great potential for cell classification. This paper provides a promising classification system including the following three stages: (1) band selection for a subset of spectral bands with distinctive and informative features, (2) spectral-spatial feature extraction, such as local binary patterns (LBP), and (3) classification with an effective classifier. Moreover, these three steps are further implemented on graphics processing units (GPU), which makes the system real-time and more practical. The GPU parallel implementation is compared with the serial implementation on central processing units (CPU). Experimental results based on real medical hyperspectral data demonstrate that the proposed system is able to offer high accuracy and fast speed, which are appealing for cell classification in medical hyperspectral imagery. (paper)
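
    A serial (CPU) sketch of the three stages, with variance-ranked band selection and scikit-image LBP standing in for the paper's specific choices (parameters and names are assumptions; the GPU parallelization is not shown):

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_features(cube, n_bands=10, P=8, R=1.0):
            """cube: (rows, cols, bands) sample image. Returns one LBP histogram per selected band."""
            variances = cube.reshape(-1, cube.shape[2]).var(axis=0)
            selected = np.argsort(variances)[-n_bands:]              # stage 1: band selection
            feats = []
            for b in selected:                                       # stage 2: LBP per band
                lbp = local_binary_pattern(cube[:, :, b], P, R, method="uniform")
                hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
                feats.append(hist)
            return np.concatenate(feats)

        # Stage 3: any effective classifier (e.g. sklearn.svm.SVC) is trained on these feature vectors.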

  2. A system design for storing, archiving, and retrieving hyperspectral data

    Science.gov (United States)

    Dedecker, Ralph G.; Whittaker, Tom; Garcia, Raymond K.; Knuteson, Robert O.

    2004-10-01

    Hyperspectral data and products derived from instrumentation such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and the GOES-R Hyperspectral Environmental Suite (HES) will impose storage and data retrieval requirements that far exceed the demands of earlier generation remote sensing instrumentation used for atmospheric science research. A new architecture designed to address projected real time and research needs is undergoing prototype design and development. The system is designed using proven aspects of distributed data storage networks, descriptive metadata associated with stored files, data cataloging and database search schemes, and a data delivery approach that obeys accepted standards. Preliminary implementation and testing of some components of this architecture indicate that the design approach shows promise of an improved method for storage and library functionality for the data volumes associated with operational hyperspectral instrumentation.

  3. High-resolution hyperspectral ground mapping for robotic vision

    Science.gov (United States)

    Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich

    2018-04-01

    Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.

  4. Spatial and temporal variability of hyperspectral signatures of terrain

    Science.gov (United States)

    Jones, K. F.; Perovich, D. K.; Koenig, G. G.

    2008-04-01

    Electromagnetic signatures of terrain exhibit significant spatial heterogeneity on a range of scales as well as considerable temporal variability. A statistical characterization of the spatial heterogeneity and spatial scaling algorithms of terrain electromagnetic signatures are required to extrapolate measurements to larger scales. Basic terrain elements including bare soil, grass, deciduous, and coniferous trees were studied in a quasi-laboratory setting using instrumented test sites in Hanover, NH and Yuma, AZ. Observations were made using a visible and near infrared spectroradiometer (350 - 2500 nm) and hyperspectral camera (400 - 1100 nm). Results are reported illustrating: i) several different scenes; ii) a terrain scene time series sampled over an annual cycle; and iii) the detection of artifacts in scenes. A principal component analysis indicated that the first three principal components typically explained between 90 and 99% of the variance of the 30 to 40-channel hyperspectral images. Higher order principal components of hyperspectral images are useful for detecting artifacts in scenes.
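
    The quoted variance figure is straightforward to reproduce on any hyperspectral cube (a scikit-learn sketch; the cube layout and names are assumptions, not the authors' code):

        import numpy as np
        from sklearn.decomposition import PCA

        def explained_variance_top3(cube):
            """cube: (rows, cols, bands). Fraction of variance carried by the first 3 principal components."""
            pixels = cube.reshape(-1, cube.shape[2]).astype(float)
            pca = PCA(n_components=3).fit(pixels)
            return pca.explained_variance_ratio_.sum()       # typically 0.90-0.99 for such scenes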

  5. Hyperspectral Cubesat Constellation for Rapid Natural Hazard Response

    Science.gov (United States)

    Mandl, D.; Huemmrich, K. F.; Ly, V. T.; Handy, M.; Ong, L.; Crum, G.

    2015-12-01

    With the advent of high performance space networks that provide total coverage for Cubesats, the paradigm of low cost, high temporal coverage with hyperspectral instruments becomes more feasible. The combination of ground cloud computing resources, high-performance and low-power onboard processing, total coverage for the cubesats, and social media provides an opportunity for an architecture that delivers cost-effective hyperspectral data products for natural hazard response and decision support. This paper describes a series of pathfinder efforts to create a scalable Intelligent Payload Module (IPM) that has flown on a variety of airborne vehicles including Cessna airplanes, Citation jets and a helicopter, and will fly on an Unmanned Aerial System (UAS) hexacopter to monitor natural phenomena. The IPMs developed thus far were built on platforms that emulate a satellite environment, using real satellite flight software and real ground software. In addition, science processing software has been developed that performs hyperspectral processing onboard using various parallel processing techniques, enabling the creation of onboard hyperspectral data products while consuming low power. A cubesat design was developed that is low cost and scalable to larger constellations and thus can provide daily hyperspectral observations for any spot on Earth. The design was based on the existing IPM prototypes and metrics developed over the past few years and a shrunken IPM that can sustain up to 800 Mbps of throughput. This constellation of hyperspectral cubesats could thus constantly monitor spectra with spectral angle mappers after Level 0, Level 1 radiometric correction and atmospheric correction processing, providing the opportunity for daily monitoring of any spot on Earth at 30 meter resolution, which is not available today.

  6. Design of an Enterobacteriaceae Pan-genome Microarray Chip

    DEFF Research Database (Denmark)

    Lukjancenko, Oksana; Ussery, David

    2010-01-01

    A high-density microarray chip has been designed, using 116 Enterobacteriaceae genome sequences, taking into account the enteric pan-genome. Probes for the microarray were checked in silico, and the performance of the chip, based on experimental strains from four different genera, demonstrates a relatively high ability to distinguish those strains at the genus, species, and pathotype/serovar levels. Additionally, the microarray performed well when investigating which genes were found in a given strain of interest. The Enterobacteriaceae pan-genome microarray, based on 116 genomes, provides a valuable tool for determination...

  7. Whole-body 35-GHz security scanner

    Science.gov (United States)

    Appleby, Roger; Anderton, Rupert N.; Price, Sean; Sinclair, Gordon N.; Coward, Peter R.

    2004-08-01

    A 35 GHz imager designed for security scanning has been previously demonstrated. That imager was based on a folded conical scan technology and was constructed from low cost materials such as expanded polystyrene and printed circuit board. In conjunction with an illumination chamber it was used to collect indoor imagery of people with weapons and contraband hidden under their clothing. That imager had a spot size of 20 mm and covered a field of view of 20 x 10 degrees that partially covered the body of an adult from knees to shoulders. A new variant of this imager has been designed and constructed. It has a field of view of 36 x 18 degrees and is capable of covering the whole body of an adult. This was achieved by increasing the number of direct detection receivers from the 32 used in the previous design to 58, and by implementing an improved optical design. The optics consist of a front grid, a polarisation device which converts linear to circular polarisation and a rotating scanner. This new design uses high-density expanded polystyrene as a correcting element on the back of the front grid. This gives an added degree of freedom that allows the optical design to be diffraction limited over a very wide field of view. Obscuration by the receivers and associated components is minimised by integrating the post-detection electronics at the receiver array.

  8. Focal plane scanner with reciprocating spatial window

    Science.gov (United States)

    Mao, Chengye (Inventor)

    2000-01-01

    A focal plane scanner having a front objective lens, a spatial window for selectively passing a portion of the image therethrough, and a CCD array for receiving the passed portion of the image. All embodiments have a common feature whereby the spatial window and CCD array are mounted for simultaneous relative reciprocating movement with respect to the front objective lens, and the spatial window is mounted within the focal plane of the front objective. In a first embodiment, the spatial window is a slit and the CCD array is one-dimensional, and successive rows of the image in the focal plane of the front objective lens are passed to the CCD array by an image relay lens interposed between the slit and the CCD array. In a second embodiment, the spatial window is a slit, the CCD array is two-dimensional, and a prism-grating-prism optical spectrometer is interposed between the slit and the CCD array so as to cause the scanned row to be split into a plurality of spectral separations onto the CCD array. In a third embodiment, the CCD array is two-dimensional and the spatial window is a rectangular linear variable filter (LVF) window, so as to cause the scanned rows impinging on the LVF to be bandpass filtered into spectral components onto the CCD array through an image relay lens interposed between the LVF and the CCD array.

  9. Fast and High Accuracy Wire Scanner

    CERN Document Server

    Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A

    2009-01-01

    Scanning of a high intensity particle beam imposes challenging requirements on a wire scanner system. It is expected to reach a scanning speed of 20 m.s-1 with a position accuracy of the order of 1 μm. In addition, a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor will be mounted on the rotary shaft inside the vacuum chamber and has to withstand a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...

  10. APLIKASI INFO HALAL MENGGUNAKAN BARCODE SCANNER UNTUK SMARTPHONE ANDROID

    Directory of Open Access Journals (Sweden)

    Beki Subeki

    2016-05-01

    Full Text Available Abstract – In the production and trade of food products in this era of globalization, consumers, especially Muslims, need to be given adequate knowledge, information and access in order to obtain correct information about the halal status of the products they buy. Using a barcode scanner for halal product information on a mobile platform is an effective and useful way for the general public to look up information about a product. Barcodes can be read by optical scanners called barcode readers or scanned from an image by special software. In Indonesia, most mobile phones have scanning software for 2D codes, and similar functionality is available on smartphones.   Keywords: Barcode Scanner, Mobile Platform, Halal Products, Smartphone

  11. An Unsupervised Deep Hyperspectral Anomaly Detector

    Directory of Open Access Journals (Sweden)

    Ning Ma

    2018-02-01

    Full Text Available Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications, as different wavelengths of light can be used advantageously to discriminate different types of objects. Unfortunately, estimating the background distribution and detecting interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner that does not depend on a prior assumption about the background distribution. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. By using the code image generated during the inference of the DBN and modified by the adaptively updated weights, a local Euclidean distance between pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), the local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD).
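
    For readers unfamiliar with the global Reed-Xiaoli baseline named above, the sketch below shows that classic detector (not the proposed DBN method): each pixel is scored by its Mahalanobis distance to the global background statistics. It is a minimal illustration, assuming a hyperspectral cube stored as a NumPy array of shape (rows, cols, bands); file and variable names are illustrative.

        import numpy as np

        def global_rx(cube):
            """Global RX anomaly scores for a hyperspectral cube of shape (rows, cols, bands)."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands).astype(np.float64)
            mu = pixels.mean(axis=0)                       # global background mean
            cov = np.cov(pixels, rowvar=False)             # background covariance (bands x bands)
            cov_inv = np.linalg.pinv(cov)                  # pseudo-inverse for numerical safety
            diff = pixels - mu
            # Mahalanobis distance of each pixel to the global background
            scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
            return scores.reshape(rows, cols)

        # Example: flag the top 1% of pixels as anomaly candidates
        # cube = np.load('hsi_cube.npy')                   # hypothetical input file
        # scores = global_rx(cube)
        # anomalies = scores > np.percentile(scores, 99)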

  12. Unmixing hyperspectral images using Markov random fields

    International Nuclear Information System (INIS)

    Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2011-01-01

    This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled using Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data illustrate the performance of the proposed algorithm.
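
    As a worked illustration of the constraint handling described above, and not of the authors' full Bayesian/MCMC algorithm, abundances that must be positive and sum to one can be reparametrized with unconstrained values via a softmax-style map. The sketch below is a minimal example under that assumption; the endmember matrix and all names are illustrative.

        import numpy as np

        def logits_to_abundances(logits):
            """Map R unconstrained values to R+1 abundances that are positive and sum to one
            (a softmax-style reparametrization; one simple way to encode the constraints)."""
            z = np.concatenate([logits, [0.0]])            # fix the last logit for identifiability
            z -= z.max()                                   # numerical stability
            expz = np.exp(z)
            return expz / expz.sum()

        def reconstruct_pixel(endmembers, abundances):
            """Linear mixing: endmembers is (R+1, bands), abundances is (R+1,)."""
            return abundances @ endmembers

        # Example with three endmembers (rows) and five bands (columns)
        E = np.array([[0.1, 0.2, 0.3, 0.4, 0.5],
                      [0.5, 0.4, 0.3, 0.2, 0.1],
                      [0.3, 0.3, 0.3, 0.3, 0.3]])
        a = logits_to_abundances(np.array([0.2, -0.4]))
        print(a, a.sum())                                  # abundances are positive and sum to 1
        print(reconstruct_pixel(E, a))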

  13. MWIR hyperspectral imaging with the MIDAS instrument

    Science.gov (United States)

    Honniball, Casey I.; Wright, Rob; Lucey, Paul G.

    2017-02-01

    Hyperspectral imaging (HSI) in the Mid-Wave InfraRed (MWIR, 3-5 microns) can provide information for a variety of science applications, from determining the chemical composition of lava lakes on Jupiter's moon Io to investigating the amount of carbon liberated into the Earth's atmosphere during a wildfire. The limited signal available in the MWIR presents technical challenges to achieving high signal-to-noise ratios, and therefore it is typically necessary to cryogenically cool MWIR instruments. With recent improvements in microbolometer technology and emerging interferometric techniques, we have shown that uncooled microbolometers coupled with a Sagnac interferometer can achieve high signal-to-noise ratios for long-wave infrared HSI. To explore whether this technique can be applied to the MWIR, this project, with funding from NASA, has built the Miniaturized Infrared Detector of Atmospheric Species (MIDAS). Standard characterization tests are used to compare MIDAS against a cryogenically cooled photon detector to evaluate the MIDAS instrument's ability to quantify gas concentrations. Atmospheric radiative transfer codes are in development to explore the limitations of MIDAS and identify the range of science objectives at which MIDAS will most likely excel. We will simulate science applications with gas cells filled with varying gas concentrations and varying source temperatures to verify our results from lab characterization and our atmospheric modeling code.

  14. Evaluation of Algorithms for Compressing Hyperspectral Data

    Science.gov (United States)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.

  15. Hyperspectral aerosol optical depths from TCAP flights

    Energy Technology Data Exchange (ETDEWEB)

    Shinozuka, Yohei [NASA Ames Research Center (ARC), Moffett Field, Mountain View, CA (United States); Bay Area Environmental REsearch Institute; Johnson, Roy R [NASA Ames Research Center (ARC), Moffett Field, Mountain View, CA (United States); Flynn, Connor J [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Russell, Philip B [NASA Ames Research Center (ARC), Moffett Field, Mountain View, CA (United States); Schmid, Beat [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-06-01

    4STAR (Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research), a hyperspectral airborne sunphotometer, acquired aerosol optical depths (AOD) at 1 Hz during all July 2012 flights of the Two Column Aerosol Project (TCAP). Root-mean-square differences from AERONET ground-based observations were 0.01 at wavelengths between 500 and 1020 nm, 0.02 at 380 and 1640 nm and 0.03 at 440 nm in four clear-sky fly-over events, and similar in ground side-by-side comparisons. Changes in the above-aircraft AOD across 3-km-deep spirals were typically consistent with integrals of coincident in situ (on the DOE Gulfstream 1 with 4STAR) and lidar (on the NASA B200) extinction measurements within 0.01, 0.03, 0.01, 0.02, 0.02, 0.02 at 355, 450, 532, 550, 700, 1064 nm, respectively, despite atmospheric variations and combined measurement uncertainties. Finer vertical differentials of the 4STAR measurements matched the in situ ambient extinction profile within 14% for one homogeneous column. For the AOD observed between 350-1660 nm, excluding strong

  16. Resolving Mixed Algal Species in Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Mehrube Mehrubeoglu

    2013-12-01

    Full Text Available We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations in order to characterize the system's performance. The spectral response to volumetric changes in single algal suspensions and in combinations of algal mixtures with known ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for the optical property measurements.
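
    A minimal sketch of constrained linear spectral unmixing of a two-member algal mixture, in the spirit of the approach described above: it assumes the pure-culture spectra are known and uses SciPy's non-negative least squares, with the sum-to-one constraint imposed softly through a heavily weighted extra equation. The spectra, weights and names are illustrative, not the paper's data.

        import numpy as np
        from scipy.optimize import nnls

        def unmix(mixed_spectrum, pure_spectra, weight=1e3):
            """Estimate non-negative, approximately sum-to-one abundances of pure spectra in a
            mixed spectrum. pure_spectra: (n_members, n_bands); the sum-to-one constraint is
            enforced softly by appending a heavily weighted row of ones."""
            A = np.vstack([pure_spectra.T, weight * np.ones(pure_spectra.shape[0])])
            b = np.concatenate([mixed_spectrum, [weight]])
            abundances, _ = nnls(A, b)
            rmse = np.sqrt(np.mean((pure_spectra.T @ abundances - mixed_spectrum) ** 2))
            return abundances, rmse

        # Example: a 60/40 mixture of two hypothetical algal spectra
        bands = np.linspace(400, 1000, 7)
        alga1 = np.exp(-((bands - 550) / 80.0) ** 2)
        alga2 = np.exp(-((bands - 680) / 60.0) ** 2)
        mixture = 0.6 * alga1 + 0.4 * alga2
        abund, rmse = unmix(mixture, np.vstack([alga1, alga2]))
        print(abund, rmse)                                 # approximately [0.6, 0.4], small RMSE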

  17. High Throughput System for Plant Height and Hyperspectral Measurement

    Science.gov (United States)

    Zhao, H.; Xu, L.; Jiang, H.; Shi, S.; Chen, D.

    2018-04-01

    Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform, and inadequate registration and synchronization among instruments often resulted in a mismatch between the spectral and 3D information of the same target. In addition, a narrow field of view (FOV) extends working hours in the field. Therefore, we propose a high throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.

  18. Detecting brain tumor in pathological slides using hyperspectral imaging.

    Science.gov (United States)

    Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M; Sarmiento, Roberto

    2018-02-01

    Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides.

  19. Bread Water Content Measurement Based on Hyperspectral Imaging

    DEFF Research Database (Denmark)

    Liu, Zhi; Møller, Flemming

    2011-01-01

    Water content is one of the most important properties of bread for taste assessment or storage monitoring. Traditional bread water content measurement methods are mostly performed manually, which is destructive and time consuming. This paper proposes an automated water content measurement... for bread quality based on near-infrared hyperspectral imaging, against the conventional manual loss-in-weight method. For this purpose, hyperspectral component unmixing is used to measure the water content quantitatively, and a definition of a bread water content index is presented...

  20. Hyperspectral imaging using the single-pixel Fourier transform technique

    Science.gov (United States)

    Jin, Senlin; Hui, Wangwei; Wang, Yunlong; Huang, Kaicheng; Shi, Qiushuai; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo

    2017-03-01

    Hyperspectral imaging technology is playing an increasingly important role in the fields of food analysis, medicine and biotechnology. To improve the speed of operation and increase the light throughput in a compact equipment structure, a Fourier transform hyperspectral imaging system based on a single-pixel technique is proposed in this study. Compared with current imaging spectrometry approaches, the proposed system has a wider spectral range (400-1100 nm), a better spectral resolution (1 nm) and requires fewer measurement data (a sample rate of 6.25%). The performance of this system was verified by its application to the non-destructive testing of potatoes.

  1. HIGH THROUGHPUT SYSTEM FOR PLANT HEIGHT AND HYPERSPECTRAL MEASUREMENT

    Directory of Open Access Journals (Sweden)

    H. Zhao

    2018-04-01

    Full Text Available Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform, and inadequate registration and synchronization among instruments often resulted in a mismatch between the spectral and 3D information of the same target. In addition, a narrow field of view (FOV) extends working hours in the field. Therefore, we propose a high throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.

  2. Comparing transformation methods for DNA microarray data

    Directory of Open Access Journals (Sweden)

    Zwinderman Aeilko H

    2004-06-01

    Full Text Available Abstract Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and the signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including the Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of a transformation method.
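
    A minimal sketch of the variance-ratio quality measure described above, under the assumption that transformed intensities are arranged as a genes-by-arrays matrix with replicate arrays grouped per biological sample; the exact estimator in the paper may differ, and the grouping and names here are illustrative.

        import numpy as np

        def variance_ratio(data, groups):
            """F-like quality measure: between-group (biological) variance over pooled
            within-group (measurement/replicate) variance, averaged over genes.
            data: (n_genes, n_arrays); groups: array of group labels, one per column."""
            groups = np.asarray(groups)
            labels = np.unique(groups)
            group_means = np.column_stack([data[:, groups == g].mean(axis=1) for g in labels])
            biological_var = group_means.var(axis=1, ddof=1)
            residuals = np.column_stack(
                [data[:, groups == g] - data[:, groups == g].mean(axis=1, keepdims=True)
                 for g in labels])
            measurement_var = (residuals ** 2).sum(axis=1) / (data.shape[1] - len(labels))
            return np.mean(biological_var / measurement_var)

        # Compare two hypothetical transformations (e.g. log2 vs. Box-Cox) of the same raw data
        # ratio_log = variance_ratio(np.log2(raw + shift), groups)
        # and keep the transformation parameters that maximize the ratio.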

  3. Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context

    Directory of Open Access Journals (Sweden)

    Julie Transon

    2018-01-01

    Full Text Available In the last few decades, researchers have developed a plethora of hyperspectral Earth Observation (EO) remote sensing techniques, analyses and applications. While hyperspectral exploratory sensors are demonstrating their potential, Sentinel-2 multispectral satellite remote sensing is now providing free, open, global and systematic high resolution visible and infrared imagery at a short revisit time. Its recent launch suggests potential synergies between multi- and hyper-spectral data. This study, therefore, reviews 20 years of research and applications in satellite hyperspectral remote sensing through the analysis of publications on Earth observation hyperspectral sensors that cover the Sentinel-2 spectrum range: Hyperion, TianGong-1, PRISMA, HISUI, EnMAP, Shalom, HyspIRI and HypXIM. More specifically, this study (i) compares past and future hyperspectral sensors' applications with Sentinel-2's and (ii) analyzes the applications' requirements in terms of spatial and temporal resolutions. Eight main application topics were analyzed, including vegetation, agriculture, soil, geology, urban, land use, water resources and disaster. The medium spatial resolution, long revisit time and low signal-to-noise ratio in the short-wave infrared of some hyperspectral sensors were highlighted as major limitations for some applications compared to the Sentinel-2 system. However, these constraints mainly concern past hyperspectral sensors, and they will probably be overcome by forthcoming instruments. Therefore, this study puts forward the compatibility of hyperspectral sensors and Sentinel-2 systems for resolution enhancement techniques in order to increase the panel of hyperspectral uses.

  4. Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates

    Science.gov (United States)

    Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.

    2014-03-01

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. The six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates, each grown with pure STEC colonies. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for the linear, quadratic and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model with principal component analysis and k-nearest neighbors for the test set was up to 92% (99% with the original hyperspectral images). Thus, the results of the study suggest that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
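
    To illustrate the kind of regression described above (estimating hyperspectral reflectance spectra from RGB values), the sketch below fits a polynomial multi-output regression with scikit-learn. It is a simplified stand-in for the paper's models, and all array names are hypothetical placeholders for the training and test pixels.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.pipeline import make_pipeline

        def fit_rgb_to_spectra(rgb, spectra, degree=3):
            """rgb: (n_pixels, 3); spectra: (n_pixels, n_bands). Returns a fitted model that
            predicts all spectral bands jointly from polynomial RGB features."""
            model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
            model.fit(rgb, spectra)
            return model

        # rgb_train, spectra_train: co-registered training pixels (hypothetical)
        # model = fit_rgb_to_spectra(rgb_train, spectra_train, degree=3)
        # estimated_spectra = model.predict(rgb_test)      # (n_test_pixels, n_bands)
        # Per-band R-squared of the estimation (one way to reproduce a quality score):
        # r2 = 1 - ((estimated_spectra - spectra_test) ** 2).sum(0) / \
        #          ((spectra_test - spectra_test.mean(0)) ** 2).sum(0)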

  5. NOAA-9 Earth Radiation Budget Experiment (ERBE) scanner offsets determination

    Science.gov (United States)

    Avis, Lee M.; Paden, Jack; Lee, Robert B., III; Pandey, Dhirendra K.; Stassi, Joseph C.; Wilson, Robert S.; Tolson, Carol J.; Bolden, William C.

    1994-01-01

    The Earth Radiation Budget Experiment (ERBE) instruments are designed to measure the components of the radiative exchange between the Sun, Earth and space. ERBE comprises three spacecraft, each carrying a nearly identical set of radiometers: a three-channel narrow-field-of-view scanner, a two-channel wide-field-of-view (limb-to-limb) non-scanning radiometer, a two-channel medium-field-of-view (1000 km) non-scanning radiometer, and a solar monitor. Ground testing showed the scanners to be susceptible to self-generated and externally generated electromagnetic noise. This paper describes the pre-launch corrective measures taken and the post-launch corrections to the NOAA-9 scanner data. The NOAA-9 scanner has met the mission objectives in accuracy and precision, in part because of the pre-launch reductions of, and post-launch data corrections for, the electromagnetic noise.

  6. Landsat 1-5 Multispectral Scanner V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Abstract: The Landsat Multispectral Scanner (MSS) was a sensor onboard Landsats 1 through 5 and acquired images of the Earth nearly continuously from July 1972 to...

  7. Shared probe design and existing microarray reanalysis using PICKY

    Directory of Open Access Journals (Sweden)

    Chou Hui-Hsien

    2010-04-01

    Full Text Available Abstract Background Large genomes contain families of highly similar genes that cannot be individually identified by microarray probes. This limitation is due to thermodynamic restrictions and cannot be resolved by any computational method. Since gene annotations are updated more frequently than microarrays, another common issue facing microarray users is that existing microarrays must be routinely reanalyzed to determine probes that are still useful with respect to the updated annotations. Results PICKY 2.0 can design shared probes for sets of genes that cannot be individually identified using unique probes. PICKY 2.0 uses novel algorithms to track sharable regions among genes and to strictly distinguish them from other highly similar but nontarget regions during thermodynamic comparisons. Therefore, PICKY does not sacrifice the quality of shared probes when choosing them. The latest PICKY 2.1 includes the new capability to reanalyze existing microarray probes against updated gene sets to determine probes that are still valid to use. In addition, more precise nonlinear salt effect estimates and other improvements are added, making PICKY 2.1 more versatile for microarray users. Conclusions Shared probes allow expressed gene family members to be detected; this capability is generally more desirable than not knowing anything about these genes. Shared probes also enable the design of cross-genome microarrays, which facilitate multiple species identification in environmental samples. The new nonlinear salt effect calculation significantly increases the precision of probes at a lower buffer salt concentration, and the probe reanalysis function improves existing microarray result interpretations.

  8. A Critical Perspective On Microarray Breast Cancer Gene Expression Profiling

    NARCIS (Netherlands)

    Sontrop, H.M.J.

    2015-01-01

    Microarrays offer biologists an exciting tool that allows the simultaneous assessment of gene expression levels for thousands of genes at once. At the time of their inception, microarrays were hailed as the new dawn in cancer biology and oncology practice with the hope that within a decade diseases

  9. The Importance of Normalization on Large and Heterogeneous Microarray Datasets

    Science.gov (United States)

    DNA microarray technology is a powerful functional genomics tool increasingly used for investigating global gene expression in environmental studies. Microarrays can also be used in identifying biological networks, as they give insight on the complex gene-to-gene interactions, ne...

  10. The application of DNA microarrays in gene expression analysis

    NARCIS (Netherlands)

    Hal, van N.L.W.; Vorst, O.; Houwelingen, van A.M.M.L.; Kok, E.J.; Peijnenburg, A.A.C.M.; Aharoni, A.; Tunen, van A.J.; Keijer, J.

    2000-01-01

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed.

  11. Design Optimization of a TOF, Breast PET Scanner

    OpenAIRE

    Lee, Eunsin; Werner, Matthew E.; Karp, Joel S.; Surti, Suleman

    2013-01-01

    A dedicated breast positron emission tomography (PET) scanner with limited angle geometry can provide flexibility in detector placement around the patient as well as the ability to combine it with other imaging modalities. A primary challenge of a stationary limited angle scanner is the reduced image quality due to artifacts present in the reconstructed image leading to a loss in quantitative information. Previously it has been shown that using time-of-flight (TOF) information in image recons...

  12. A fast ADC scanner for multiparameter nuclear physics experiments

    International Nuclear Information System (INIS)

    Midttun, G.; Ingebretsen, F.; Holt, K.; Skaali, B.

    1983-04-01

    A fast readout system for multiparameter experiments in nuclear physics is described. The central part of the CAMAC acquisition hardware is an ADC scanner module. The scanner incorporates a new arbitration logic and direct memory access for simultaneous transfer of singles and correlated data. Together with specially designed ADC interfaces, the system can be set up for any configuration of singles and multiparameter events from 1 up to 15 ADCs in one crate.

  13. A fast ADC scanner for multiparameter nuclear physics experiments

    International Nuclear Information System (INIS)

    Midttun, G.; Holt, K.; Ingebretsen, F.; Skaali, B.

    1983-01-01

    A fast readout system for multiparameter experiments in nuclear physics is described. The central part of the CAMAC acquisition hardware is an ADC scanner module. The scanner incorporates a new arbitration logic and direct memory access for simultaneous transfer of singles and correlated data. Together with specially designed ADC interfaces, the system can be set up for any configuration of singles and multiparameter events from 1 up to 15 ADCs in one crate.

  14. Uses of Dendrimers for DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Majoral

    2006-08-01

    Full Text Available Biosensors such as DNA microarrays and microchips are gaining an increasing importance in medicinal, forensic, and environmental analyses. Such devices are based on the detection of supramolecular interactions called hybridizations that occur between complementary oligonucleotides, one linked to a solid surface (the probe) and the other one to be analyzed (the target). This paper focuses on the improvements that hyperbranched and perfectly defined nanomolecules called dendrimers can provide to this methodology. Two main uses of dendrimers for such purpose have been described up to now: either the dendrimer is used as a linker between the solid surface and the probe oligonucleotide, or the dendrimer is used as a multilabeled entity linked to the target oligonucleotide. In the first case the dendrimer generally induces a higher loading of probes and an easier hybridization, due to moving away from the solid phase. In the second case the high number of localized labels (generally fluorescent) induces an increased sensitivity, allowing the detection of small quantities of biological entities.

  15. Hyperspectral remote sensing of wild oyster reefs

    Science.gov (United States)

    Le Bris, Anthony; Rosa, Philippe; Lerouxel, Astrid; Cognie, Bruno; Gernez, Pierre; Launeau, Patrick; Robin, Marc; Barillé, Laurent

    2016-04-01

    The invasion of the wild oyster Crassostrea gigas along the western European Atlantic coast has generated changes in the structure and functioning of intertidal ecosystems. Considered as an invasive species and a trophic competitor of the cultivated conspecific oyster, it is now seen as a resource by oyster farmers following recurrent mass summer mortalities of oyster spat since 2008. Spatial distribution maps of wild oyster reefs are required by local authorities to help define management strategies. In this work, visible-near infrared (VNIR) hyperspectral and multispectral remote sensing was investigated to map two contrasted intertidal reef structures: clusters of vertical oysters building three-dimensional dense reefs in muddy areas and oysters growing horizontally creating large flat reefs in rocky areas. A spectral library, collected in situ for various conditions with an ASD spectroradiometer, was used to run Spectral Angle Mapper classifications on airborne data obtained with an HySpex sensor (160 spectral bands) and SPOT satellite HRG multispectral data (3 spectral bands). With HySpex spectral/spatial resolution, horizontal oysters in the rocky area were correctly classified but the detection was less efficient for vertical oysters in muddy areas. Poor results were obtained with the multispectral image and from spatially or spectrally degraded HySpex data, it was clear that the spectral resolution was more important than the spatial resolution. In fact, there was a systematic mud deposition on shells of vertical oyster reefs explaining the misclassification of 30% of pixels recognized as mud or microphytobenthos. Spatial distribution maps of oyster reefs were coupled with in situ biomass measurements to illustrate the interest of a remote sensing product to provide stock estimations of wild oyster reefs to be exploited by oyster producers. This work highlights the interest of developing remote sensing techniques for aquaculture applications in coastal

  16. ALGORITMA ESTIMASI KANDUNGAN KLOROFIL TANAMAN PADI DENGAN DATA AIRBORNE HYPERSPECTRAL

    Directory of Open Access Journals (Sweden)

    Abdi Sukmono

    2015-02-01

    Full Text Available Chlorophyll is the most important pigment in photosynthesis. Healthy plants capable of maximum growth generally contain more chlorophyll than unhealthy plants. Estimating the chlorophyll content of rice plants from airborne hyperspectral data requires a dedicated algorithm to achieve good accuracy. The objective of this study is to develop in situ reflectance measurements into an algorithm for estimating rice chlorophyll content from airborne hyperspectral data. Several vegetation indices, such as the normalized difference vegetation index (NDVI), the modified simple ratio (MSR), the modified/transformed chlorophyll absorption ratio index (MCARI, TCARI) and their integrated forms (MCARI/OSAVI and TCARI/OSAVI), were used to build estimation models by linear regression. The Blue/Green/Yellow/Red Edge Absorption Chlorophyll Index was also used. The regression yielded three ground models with a strong correlation (R2 ≥ 0.5) with rice chlorophyll: MSR (705, 750) with an R2 of 0.51, TCARI/OSAVI (705, 750) with an R2 of 0.52, and REACL 2 with an R2 of 0.57. The best ground model, REACL 2, was selected for upscaling to the airborne hyperspectral algorithm. Building the algorithm with airborne hyperspectral data from the HyMap sensor and REACL 2 yielded the model Chlorophyll (SPAD units) = 3.031((B22-B18)/(B18-B13)) + 31.596, with an R2 of 0.78.
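
    The final airborne algorithm quoted above is a simple band-ratio regression; the sketch below just applies that published formula element-wise to band reflectance arrays. The band labels B13, B18 and B22 follow the abstract, while the array names and example values are illustrative.

        import numpy as np

        def rice_chlorophyll_spad(b13, b18, b22):
            """Chlorophyll (SPAD units) = 3.031 * ((B22 - B18) / (B18 - B13)) + 31.596,
            applied element-wise to per-pixel band reflectance arrays."""
            ratio = (b22 - b18) / (b18 - b13)
            return 3.031 * ratio + 31.596

        # Example with hypothetical per-pixel reflectance values extracted from a HyMap scene
        b13, b18, b22 = np.array([0.05]), np.array([0.12]), np.array([0.35])
        print(rice_chlorophyll_spad(b13, b18, b22))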

  17. Characterisation of a Tunisian coastal lagoon through hyperspectral ...

    African Journals Online (AJOL)

    In 2008 an optical procedure was developed and applied in Ghar El Melh, a Tunisian lagoon which has been increasingly impacted by pollutant loading, especially from agriculture. In situ hyperspectral irradiance was measured at several stations, from which the apparent optical properties (AOPs), namely the irradiance ...

  18. Objective Color Classification of Ecstasy Tablets by Hyperspectral Imaging

    NARCIS (Netherlands)

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-01-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observers' perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured

  19. The challenges of analysing blood stains with hyperspectral imaging

    Science.gov (United States)

    Kuula, J.; Puupponen, H.-H.; Rinta, H.; Pölönen, I.

    2014-06-01

    Hyperspectral imaging is a potential noninvasive technology for detecting, separating and identifying various substances. In forensic and military medicine and other CBRNE related uses it could be a potential method for analyzing blood and for scanning other human-based fluids. For example, it would be valuable to easily detect whether traces of blood are from one or more persons, or whether there are irrelevant substances or anomalies in the blood. This article presents an experiment on separating four persons' blood stains on a white cotton fabric with a SWIR hyperspectral camera and an FT-NIR spectrometer. Each tested sample includes a standardized 75 µl of 100% blood. The results suggest that, on the basis of the amount of erythrocytes in the blood, different people's blood might be separable by hyperspectral analysis. Moreover, based on the indication given by the erythrocytes, it might also be possible to find other traces in the blood. However, these assumptions need to be verified with wider tests, as the number of samples in the study was small. According to the study there also seem to be several biological, chemical and physical factors which, alone and together, affect the hyperspectral analysis results of blood on fabric textures, and these factors need to be considered before making any further conclusions on the analysis of blood on various materials.

  20. Hyperspectral remote sensing of canopy biodiversity in Hawaiian lowland rainforests

    Science.gov (United States)

    Kimberly M. Carlson; Gregory P. Asner; R. Flint Hughes; Rebecca Ostertag; Roberta E. Martin

    2007-01-01

    Mapping biological diversity is a high priority for conservation research, management and policy development, but few studies have provided diversity data at high spatial resolution from remote sensing. We used airborne imaging spectroscopy to map woody vascular plant species richness in lowland tropical forest ecosystems in Hawaii. Hyperspectral signatures spanning...

  1. Bathymetry from fusion of airborne hyperspectral and laser data

    Science.gov (United States)

    Kappus, Mary E.; Davis, Curtiss O.; Rhea, W. Joseph

    1998-10-01

    Airborne hyperspectral and nadir-viewing laser data can be combined to ascertain shallow water bathymetry. The combination emphasizes the advantages and overcomes the disadvantages of each method used alone. For laser systems, both the hardware and software for obtaining off-nadir measurements are complicated and expensive, while for the nadir view the conversion of laser pulse travel time to depth is straightforward. Hyperspectral systems can easily collect data across a full swath, but interpretation for water depth requires careful calibration and correction for transmittance through the atmosphere and water. Relative depths are apparent in displays of several subsets of hyperspectral data, for example, single blue-green wavelengths, endmembers that represent the pure water component of the data, or ratios of deep to shallow water endmembers. A relationship between one of these values and the depth measured by the aligned nadir laser can be determined, and then applied to the rest of the swath to obtain depth in physical units for the entire area covered. We demonstrate this technique using bathymetric charts as a proxy for laser data, and hyperspectral data taken by AVIRIS over Lake Tahoe and Key West.
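
    A minimal sketch of the fusion step described above: fit a relationship between a hyperspectral-derived quantity (for example the log of a blue-green band value or an endmember ratio) and laser or chart depths at the nadir points, then apply it across the swath. The log-linear form, function names and variables are illustrative assumptions rather than the paper's exact model.

        import numpy as np

        def fit_depth_model(band_value_at_nadir, laser_depth):
            """Fit depth = a * log(band_value) + b at the nadir pixels where laser depth exists."""
            x = np.log(band_value_at_nadir)
            a, b = np.polyfit(x, laser_depth, 1)
            return a, b

        def apply_depth_model(band_value_swath, a, b):
            """Apply the fitted relationship to every pixel of the hyperspectral swath."""
            return a * np.log(band_value_swath) + b

        # nadir_values, nadir_depths: co-located hyperspectral values and laser depths (hypothetical)
        # a, b = fit_depth_model(nadir_values, nadir_depths)
        # depth_map = apply_depth_model(swath_values, a, b)      # bathymetry in physical units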

  2. Identifying saltcedar with hyperspectral data and support vector machines

    Science.gov (United States)

    Saltcedar (Tamarix spp.) are a group of dense phreatophytic shrubs and trees that are invasive to riparian areas throughout the United States. This study determined the feasibility of using hyperspectral data and a support vector machine (SVM) classifier to discriminate saltcedar from other cover t...

  3. Nitrogen concentration estimation with hyperspectral LiDAR

    Directory of Open Access Journals (Sweden)

    O. Nevalainen

    2013-10-01

    Full Text Available Agricultural lands have a strong impact on global carbon dynamics and nitrogen availability. Monitoring changes in agricultural lands requires more efficient and accurate methods. The first prototype of a full waveform hyperspectral Light Detection and Ranging (LiDAR) instrument has been developed at the Finnish Geodetic Institute (FGI). The instrument efficiently combines the benefits of passive and active remote sensing sensors. It is able to produce 3D point clouds with spectral information included for every point, which offers great potential in the field of remote sensing of the environment. This study investigates the performance of the hyperspectral LiDAR instrument in nitrogen estimation. The investigation was conducted by finding vegetation indices sensitive to nitrogen concentration using hyperspectral LiDAR data and validating their performance in nitrogen estimation. The nitrogen estimation was performed by calculating 28 published vegetation indices for ten oat samples grown under different fertilization conditions. Reference data were acquired by laboratory nitrogen concentration analysis. The performance of the indices in nitrogen estimation was determined by linear regression and leave-one-out cross-validation. The results indicate that the hyperspectral LiDAR instrument has a good capability to estimate plant biochemical parameters such as nitrogen concentration. The instrument holds much potential in various environmental applications and provides a significant improvement to the remote sensing of the environment.
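
    The evaluation loop described above (linear regression of each index against measured nitrogen, scored by leave-one-out cross-validation) can be sketched as follows for a single index. The variable names are illustrative, and the details of the published workflow may differ.

        import numpy as np

        def loocv_r2(index_values, nitrogen):
            """Leave-one-out cross-validated prediction of nitrogen from one vegetation index
            using simple linear regression; returns the R^2 of the held-out predictions."""
            n = len(nitrogen)
            preds = np.empty(n)
            for i in range(n):
                mask = np.arange(n) != i
                a, b = np.polyfit(index_values[mask], nitrogen[mask], 1)
                preds[i] = a * index_values[i] + b
            ss_res = np.sum((nitrogen - preds) ** 2)
            ss_tot = np.sum((nitrogen - nitrogen.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        # indices: dict of 28 index arrays computed from the hyperspectral LiDAR returns (hypothetical)
        # scores = {name: loocv_r2(vals, nitrogen_lab) for name, vals in indices.items()}
        # best_index = max(scores, key=scores.get)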

  4. Biologically-inspired data decorrelation for hyper-spectral imaging

    Directory of Open Access Journals (Sweden)

    Ghita Ovidiu

    2011-01-01

    Full Text Available Abstract Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and, in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.

  5. Classification of Salmonella serotypes with hyperspectral microscope imagery

    Science.gov (United States)

    Previous research has demonstrated an optical method with acousto-optic tunable filter (AOTF) based hyperspectral microscope imaging (HMI) had potential for classifying gram-negative from gram-positive foodborne pathogenic bacteria rapidly and nondestructively with a minimum sample preparation. In t...

  6. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    Science.gov (United States)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
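
    A minimal sketch of the Spectral Angle Mapper step described above: each pixel spectrum is compared against a spectral library and assigned to the end-member with the smallest spectral angle, optionally subject to a maximum-angle threshold. The cube/library shapes and the threshold value are illustrative assumptions.

        import numpy as np

        def spectral_angle_mapper(cube, library, max_angle=None):
            """cube: (rows, cols, bands); library: (n_classes, bands).
            Returns a class index map (-1 where no library spectrum is within max_angle)."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands).astype(np.float64)
            p_norm = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
            l_norm = library / np.linalg.norm(library, axis=1, keepdims=True)
            cosines = np.clip(p_norm @ l_norm.T, -1.0, 1.0)
            angles = np.arccos(cosines)                    # (n_pixels, n_classes)
            classes = angles.argmin(axis=1)
            if max_angle is not None:
                classes[angles.min(axis=1) > max_angle] = -1
            return classes.reshape(rows, cols)

        # class_map = spectral_angle_mapper(aviris_cube, library_spectra, max_angle=0.1)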

  7. Enabling Searches on Wavelengths in a Hyperspectral Indices Database

    Science.gov (United States)

    Piñuela, F.; Cerra, D.; Müller, R.

    2017-10-01

    Spectral indices derived from hyperspectral reflectance measurements are powerful tools to estimate physical parameters in a non-destructive and precise way for several fields of application, among others vegetation health analysis, coastal and deep water constituents, geology, and atmosphere composition. In recent years, several micro-hyperspectral sensors have appeared, with both full-frame and push-broom acquisition technologies, while in the near future several hyperspectral spaceborne missions are planned to be launched. This is fostering the use of hyperspectral data in basic and applied research, causing a large number of spectral indices to be defined and used in various applications. Ad hoc search engines are therefore needed to retrieve the most appropriate indices for a given application. In traditional systems, query input parameters are limited to alphanumeric strings, while characteristics such as spectral range/bandwidth are not used in any existing search engine. Such information would be relevant, as it enables an inverse type of search: given the spectral capabilities of a given sensor or a specific spectral band, find all indices which can be derived from it. This paper describes a tool which enables such a search by using the central wavelength or spectral range used by a given index as a search parameter. This offers the ability to manage numeric wavelength ranges in order to select the indices which work best in a given set of wavelengths or wavelength ranges.
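
    The wavelength-based query described above reduces to a range-overlap filter over the index database. The sketch below assumes each index record stores the central wavelengths of the bands it uses; the record structure, catalog entries and names are illustrative, not the tool's actual schema.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class SpectralIndex:
            name: str
            band_centers_nm: List[float]                   # central wavelengths used by the index

        def indices_for_sensor(indices, sensor_ranges_nm):
            """Return the indices whose every required band falls inside at least one of the
            sensor's spectral ranges, given as (min_nm, max_nm) tuples."""
            def covered(wl):
                return any(lo <= wl <= hi for lo, hi in sensor_ranges_nm)
            return [idx for idx in indices if all(covered(w) for w in idx.band_centers_nm)]

        # Example: an NDVI-style index against a hypothetical VNIR sensor covering 450-900 nm
        catalog = [SpectralIndex("NDVI", [670.0, 800.0]),
                   SpectralIndex("SWIR moisture index", [860.0, 1640.0])]
        print([i.name for i in indices_for_sensor(catalog, [(450.0, 900.0)])])   # ['NDVI']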

  8. Analysis of hyperspectral fluorescence images for poultry skin tumor inspection

    Science.gov (United States)

    Kong, Seong G.; Chen, Yud-Ren; Kim, Intaek; Kim, Moon S.

    2004-02-01

    We present a hyperspectral fluorescence imaging system with a fuzzy inference scheme for detecting skin tumors on poultry carcasses. Hyperspectral images reveal spatial and spectral information useful for finding pathological lesions or contaminants on agricultural products. Skin tumors are not obvious because the visual signature appears as a shape distortion rather than a discoloration. Fluorescence imaging allows the visualization of poultry skin tumors more easily than reflectance. The hyperspectral image samples obtained for this poultry tumor inspection contain 65 spectral bands of fluorescence in the visible region of the spectrum at wavelengths ranging from 425 to 711 nm. The large amount of hyperspectral image data is compressed by use of a discrete wavelet transform in the spatial domain. Principal-component analysis provides an effective compressed representation of the spectral signal of each pixel in the spectral domain. A small number of significant features are extracted from two major spectral peaks of relative fluorescence intensity that have been identified as meaningful spectral bands for detecting tumors. A fuzzy inference scheme that uses a small number of fuzzy rules and Gaussian membership functions successfully detects skin tumors on poultry carcasses. Spatial-filtering techniques are used to significantly reduce false positives.

  9. Recent Advances in Techniques for Hyperspectral Image Processing

    Science.gov (United States)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; hide

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  10. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    Science.gov (United States)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well-established technology. Remote sensing application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Across a wide range of wavelengths, hyperspectral sensors are capable of precisely capturing narrow spectral bands. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts, and allow the experimental design to be controlled closely during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman from the Faculty of Forestry, Universiti Putra Malaysia in 2002, using the AISA sensor manufactured by Specim Ltd, Finland. The main objective has been to develop methods that are directly suited to practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the most important interests in current practice. Experience from these studies shows that retrieval of timber volume and tree discrimination using this system works well and is in some respects better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  11. Remote sensing of soil moisture using airborne hyperspectral data

    Science.gov (United States)

    The Institute for Technology Development (ITD) has developed an airborne hyperspectral sensor system that collects electromagnetic reflectance data of the terrain. The system consists of sensors for three different sections of the electromagnetic spectrum; the Ultra-Violet (UV), Visible/Near Infrare...

  12. a Hyperspectral Image Classification Method Using Isomap and Rvm

    Science.gov (United States)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and, more broadly, of remote sensing. Though various algorithms have been proposed to implement and improve this application, drawbacks remain in traditional classification methods. Further investigation of aspects such as dimension reduction, data mining, and the rational use of spatial information is therefore needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering that Euclidean distance is ill-suited to spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement the classification instead of support vector machines (SVM), for simplicity, generalization and sparsity; a probability result can thus be obtained rather than a less convincing binary one. Moreover, to take the spatial information of the hyperspectral image into account, we employ a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combined the probability results and spatial factors with a decision criterion to obtain the final classification result. To verify the proposed method, we conducted multiple experiments on standard hyperspectral images and compared the results with those of other methods. The results and several evaluation indices illustrate the effectiveness of our method.
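
    A minimal sketch of the dimension-reduction step described above is given below: the spectral angle replaces Euclidean distance when the neighbourhood graph is built for ISOMAP. It assumes a scikit-learn version whose Isomap accepts a precomputed distance matrix; the RVM stage is not shown.

```python
# Spectral angle (SA) in place of Euclidean distance for the ISOMAP
# neighbourhood graph.  Assumes scikit-learn's Isomap supports
# metric="precomputed" (available in recent versions).
import numpy as np
from sklearn.manifold import Isomap

def spectral_angle_matrix(X):
    """X: (n_pixels, n_bands).  Returns pairwise spectral angles in radians."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    return np.arccos(cos)

def isomap_embedding(X, n_neighbors=10, n_components=20):
    D = spectral_angle_matrix(X)
    iso = Isomap(n_neighbors=n_neighbors, n_components=n_components,
                 metric="precomputed")
    return iso.fit_transform(D)   # low-dimensional features fed to the RVM stage
```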

  13. Use of Aerial Hyperspectral Imaging For Monitoring Forest Health

    Science.gov (United States)

    Milton O. Smith; Nolan J. Hess; Stephen Gulick; Lori G. Eckhardt; Roger D. Menard

    2004-01-01

    This project evaluates the effectiveness of aerial hyperspectral digital imagery in the assessment of forest health of loblolly stands in central Alabama. The imagery covers 50 square miles in Bibb and Hale Counties, south of Tuscaloosa, AL, which include intensively managed forest industry sites and National Forest lands with multiple-use objectives. Loblolly stands...

  14. Infrared hyperspectral upconversion imaging using spatial object translation

    DEFF Research Database (Denmark)

    Kehlet, Louis Martinus; Sanders, Nicolai Højer; Tidemand-Lichtenberg, Peter

    2015-01-01

    In this paper hyperspectral imaging in the mid-infrared wavelength region is realised using nonlinear frequency upconversion. The infrared light is converted to the near-infrared region for detection with a Si-based CCD camera. The object is translated in a predefined grid by motorized actuators...

  15. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance, unmixing each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure of the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization into the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets demonstrate the effectiveness of the proposed model.
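
    As a reading aid, one plausible way to write down the model described above is sketched below; the notation (Y for the observed pixel spectra, A for the spectral library, X for the abundance matrix, L for the graph Laplacian, and lambda, beta for the regularization weights) is assumed here and is not taken from the paper.

$$
\min_{X \ge 0} \; \tfrac{1}{2}\,\lVert A X - Y \rVert_F^2 \;+\; \lambda\,\lVert X \rVert_{2,1} \;+\; \tfrac{\beta}{2}\,\operatorname{tr}\!\left(X L X^{\mathsf{T}}\right)
$$

    Here the l2,1 norm is the collaborative (row-sparsity) term and the trace term penalizes abundance differences between pixels that are neighbours in the graph; the ADMM algorithm mentioned in the abstract splits such an objective into subproblems with simple updates.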

  16. Bystander effect: Biological endpoints and microarray analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhry, M. Ahmad [Department of Medical Laboratory and Radiation Sciences, College of Nursing and Health Sciences, University of Vermont, 302 Rowell Building, Burlington, VT 05405 (United States) and DNA Microarray Facility, University of Vermont, Burlington, VT 05405 (United States)]. E-mail: mchaudhr@uvm.edu

    2006-05-11

    In cell populations exposed to ionizing radiation, the biological effects occur in a much larger proportion of cells than are estimated to be traversed by radiation. It has been suggested that irradiated cells are capable of providing signals to the neighboring unirradiated cells, resulting in damage to these cells. This phenomenon is termed the bystander effect. The bystander effect induces persistent, long-term, transmissible changes that result in delayed death and neoplastic transformation. Because the bystander effect is relevant to carcinogenesis, it could have significant implications for risk estimation for radiation exposure. The nature of the bystander effect signal and how it impacts the unirradiated cells remain to be elucidated. Examination of the changes in gene expression could provide clues to understanding the bystander effect and could define the signaling pathways involved in sustaining damage to these cells. Microarray technology serves as a tool to gain insight into the molecular pathways leading to the bystander effect. Using medium from irradiated normal human diploid lung fibroblasts as a model system, we examined gene expression alterations in bystander cells. The microarray data revealed that the radiation-induced gene expression profile in irradiated cells is different from that in unirradiated bystander cells, suggesting that the pathways leading to biological effects in the bystander cells are different from those in the directly irradiated cells. The genes known to be responsive to ionizing radiation were observed in irradiated cells. Several genes were upregulated in cells receiving media from irradiated cells. Surprisingly, no genes were found to be downregulated in these cells. A number of genes belonging to extracellular signaling, growth factors and several receptors were identified in bystander cells. Interestingly, 15 genes involved in cell communication processes were found to be upregulated. The induction of receptors and the cell

  17. Bystander effect: Biological endpoints and microarray analysis

    International Nuclear Information System (INIS)

    Chaudhry, M. Ahmad

    2006-01-01

    In cell populations exposed to ionizing radiation, the biological effects occur in a much larger proportion of cells than are estimated to be traversed by radiation. It has been suggested that irradiated cells are capable of providing signals to the neighboring unirradiated cells, resulting in damage to these cells. This phenomenon is termed the bystander effect. The bystander effect induces persistent, long-term, transmissible changes that result in delayed death and neoplastic transformation. Because the bystander effect is relevant to carcinogenesis, it could have significant implications for risk estimation for radiation exposure. The nature of the bystander effect signal and how it impacts the unirradiated cells remain to be elucidated. Examination of the changes in gene expression could provide clues to understanding the bystander effect and could define the signaling pathways involved in sustaining damage to these cells. Microarray technology serves as a tool to gain insight into the molecular pathways leading to the bystander effect. Using medium from irradiated normal human diploid lung fibroblasts as a model system, we examined gene expression alterations in bystander cells. The microarray data revealed that the radiation-induced gene expression profile in irradiated cells is different from that in unirradiated bystander cells, suggesting that the pathways leading to biological effects in the bystander cells are different from those in the directly irradiated cells. The genes known to be responsive to ionizing radiation were observed in irradiated cells. Several genes were upregulated in cells receiving media from irradiated cells. Surprisingly, no genes were found to be downregulated in these cells. A number of genes belonging to extracellular signaling, growth factors and several receptors were identified in bystander cells. Interestingly, 15 genes involved in cell communication processes were found to be upregulated. The induction of receptors and the cell

  18. Lipid Microarray Biosensor for Biotoxin Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Anup K.; Throckmorton, Daniel J.; Moran-Mirabal, Jose C.; Edel, Joshua B.; Meyer, Grant D.; Craighead, Harold G.

    2006-05-01

    We present the use of micron-sized lipid domains, patterned onto planar substrates and within microfluidic channels, to assay the binding of bacterial toxins via total internal reflection fluorescence microscopy (TIRFM). The lipid domains were patterned using a polymer lift-off technique and consisted of ganglioside-populated DSPC:cholesterol supported lipid bilayers (SLBs). Lipid patterns were formed on the substrates by vesicle fusion followed by polymer lift-off, which revealed micron-sized SLBs containing either ganglioside GT1b or GM1. The ganglioside-populated SLB arrays were then exposed to either Cholera toxin subunit B (CTB) or Tetanus toxin fragment C (TTC). Binding was assayed on planar substrates by TIRFM down to 1 nM concentration for CTB and 100 nM for TTC. Apparent binding constants extracted from three different models applied to the binding curves suggest that binding of a protein to a lipid-based receptor is strongly affected by the lipid composition of the SLB and by the substrate on which the bilayer is formed. Patterning of SLBs inside microfluidic channels also allowed the preparation of lipid domains with different compositions on a single device. Arrays within microfluidic channels were used to achieve segregation and selective binding from a binary mixture of the toxin fragments in one device. The binding and segregation within the microfluidic channels was assayed with epifluorescence as proof of concept. We propose that the method used for patterning the lipid microarrays on planar substrates and within microfluidic channels can be easily adapted to proteins or nucleic acids and can be used for biosensor applications and cell stimulation assays under different flow conditions. Keywords: microarray, ganglioside, polymer lift-off, cholera toxin, tetanus toxin, TIRFM, binding constant.
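
    The abstract mentions apparent binding constants extracted from models fitted to the binding curves. As a hedged illustration of the simplest such model (a single-site Langmuir isotherm), not necessarily one of the three models the authors used, a short fit with SciPy might look like this; the concentrations and signals below are invented.

```python
# Hedged sketch: fitting a single-site Langmuir isotherm to a TIRFM binding
# curve to extract an apparent dissociation constant Kd.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, f_max, kd):
    """Fraction of receptors bound at toxin concentration `conc` (same units as kd)."""
    return f_max * conc / (kd + conc)

# Illustrative data: CTB concentration (nM) vs background-corrected fluorescence.
conc   = np.array([1, 3, 10, 30, 100, 300], dtype=float)
signal = np.array([0.08, 0.21, 0.45, 0.71, 0.88, 0.95])

(f_max, kd), _ = curve_fit(langmuir, conc, signal, p0=[1.0, 20.0])
print(f"apparent Kd ~ {kd:.1f} nM, saturation signal ~ {f_max:.2f}")
```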

  19. cDNA microarray screening in food safety

    International Nuclear Information System (INIS)

    Roy, Sashwati; Sen, Chandan K.

    2006-01-01

    The cDNA microarray technology and related bioinformatics tools present a wide range of novel application opportunities. The technology may be productively applied to address food safety. In this mini-review article, we present an update highlighting late-breaking discoveries that demonstrate the vitality of cDNA microarray technology as a tool to analyze food safety with reference to microbial pathogens and genetically modified foods. In order to bring microarray technology into mainstream food safety, it is important to develop robust, user-friendly tools that may be applied in a field setting. In addition, there needs to be a standardized process for regulatory agencies to interpret and act upon microarray-based data. The cDNA microarray approach is an emergent technology in diagnostics. Its value lies in being able to provide complementary molecular insight when employed in addition to traditional tests for food safety, as part of a more comprehensive battery of tests.

  20. Utilization pattern of whole body computed tomography scanner

    International Nuclear Information System (INIS)

    Youn, Chul Ho; Lee, Sang Suk

    1986-01-01

    A computed tomography scanner (CT scanner) is one of the most expensive and sophisticated diagnostic tools and has already been utilized in many hospitals in Korea. The purchase price as well as the operating costs of a CT scanner are so high that even in the United States its installation is regulated by the government. In order to assess the efficiency of CT scanner utilization, the utilization pattern for CT scanning was analyzed at three general hospitals in Seoul. The results are as follows: 1. Five out of one thousand outpatients and five out of one hundred inpatients were CT scanned. 2. Eighty percent of the patients who were scanned were inpatients of the hospitals where the scanners are installed. 3. Head scans constituted 45.6 percent of examinations; internal medicine and neurosurgery accounted for 63.8 percent and 38.5 percent, respectively. 4. The rate of indication for CT scanning showed no statistically significant difference between insured and non-insured groups. 5. Computed tomography scanner units were operated 5.5 days a week on average, and the full-operation rate was 79.5% on average. 6. The major diagnoses made by head scanning were: hematoma (56.7%), infarction (12.6%), tumor (8.2%), and hydrocephalus (4.4%). 7. The number of patients undergoing CT scanning averaged 43 per week for each whole-body scanner unit.

  1. Versatile High Resolution Oligosaccharide Microarrays for Plant Glycobiology and Cell Wall Research

    DEFF Research Database (Denmark)

    Pedersen, Henriette Lodberg; Fangel, Jonatan Ulrik; McCleary, Barry

    2012-01-01

    Microarrays are powerful tools for high throughput analysis, and hundreds or thousands of molecular interactions can be assessed simultaneously using very small amounts of analytes. Nucleotide microarrays are well established in plant research, but carbohydrate microarrays are much less establish...

  2. A cell spot microarray method for production of high density siRNA transfection microarrays

    Directory of Open Access Journals (Sweden)

    Mpindi John-Patrick

    2011-03-01

    Full Text Available Background: High-throughput RNAi screening is widely applied in biological research, but it remains expensive and infrastructure-intensive, and conversion of many assays to HTS applications in microplate format is not feasible. Results: Here, we describe the optimization of a miniaturized cell spot microarray (CSMA) method, which facilitates utilization of the transfection microarray technique for disparate RNAi analyses. To promote rapid adaptation of the method, the concept has been tested with a panel of 92 adherent cell types, including primary human cells. We demonstrate the method in the systematic screening of 492 GPCR-coding genes for impact on the growth and survival of cultured human prostate cancer cells. Conclusions: The CSMA method facilitates reproducible preparation of highly parallel cell microarrays for large-scale gene knockdown analyses. This will be critical for expanding cell-based functional genetic screens to include more RNAi constructs and for allowing combinatorial RNAi analyses, multi-parametric phenotypic readouts, or comparative analysis of many different cell types.

  3. Tunable thin-film optical filters for hyperspectral microscopy

    Science.gov (United States)

    Favreau, Peter F.; Rich, Thomas C.; Prabhat, Prashant; Leavesley, Silas J.

    2013-02-01

    Hyperspectral imaging was originally developed for use in remote sensing applications. More recently, it has been applied to biological imaging systems, such as fluorescence microscopes. The ability to distinguish molecules based on spectral differences has been especially advantageous for identifying fluorophores in highly autofluorescent tissues. A key component of hyperspectral imaging systems is wavelength filtering. Each filtering technology used for hyperspectral imaging has corresponding advantages and disadvantages. Recently, a new optical filtering technology has been developed that uses multi-layered thin-film optical filters that can be rotated, with respect to incident light, to control the center wavelength of the pass-band. Compared to the majority of tunable filter technologies, these filters have superior optical performance including greater than 90% transmission, steep spectral edges and high out-of-band blocking. Hence, tunable thin-film optical filters present optical characteristics that may make them well-suited for many biological spectral imaging applications. An array of tunable thin-film filters was implemented on an inverted fluorescence microscope (TE 2000, Nikon Instruments) to cover the full visible wavelength range. Images of a previously published model, GFP-expressing endothelial cells in the lung, were acquired using a charge-coupled device camera (Rolera EM-C2, Q-Imaging). This model sample presents fluorescently-labeled cells in a highly autofluorescent environment. Linear unmixing of hyperspectral images indicates that thin-film tunable filters provide equivalent spectral discrimination to our previous acousto-optic tunable filter-based approach, with increased signal-to-noise characteristics. Hence, tunable multi-layered thin film optical filters may provide greatly improved spectral filtering characteristics and therefore enable wider acceptance of hyperspectral widefield microscopy.
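
    For orientation, the angle-tuning behaviour that such rotatable thin-film filters exploit is commonly summarised by the standard interference-filter relation below; the effective index n_eff is a property of the specific coating and is not given in the abstract.

$$
\lambda_c(\theta) \;\approx\; \lambda_0 \sqrt{1 - \left(\frac{\sin\theta}{n_{\mathrm{eff}}}\right)^{2}}
$$

    Here lambda_0 is the pass-band centre at normal incidence and theta the angle of incidence, so tilting the filter shifts the pass-band toward shorter wavelengths.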

  4. The French proposal for a high spatial resolution Hyperspectral mission

    Science.gov (United States)

    Carrère, Véronique; Briottet, Xavier; Jacquemoud, Stéphane; Marion, Rodolphe; Bourguignon, Anne; Chami, Malik; Chanussot, Jocelyn; Chevrel, Stéphane; Deliot, Philippe; Dumont, Marie; Foucher, Pierre-Yves; Gomez, Cécile; Roman-Minghelli, Audrey; Sheeren, David; Weber, Christiane; Lefèvre, Marie-José; Mandea, Mioara

    2014-05-01

    More than 25 years of airborne imaging spectroscopy and spaceborne sensors such as Hyperion or HICO have clearly demonstrated the ability of such a remote sensing technique to produce value-added information regarding surface composition and physical properties for a large variety of applications. Scheduled missions such as EnMAP and PRISMA attest to the increased interest of the scientific community in this type of remote sensing data. In France, a group of Science and Defence users of imaging spectrometry data (Groupe de Synthèse Hyperspectral, GSH) established an up-to-date review of possible applications, defined the instrument specifications required for accurate, quantitative retrieval of diagnostic parameters, and identified fields of application where imaging spectrometry is a major contribution. From these conclusions, CNES (the French Space Agency) decided on a phase 0 study for a hyperspectral mission concept, named at the time HYPXIM (HYPerspectral-X IMagery), whose main fields of application are vegetation biodiversity, coastal and inland waters, geosciences, the urban environment, atmospheric sciences, the cryosphere and Defence. Results pointed out applications where high spatial resolution was necessary and would not be covered by the other foreseen hyperspectral missions. The phase A study started at the beginning of 2013 based on the following HYPXIM characteristics: a hyperspectral camera covering the [0.4 - 2.5 µm] spectral range with an 8 m ground sampling distance (GSD) and a PAN camera with a 1.85 m GSD, onboard a mini-satellite platform. This phase A is currently on hold due to budget constraints. Nevertheless, the Science team is currently focusing on the preparation for the next CNES prospective meeting (March 2014), an important step for the future of the mission. This paper provides an update on the status of this mission and on new results obtained by the Science team.

  5. Colorectal cancer detection by hyperspectral imaging using fluorescence excitation scanning

    Science.gov (United States)

    Leavesley, Silas J.; Deal, Joshua; Hill, Shante; Martin, Will A.; Lall, Malvika; Lopez, Carmen; Rider, Paul F.; Rich, Thomas C.; Boudreaux, Carole W.

    2018-02-01

    Hyperspectral imaging technologies have shown great promise for biomedical applications. These techniques have been especially useful for detection of molecular events and characterization of cell, tissue, and biomaterial composition. Unfortunately, hyperspectral imaging technologies have been slow to translate to clinical devices - likely due to increased cost and complexity of the technology as well as long acquisition times often required to sample a spectral image. We have demonstrated that hyperspectral imaging approaches which scan the fluorescence excitation spectrum can provide increased signal strength and faster imaging, compared to traditional emission-scanning approaches. We have also demonstrated that excitation-scanning approaches may be able to detect spectral differences between colonic adenomas and adenocarcinomas and normal mucosa in flash-frozen tissues. Here, we report feasibility results from using excitation-scanning hyperspectral imaging to screen pairs of fresh tumoral and nontumoral colorectal tissues. Tissues were imaged using a novel hyperspectral imaging fluorescence excitation scanning microscope, sampling a wavelength range of 360-550 nm, at 5 nm increments. Image data were corrected to achieve a NIST-traceable flat spectral response. Image data were then analyzed using a range of supervised and unsupervised classification approaches within ENVI software (Harris Geospatial Solutions). Supervised classification resulted in >99% accuracy for single-patient image data, but only 64% accuracy for multi-patient classification (n=9 to date), with the drop in accuracy due to increased false-positive detection rates. Hence, initial data indicate that this approach may be a viable detection approach, but that larger patient sample sizes need to be evaluated and the effects of inter-patient variability studied.

  6. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.
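
    To give a feel for the band-by-band recursion the book advocates, the sketch below maintains per-pixel statistics with recursive update equations as each new band arrives, instead of recomputing them from the whole cube. It is a generic running-statistics example (a Welford-style update), not one of the book's RHBP algorithms.

```python
# Band-by-band recursion in the spirit of RHBP: statistics are updated with
# recursive equations as each band arrives, never revisiting earlier bands.
import numpy as np

class RecursiveBandStats:
    def __init__(self):
        self.k = 0          # number of bands seen so far
        self.mean = None    # per-pixel running mean across bands
        self.m2 = None      # per-pixel running sum of squared deviations

    def update(self, band):
        """band: 2-D array (rows x cols) for the newly acquired spectral band."""
        self.k += 1
        if self.mean is None:
            self.mean = band.astype(float)
            self.m2 = np.zeros_like(self.mean)
            return
        delta = band - self.mean
        self.mean += delta / self.k            # recursive mean update
        self.m2 += delta * (band - self.mean)  # Welford-style variance update

    def variance(self):
        return self.m2 / max(self.k - 1, 1)
```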

  7. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    Science.gov (United States)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and the joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed that extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through a traditional pooling layer that has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
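
    A hedged PyTorch sketch of the two ingredients highlighted above, 3-D convolution over (band, height, width) patches and a spatial pyramid pooling layer built from adaptive pooling, is shown below. Layer counts and kernel sizes are placeholders rather than the architecture reported in the paper.

```python
# Sketch: 3-D convolutions followed by spatial pyramid pooling (SPP).
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    def __init__(self, n_classes, pyramid=(1, 2, 4)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
        )
        self.pyramid = pyramid
        feat_len = 16 * sum(p * p for p in pyramid)
        self.classifier = nn.Linear(feat_len, n_classes)

    def forward(self, x):                 # x: (batch, 1, bands, h, w)
        f = self.features(x)
        pooled = []
        for p in self.pyramid:            # SPP: pool spatial dims to p x p,
            pool = nn.AdaptiveMaxPool3d((1, p, p))   # collapse the band axis
            pooled.append(pool(f).flatten(1))
        return self.classifier(torch.cat(pooled, dim=1))
```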

  8. Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context

    OpenAIRE

    Julie Transon; Raphaël d’Andrimont; Alexandre Maugnard; Pierre Defourny

    2018-01-01

    In the last few decades, researchers have developed a plethora of hyperspectral Earth Observation (EO) remote sensing techniques, analysis and applications. While hyperspectral exploratory sensors are demonstrating their potential, Sentinel-2 multispectral satellite remote sensing is now providing free, open, global and systematic high resolution visible and infrared imagery at a short revisit time. Its recent launch suggests potential synergies between multi- and hyper-spectral data. This st...

  9. SVM Classifiers: The Objects Identification on the Base of Their Hyperspectral Features

    Directory of Open Access Journals (Sweden)

    Demidova Liliya

    2017-01-01

    Full Text Available The problem of object identification on the basis of hyperspectral features is considered. We propose using SVM classifiers built on a modified PSO algorithm, adapted to the specifics of this identification problem. Results of object identification based on hyperspectral features using the SVM classifiers are presented.
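
    A hedged sketch of the general idea, a particle swarm searching the SVM hyperparameters (C, gamma) with cross-validation as the fitness, is given below. The authors' modified PSO is not reproduced; the swarm constants are conventional defaults and the feature vectors are assumed to be per-object hyperspectral signatures.

```python
# Plain PSO over (log10 C, log10 gamma) scored by 3-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_tune_svm(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -5.0]), np.array([4.0, 1.0])   # log10(C), log10(gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1], kernel="rbf")
        return cross_val_score(clf, X, y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()

    return 10 ** gbest[0], 10 ** gbest[1]   # best (C, gamma)
```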

  10. Scanner qualification with IntenCD based reticle error correction

    Science.gov (United States)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance matrices during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance matrices established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line-width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2TM mask inspection tool with IntenCDTM technology can scan the mask at high speed, offer full mask coverage, and accurately assess all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.
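
    The bookkeeping described above, subtracting a mask-induced CDU map from its wafer counterpart to estimate the scanner-induced residual, can be sketched in a few lines of NumPy. The optional MEEF scaling is an assumption on my part; the abstract only states that the mask map is subtracted from the wafer map.

```python
# Hedged sketch: wafer CDU map minus (optionally MEEF-scaled) mask CD map,
# leaving a residual that approximates the scanner-induced contribution.
import numpy as np

def scanner_induced_cdu(wafer_cd_map, mask_cd_map, meef=1.0):
    """Both maps in nm, already resampled onto the same field grid."""
    residual = wafer_cd_map - meef * mask_cd_map
    stats = {
        "mean_nm": float(residual.mean()),
        "3sigma_nm": float(3.0 * residual.std()),
        "range_nm": float(residual.max() - residual.min()),
    }
    return residual, stats
```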

  11. Quality assurance for MR stereotactic imaging for three Siemens scanners

    International Nuclear Information System (INIS)

    Kozubikova, P.; Novotny, J. Jr.; Kulhova, K.; Mihalova, P.; Tamasova, J.; Veselsk, T.

    2014-01-01

    Quality assurance of stereotactic imaging, especially with MRI (magnetic resonance imaging), is a complex issue. It can be divided into the basic verification and commissioning of a particular new scanner or a new MRI scanning protocol that is being implemented in clinical practice, and the routine quality assurance performed for each single radiosurgical case. The aim of this study was the assessment of geometric distortion in MRI with a special PTGR (Physikalisch-Technische Gesellschaft fuer Radiologie - GmbH, Tuebingen, Germany) target phantom. The PTGR phantom consists of 21 three-dimensional cross-hairs filled with contrast medium. The cross-hairs are positioned at known Leksell coordinates with a precision of better than 0.1 mm and cover the whole stereotactic space. The phantom can be fixed in the Leksell stereotactic frame, and thus stereotactic imaging procedures can be reproduced following exactly the same steps as for a real patient, including stereotactic image definition in the Leksell GammaPlan. Since the geometric position (stereotactic coordinates) of each cross-hair is known from the construction of the phantom, it can be compared with the actual Leksell coordinates measured from the stereotactic MRI. Deviations between expected and actual coordinates provide information about the level of distortion. The measured distortions demonstrated satisfactory accuracy and precision for stereotactic localization on the 1.5 T Siemens Magnetom Avanto, 1.5 T Siemens Magnetom Symphony and 3 T Siemens Magnetom Skyra scanners (Na Homolce Hospital, Prague). The mean distortions for these MR scanners with the standard imaging protocol (T1-weighted 3D images) were 0.8 mm, 1.1 mm and 1.1 mm, and the maximum distortions were 1.3 mm, 1.9 mm and 2.2 mm, respectively. A dependence of the distortions on slice orientation and on the type of imaging protocol was detected. Image distortions are also a property of each particular scanner; the worst distortions were observed for the 3T
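
    The comparison described above reduces to a small calculation: known phantom cross-hair positions in Leksell coordinates versus the positions measured on the stereotactic MR images. A minimal sketch follows; it assumes both coordinate sets are already expressed in the same frame.

```python
# Mean and maximum 3-D deviation between expected and measured cross-hair
# positions, reported in mm as in the abstract.
import numpy as np

def distortion_stats(expected_xyz, measured_xyz):
    """Both arrays: (21, 3) Leksell coordinates in mm (one row per cross-hair)."""
    dev = np.linalg.norm(measured_xyz - expected_xyz, axis=1)
    return {"mean_mm": float(dev.mean()),
            "max_mm": float(dev.max()),
            "per_crosshair_mm": dev}
```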

  12. Microintaglio Printing for Soft Lithography-Based in Situ Microarrays

    Directory of Open Access Journals (Sweden)

    Manish Biyani

    2015-07-01

    Full Text Available Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, high density, high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called "microintaglio printing technology", for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) the messenger RNA or protein from the µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo- to giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is written not comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology in the postgenomic era.

  13. Hyperspectral Soil Mapper (HYSOMA) software interface: Review and future plans

    Science.gov (United States)

    Chabrillat, Sabine; Guillaso, Stephane; Eisele, Andreas; Rogass, Christian

    2014-05-01

    With the upcoming launch of the next generation of hyperspectral satellites that will routinely deliver high spectral resolution images for the entire globe (e.g. EnMAP, HISUI, HyspIRI, HypXIM, PRISMA), an increasing demand for the availability/accessibility of hyperspectral soil products is coming from the geoscience community. Indeed, many robust methods for the prediction of soil properties based on imaging spectroscopy already exist and have been successfully used for a wide range of airborne soil mapping applications. Nevertheless, these methods require expert know-how and fine-tuning, which means they are used sparingly. More development is needed toward easy-to-access soil toolboxes as a major step toward the operational use of hyperspectral soil products for monitoring and modelling Earth surface processes, allowing non-experienced users to obtain new information from inexpensive software packages in which repeatability of the results is an important prerequisite. In this frame, based on the EU-FP7 EUFAR (European Facility for Airborne Research) project and the EnMAP satellite science program, higher-performing soil algorithms were developed at the GFZ German Research Center for Geosciences as demonstrators for end-to-end processing chains with harmonized quality measures. The algorithms were built into the HYSOMA (Hyperspectral SOil MApper) software interface, providing an experimental platform for soil mapping applications of hyperspectral imagery that gives the choice of multiple algorithms for each soil parameter. The software interface focuses on the fully automatic generation of semi-quantitative soil maps such as soil moisture, soil organic matter, iron oxide, clay content, and carbonate content. Additionally, a field calibration option calculates fully quantitative soil maps provided ground truth soil data are available. The implemented soil algorithms have been tested and validated using extensive in-situ ground truth data sets. The source of the HYSOMA
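
    As a hedged illustration of the kind of semi-quantitative spectral-feature analysis that soil toolboxes of this sort build on, the sketch below computes a continuum-removed absorption depth around the ~2200 nm clay feature. The wavelengths and shoulder positions are typical literature values, not HYSOMA's actual parameterisation.

```python
# Continuum removal around a diagnostic absorption and use of the feature
# depth as a semi-quantitative indicator (generic example only).
import numpy as np

def absorption_depth(wavelengths_nm, reflectance, left=2120.0, right=2260.0):
    """Continuum-removed depth of the absorption between two shoulder wavelengths."""
    i0 = int(np.argmin(np.abs(wavelengths_nm - left)))
    i1 = int(np.argmin(np.abs(wavelengths_nm - right)))
    w, r = wavelengths_nm[i0:i1 + 1], reflectance[i0:i1 + 1]

    # Straight-line continuum between the two shoulders, then divide it out.
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    removed = r / continuum
    return 1.0 - removed.min()        # 0 = no feature, larger = deeper absorption
```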

  14. Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    National Research Council Canada - National Science Library

    Smetek, Timothy E

    2007-01-01

    This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors...

  15. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes

    Directory of Open Access Journals (Sweden)

    Dilip K. Prasad

    2016-03-01

    Full Text Available We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but on classifying the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data are compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for the medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on board during field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% on the other dataset for all satellite sensors. The potential of applying the algorithms to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results comparing the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data are included.
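
    A minimal sketch of the classification scheme described above follows: ocean-pixel spectra are white-balanced by a bright (cloud) pixel to cancel the unknown downwelling irradiance and then matched to the nearest entry of a pre-calibrated lookup table. The matching metric used here (a cosine/spectral-angle score) is an assumption; the abstract does not specify it.

```python
# White-balance by a cloud pixel, then nearest-class assignment from a LUT.
import numpy as np

def classify_ocean_pixels(ocean_radiance, cloud_radiance, lut_classes):
    """ocean_radiance: (n_pixels, n_bands); cloud_radiance: (n_bands,);
    lut_classes: (n_classes, n_bands) white-balanced reference spectra."""
    wb = ocean_radiance / cloud_radiance            # cancel the illumination term

    def normalize(a):
        return a / np.linalg.norm(a, axis=-1, keepdims=True)

    cos = normalize(wb) @ normalize(lut_classes).T  # (n_pixels, n_classes)
    return cos.argmax(axis=1)                       # index of best-matching class
```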

  16. Astrometric properties of the Tautenburg Plate Scanner

    Science.gov (United States)

    Brunzendorf, Jens; Meusinger, Helmut

    The Tautenburg Plate Scanner (TPS) is an advanced plate-measuring machine run by the Thüringer Landessternwarte Tautenburg (Karl Schwarzschild Observatory), where the machine is housed. It is capable of digitising photographic plates up to 30 cm × 30 cm in size. In our poster, we reported on tests and preliminary results of its astrometric properties. The essential components of the TPS consist of an x-y table movable between an illumination system and a direct imaging system. A telecentric lens images the light transmitted through the photographic emulsion onto a CCD line of 6000 pixels of 10 µm square size each. All components are mounted on a massive air-bearing table. Scanning is performed in lanes of up to 55 mm width by moving the x-y table in a continuous drift-scan mode perpendicular to the CCD line. The analogue output from the CCD is digitised to 12 bit with a total signal/noise ratio of 1000 : 1, corresponding to a photographic density range of three. The pixel map is produced as a series of optionally overlapping lane scans. The pixel data are stored onto CD-ROM or DAT. A Tautenburg Schmidt plate 24 cm × 24 cm in size is digitised within 2.5 hours, resulting in 1.3 GB of data. Subsequent high-level data processing is performed off-line on other computers. During the scanning process, the geometry of the optical components is kept fixed. The optimal focussing of the optics is performed prior to the scan. Due to the telecentric lens, refocussing is not required. Therefore, the main sources of astrometric error (besides the emulsion itself) are mechanical imperfections in the drive system, which have to be divided into random and systematic ones. The r.m.s. repeatability over the whole plate, as measured by repeated scans of the same plate, is about 0.5 µm for each axis. The mean plate-to-plate accuracy of the object positions on two plates with the same epoch and the same plate centre has been determined to be about 1 µm. This accuracy is comparable to

  17. The application of DNA microarrays in gene expression analysis.

    Science.gov (United States)

    van Hal, N L; Vorst, O; van Houwelingen, A M; Kok, E J; Peijnenburg, A; Aharoni, A; van Tunen, A J; Keijer, J

    2000-03-31

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed, comprising array manufacturing and design, array hybridisation, scanning, and data handling. Furthermore, it is discussed how DNA microarrays can be applied in the fields of safety, functionality and health of food, and of gene discovery and pathway engineering in plants.

  18. Miniaturized Fourier-plane fiber scanner for OCT endoscopy

    International Nuclear Information System (INIS)

    Vilches, Sergio; Kretschmer, Simon; Ataman, Çağlar; Zappe, Hans

    2017-01-01

    A forward-looking endoscopic optical coherence tomography (OCT) probe featuring a Fourier-plane fiber scanner is designed, manufactured and characterized. In contrast to common image-plane fiber scanners, the Fourier-plane scanner is a telecentric arrangement that eliminates vignetting and spatial resolution variations across the image plane. To scan the OCT beam in a spiral pattern, a tubular piezoelectric actuator is used to resonate an optical fiber bearing a collimating GRIN lens at its tip. The free-end of the GRIN lens sits at the back focal plane of an objective lens, such that its rotation replicates the beam angles in the collimated region of a classical telecentric 4f optical system. Such an optical arrangement inherently has a low numerical aperture combined with a relatively large field-of-view, rendering it particularly useful for endoscopic OCT imaging. Furthermore, the optical train of the Fourier-plane scanner is shorter than that of a comparable image-plane scanner by one focal length of the objective lens, significantly shortening the final arrangement. As a result, enclosed within a 3D printed housing of 2.5 mm outer diameter and 15 mm total length, the developed probe is the most compact forward-looking endoscopic OCT imager to date. Due to its compact form factor and compatibility with real-time OCT imaging, the developed probe is also ideal for use in the working channel of flexible endoscopes as a potential optical biopsy tool. (paper)

  19. Miniaturized Fourier-plane fiber scanner for OCT endoscopy

    Science.gov (United States)

    Vilches, Sergio; Kretschmer, Simon; Ataman, Çağlar; Zappe, Hans

    2017-10-01

    A forward-looking endoscopic optical coherence tomography (OCT) probe featuring a Fourier-plane fiber scanner is designed, manufactured and characterized. In contrast to common image-plane fiber scanners, the Fourier-plane scanner is a telecentric arrangement that eliminates vignetting and spatial resolution variations across the image plane. To scan the OCT beam in a spiral pattern, a tubular piezoelectric actuator is used to resonate an optical fiber bearing a collimating GRIN lens at its tip. The free-end of the GRIN lens sits at the back focal plane of an objective lens, such that its rotation replicates the beam angles in the collimated region of a classical telecentric 4f optical system. Such an optical arrangement inherently has a low numerical aperture combined with a relatively large field-of-view, rendering it particularly useful for endoscopic OCT imaging. Furthermore, the optical train of the Fourier-plane scanner is shorter than that of a comparable image-plane scanner by one focal length of the objective lens, significantly shortening the final arrangement. As a result, enclosed within a 3D printed housing of 2.5 mm outer diameter and 15 mm total length, the developed probe is the most compact forward-looking endoscopic OCT imager to date. Due to its compact form factor and compatibility with real-time OCT imaging, the developed probe is also ideal for use in the working channel of flexible endoscopes as a potential optical biopsy tool.
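
    For readers unfamiliar with spiral scanning, the sketch below generates the kind of drive signals such a resonant fiber scanner typically uses: two quadrature sinusoids near the fibre's resonance with a slowly growing amplitude envelope. Frequencies and amplitudes are illustrative and are not taken from the paper.

```python
# Quadrature drive with a linearly growing envelope -> outward spiral of the
# beam angle at the Fourier plane (illustrative parameters only).
import numpy as np

def spiral_drive(f_res_hz=1_000.0, n_turns=100, samples_per_turn=64, a_max=1.0):
    t = np.arange(n_turns * samples_per_turn) / (f_res_hz * samples_per_turn)
    envelope = np.linspace(0.0, a_max, t.size)       # linearly growing amplitude
    x = envelope * np.cos(2 * np.pi * f_res_hz * t)  # drive for one electrode pair
    y = envelope * np.sin(2 * np.pi * f_res_hz * t)  # 90-degree phase-shifted pair
    return t, x, y
```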

  20. Moths on the Flatbed Scanner: The Art of Joseph Scheer

    Directory of Open Access Journals (Sweden)

    Stephen L. Buchmann

    2011-12-01

    Full Text Available During the past decade a few artists and even fewer entomologists have discovered flatbed scanning technology, using extreme-resolution graphic arts scanners to acquire high-magnification digital images of plants, animals and inanimate objects. They are not just for trip receipts anymore. The special ability of certain scanners to image thick objects is discussed, along with technical features of the scanners including magnification, color depth and shadow detail. The work of pioneering scanner artist Joseph Scheer of New York's Alfred University is highlighted. Representative flatbed-scanned images of moths are illustrated along with the techniques used to produce them. Collecting and preparing moths, and other objects, for scanning are described. Highlights of Professor Scheer's Fulbright sabbatical year in Arizona and Sonora, Mexico are presented, along with comments on moths in science, folklore, art and pop culture. The use of flatbed scanners is offered as a relatively new method for visualizing small objects while acquiring large files for creating archival inkjet prints for display and sale.

  1. Characterization of a Large, Low-Cost 3D Scanner

    Directory of Open Access Journals (Sweden)

    Jeremy Straub

    2015-01-01

    Full Text Available Imagery-based 3D scanning can be performed by scanners with multiple form factors, ranging from small and inexpensive scanners requiring manual movement around a stationary object to large, freestanding (nearly instantaneous) units. Small mobile units are problematic for use in scanning living creatures, which may be unwilling or unable to (or, in the case of the very young and animals, unaware of the need to) hold a fixed position for an extended period of time. Alternatively, very high cost scanners that can capture a complete scan within a few seconds are available, but they are cost-prohibitive for some applications. This paper seeks to assess the performance of a large, low-cost 3D scanner, presented in prior work, which is able to concurrently capture imagery from all around an object. It provides the capabilities of the large, freestanding units at a price point akin to the smaller, mobile ones. This allows access to 3D scanning technology (particularly for applications requiring instantaneous imaging) at a lower cost. Problematically, prior analysis of the scanner's performance was extremely limited. This paper characterizes the efficacy of the scanner for scanning both inanimate objects and humans. Given the importance of lighting to visible-light scanning systems, the scanner's performance under multiple lighting configurations is evaluated, characterizing its sensitivity to lighting design.

  2. Tissue Microarray Technology - A Brief Review

    Directory of Open Access Journals (Sweden)

    Ramya S Vokuda

    2018-01-01

    Full Text Available In this era of modern revolution in the field of medical laboratory technology, everyone is aiming at taking innovations from the laboratory to the bedside. One such technique that is most relevant to the pathology community is Tissue Microarray (TMA) technology. It is becoming quite popular amongst all members of this family, from laboratory scientists to clinicians and from residents to technologists. The popularity of this technique is attributed to its cost-effectiveness and time-saving protocols. Though every technique is accompanied by disadvantages, the benefits outnumber them. This technique is very versatile, as many downstream molecular assays such as immunohistochemistry, cytogenetic studies, Fluorescence In Situ Hybridisation (FISH), etc., can be carried out on a single slide with multiple samples. It is a very practical approach that effectively aids in identifying novel biomarkers in cancer diagnostics and therapeutics. It helps in assessing molecular markers on a large scale very quickly. Also, quality assurance protocols in the pathology laboratory have exploited TMA to a great extent. However, the application of TMA technology extends beyond oncology. This review focuses on the different aspects of this technology, such as the construction of TMAs, instrumentation, types, advantages and disadvantages, and the utilisation of the technique in various disease conditions.

  3. Tissue Microarray Analysis Applied to Bone Diagenesis.

    Science.gov (United States)

    Mello, Rafael Barrios; Silva, Maria Regina Regis; Alves, Maria Teresa Seixas; Evison, Martin Paul; Guimarães, Marco Aurelio; Francisco, Rafaella Arrabaca; Astolphi, Rafael Dias; Iwamura, Edna Sadayo Miazato

    2017-01-04

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens. Standard hematoxylin and eosin, periodic acid-Schiff and silver methenamine, and picrosirius red staining, and CD31 and CD34 immunohistochemistry were applied to TMA sections. Osteocyte and osteocyte lacuna counts, percent bone matrix loss, and fungal spheroid element counts could be measured and collagen fibre bundles observed in all specimens. Decalcification with 7% nitric acid proceeded more rapidly than with 0.5 M EDTA and may offer better preservation of histological and cellular structure. No endothelial cells could be detected using CD31 and CD34 immunohistochemistry. Correlation between osteocytes per lacuna and age at death may reflect reported age-related responses to microdamage. Methodological limitations and caveats, and results of the TMA analysis of post mortem diagenesis in bone are discussed, and implications for DNA survival and recovery considered.

  4. Transcriptome analysis of zebrafish embryogenesis using microarrays.

    Directory of Open Access Journals (Sweden)

    Sinnakaruppan Mathavan

    2005-08-01

    Full Text Available Zebrafish (Danio rerio) is a well-recognized model for the study of vertebrate developmental genetics, yet at the same time little is known about the transcriptional events that underlie zebrafish embryogenesis. Here we have employed microarray analysis to study the temporal activity of developmentally regulated genes during zebrafish embryogenesis. Transcriptome analysis at 12 different embryonic time points covering five different developmental stages (maternal, blastula, gastrula, segmentation, and pharyngula) revealed a highly dynamic transcriptional profile. Hierarchical clustering, stage-specific clustering, and algorithms to detect the onset and peak of gene expression revealed clearly demarcated transcript clusters with maximum gene activity at distinct developmental stages as well as co-regulated expression of gene groups involved in dedicated functions such as organogenesis. Our study also revealed a previously unidentified cohort of genes that are transcribed prior to the mid-blastula transition, a time point earlier than when the zygotic genome was traditionally thought to become active. Here we provide, for the first time to our knowledge, a comprehensive list of developmentally regulated zebrafish genes and their expression profiles during embryogenesis, including novel information on the temporal expression of several thousand previously uncharacterized genes. The expression data generated from this study are accessible to all interested scientists from our institute resource database (http://giscompute.gis.a-star.edu.sg/~govind/zebrafish/data_download.html).
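
    A hedged sketch of the sort of analysis described above, hierarchical clustering of per-gene expression time courses across the 12 time points, is given below. The linkage and distance choices are common defaults, not necessarily those used in the study.

```python
# Hierarchical clustering of standardised expression time courses.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_time_courses(expr, n_clusters=10):
    """expr: (n_genes, 12) expression values, one column per embryonic time point."""
    # Standardise each gene's profile so clusters reflect shape, not magnitude.
    z = (expr - expr.mean(axis=1, keepdims=True)) / (expr.std(axis=1, keepdims=True) + 1e-9)
    tree = linkage(z, method="average", metric="correlation")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```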

  5. Dental impressions using 3D digital scanners: virtual becomes reality.

    Science.gov (United States)

    Birnbaum, Nathan S; Aaronson, Heidi B

    2008-10-01

    The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.

  6. Cell-Based Microarrays for In Vitro Toxicology

    Science.gov (United States)

    Wegener, Joachim

    2015-07-01

    DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.

  7. High throughput screening of starch structures using carbohydrate microarrays

    DEFF Research Database (Denmark)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg

    2016-01-01

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated...

  8. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    黄承志; 李原芳; 黄新华; 范美坤

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on a 10 µm carboxylate-functional bead surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the bead surface depends on the pH of the aqueous solution, the concentration of the DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of 20-mer single-stranded DNA probes microarrayed on the bead surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism studies show that the binding mode of the DNA probes on the bead surface is nearly parallel to the surface.

  9. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on a 10 µm carboxylate-functional bead surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the bead surface depends on the pH of the aqueous solution, the concentration of the DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of 20-mer single-stranded DNA probes microarrayed on the bead surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism studies show that the binding mode of the DNA probes on the bead surface is nearly parallel to the surface.

  10. Rapid Diagnosis of Bacterial Meningitis Using a Microarray

    Directory of Open Access Journals (Sweden)

    Ren-Jy Ben

    2008-06-01

    Conclusion: The microarray method provides a more accurate and rapid diagnostic tool for bacterial meningitis compared to traditional culture methods. Clinical application of this new technique may reduce the potential risk of delay in treatment.

  11. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing
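
    Although the abstract is truncated here, the problem it names (noisy per-gene variance estimates entering the t-statistic) is commonly addressed by shrinking each gene's variance toward a pooled value. The sketch below is a generic illustration of that idea, not the estimator proposed in the paper.

```python
# Generic variance shrinkage toward a pooled estimate (illustrative only).
import numpy as np

def shrunken_variances(data, weight=0.5):
    """data: (n_genes, n_replicates) log-ratios.
    weight in [0, 1] pulls each gene's variance toward the pooled variance."""
    s2 = data.var(axis=1, ddof=1)          # noisy per-gene variances
    s2_pooled = s2.mean()                  # stable but gene-agnostic estimate
    return (1.0 - weight) * s2 + weight * s2_pooled
```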

  12. Novel Protein Microarray Technology to Examine Men with Prostate Cancer

    National Research Council Canada - National Science Library

    Lilja, Hans

    2005-01-01

    The authors developed a novel macro and nanoporous silicon surface for protein microarrays to facilitate high-throughput biomarker discovery, and high-density protein-chip array analyses of complex biological samples...

  13. The Hyperspectral Imager for the Coastal Ocean (HICO (trademark)) Provides a New View of the Coastal Ocean

    Science.gov (United States)

    2012-02-09

    The calibrated data are then sent to NRL Stennis Space Center (NRL-SSC) for further processing using the NRL SSC Automated Processing System (APS...hyperspectral sensor in space we have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch

  14. Support Vector Machines for Hyperspectral Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony; Cromp, R. F.

    1998-01-01

    The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96% and 87% correct for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.
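
    A minimal sketch of this kind of pixel-wise SVM classification, using scikit-learn rather than the authors' original implementation (the data shapes, class count and kernel settings below are illustrative assumptions, not the scene used in the study):

        # Pixel-wise SVM classification of hyperspectral spectra, using all bands
        # directly without a prior feature-selection step.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        n_pixels, n_bands, n_classes = 5000, 220, 16           # placeholder sizes
        X = np.random.rand(n_pixels, n_bands)                   # one spectrum per row
        y = np.random.randint(0, n_classes, n_pixels)           # ground-truth labels

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")          # RBF kernel copes with high dimensionality
        clf.fit(X_train, y_train)
        print("overall accuracy:", clf.score(X_test, y_test))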

  15. Hyperspectral optical imaging of two different species of lepidoptera

    Directory of Open Access Journals (Sweden)

    Vukusic Pete

    2011-01-01

    Full Text Available Abstract In this article, we report a hyperspectral optical imaging application for measurement of the reflectance spectra of photonic structures that produce structural colors with high spatial resolution. The measurement of the spectral reflectance function is exemplified in the butterfly wings of two different species of Lepidoptera: the blue iridescence reflected by the nymphalid Morpho didius and the green iridescence of the papilionid Papilio palinurus. Color coordinates from reflectance spectra were calculated taking into account human spectral sensitivity. For each butterfly wing, the observed color is described by a characteristic color map in the chromaticity diagram and spreads over a limited volume in the color space. The results suggest that variability in the reflectance spectra is correlated with different random arrangements in the spatial distribution of the scales that cover the wing membranes. Hyperspectral optical imaging opens new ways for the non-invasive study and classification of different forms of irregularity in structural colors.

  16. Camouflage target detection via hyperspectral imaging plus information divergence measurement

    Science.gov (United States)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2016-01-01

    Target detection is one of the most important applications in remote sensing. Accurate camouflage target discrimination now often relies on spectral imaging techniques, owing to their high-resolution spectral/spatial information acquisition ability and the wide range of available data processing methods. In this paper, hyperspectral imaging together with a spectral information divergence measure is used to solve the camouflage target detection problem. A self-developed visible-band hyperspectral imaging device is used to collect data cubes of an experimental scene, and spectral information divergences are then computed to discriminate camouflaged targets and anomalies. Full-band information divergences are measured to evaluate the target detection effect visually and quantitatively. Information divergence measurement proves to be a low-cost and effective tool for target detection and can be further developed for other target detection applications beyond the spectral imaging technique.
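
    The spectral information divergence (SID) between a pixel spectrum and a reference spectrum is usually defined from the two spectra normalized to probability-like vectors; a small sketch under that standard definition (the spectra and any detection threshold below are placeholders, not the study's data):

        # Spectral information divergence: larger values mean the pixel spectrum is
        # less similar to the reference spectrum.
        import numpy as np

        def sid(x, y, eps=1e-12):
            p = x / (x.sum() + eps)            # normalize spectra to sum to 1
            q = y / (y.sum() + eps)
            p, q = p + eps, q + eps            # avoid log(0)
            return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

        reference = np.array([0.12, 0.18, 0.33, 0.41, 0.25])   # camouflage reference spectrum
        pixel     = np.array([0.10, 0.20, 0.30, 0.45, 0.22])   # pixel under test
        print(sid(pixel, reference))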

  17. Overhead longwave infrared hyperspectral material identification using radiometric models

    Energy Technology Data Exchange (ETDEWEB)

    Zelinski, M. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-09

    Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.

  18. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  19. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  20. High-throughput optical system for HDES hyperspectral imager

    Science.gov (United States)

    Václavík, Jan; Melich, Radek; Pintr, Pavel; Pleštil, Jan

    2015-01-01

    Affordable, long-wave infrared hyperspectral imaging calls for the use of an uncooled FPA with high-throughput optics. This paper describes the design of the optical part of a stationary hyperspectral imager in a spectral range of 7-14 μm with a field of view of 20°×10°. The imager employs a push-broom method realized by a scanning mirror. High throughput and a demand for simplicity and rigidity led to a fully refractive design with highly aspheric surfaces and off-axis positioning of the detector array. The design was optimized to exploit the machinability of infrared materials by the SPDT method and simple assembly.

  1. Universal Reference RNA as a standard for microarray experiments

    Directory of Open Access Journals (Sweden)

    Fero Michael

    2004-03-01

    Full Text Available Abstract Background Obtaining reliable and reproducible two-color microarray gene expression data is critically important for understanding the biological significance of perturbations made on a cellular system. Microarray design, RNA preparation and labeling, hybridization conditions and data acquisition and analysis are variables difficult to simultaneously control. A useful tool for monitoring and controlling intra- and inter-experimental variation is Universal Reference RNA (URR), developed with the goal of providing hybridization signal at each microarray probe location (spot). Measuring signal at each spot as the ratio of experimental RNA to reference RNA targets, rather than relying on absolute signal intensity, decreases variability by normalizing signal output in any two-color hybridization experiment. Results Human, mouse and rat URR (UHRR, UMRR and URRR, respectively) were prepared from pools of RNA derived from individual cell lines representing different tissues. A variety of microarrays were used to determine percentage of spots hybridizing with URR and producing signal above a user defined threshold (microarray coverage). Microarray coverage was consistently greater than 80% for all arrays tested. We confirmed that individual cell lines contribute their own unique set of genes to URR, arguing for a pool of RNA from several cell lines as a better configuration for URR as opposed to a single cell line source for URR. Microarray coverage comparing two separately prepared batches each of UHRR, UMRR and URRR were highly correlated (Pearson's correlation coefficients of 0.97). Conclusion Results of this study demonstrate that large quantities of pooled RNA from individual cell lines are reproducibly prepared and possess diverse gene representation. This type of reference provides a standard for reducing variation in microarray experiments and allows more reliable comparison of gene expression data within and between experiments and
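
    The ratio-based readout described above amounts to expressing each spot as a log ratio of the experimental channel to the reference (URR) channel; a small sketch with made-up intensities (the channel assignment and values are assumptions for illustration):

        # Two-color expression values as log2(sample / reference): spot-to-spot and
        # slide-to-slide intensity differences largely cancel in the ratio.
        import numpy as np

        sample_channel    = np.array([1500.0, 230.0, 8700.0, 410.0])   # experimental RNA intensities
        reference_channel = np.array([1200.0, 250.0, 4300.0, 400.0])   # URR intensities

        log_ratio = np.log2(sample_channel / reference_channel)
        print(log_ratio)   # positive values = up-regulated relative to the reference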

  2. Addressable droplet microarrays for single cell protein analysis.

    Science.gov (United States)

    Salehi-Reyhani, Ali; Burgin, Edward; Ces, Oscar; Willison, Keith R; Klug, David R

    2014-11-07

    Addressable droplet microarrays are potentially attractive as a way to achieve miniaturised, reduced volume, high sensitivity analyses without the need to fabricate microfluidic devices or small volume chambers. We report a practical method for producing oil-encapsulated addressable droplet microarrays which can be used for such analyses. To demonstrate their utility, we undertake a series of single cell analyses, to determine the variation in copy number of p53 proteins in cells of a human cancer cell line.

  3. Microarrays for Universal Detection and Identification of Phytoplasmas

    DEFF Research Database (Denmark)

    Nicolaisen, Mogens; Nyskjold, Henriette; Bertaccini, Assunta

    2013-01-01

    Detection and identification of phytoplasmas is a laborious process often involving nested PCR followed by restriction enzyme analysis and fine-resolution gel electrophoresis. To improve throughput, other methods are needed. Microarray technology offers a generic assay that can potentially detect...... and differentiate all types of phytoplasmas in one assay. The present protocol describes a microarray-based method for identification of phytoplasmas to 16Sr group level....

  4. Emerging use of gene expression microarrays in plant physiology.

    Science.gov (United States)

    Wullschleger, Stan D; Difazio, Stephen P

    2003-01-01

    Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  5. Emerging Use of Gene Expression Microarrays in Plant Physiology

    Directory of Open Access Journals (Sweden)

    Stephen P. Difazio

    2006-04-01

    Full Text Available Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  6. Plant-pathogen interactions: what microarray tells about it?

    Science.gov (United States)

    Lodha, T D; Basak, J

    2012-01-01

    Plant defense responses are mediated by elementary regulatory proteins that affect expression of thousands of genes. Over the last decade, microarray technology has played a key role in deciphering the underlying networks of gene regulation in plants that lead to a wide variety of defence responses. Microarray is an important tool to quantify and profile the expression of thousands of genes simultaneously, with two main aims: (1) gene discovery and (2) global expression profiling. Several microarray technologies are currently in use; most include a glass slide platform with spotted cDNA or oligonucleotides. To date, microarray technology has been used in the identification of regulatory genes, end-point defence genes, to understand the signal transduction processes underlying disease resistance and its intimate links to other physiological pathways. Microarray technology can be used for in-depth, simultaneous profiling of host/pathogen genes as the disease progresses from infection to resistance/susceptibility at different developmental stages of the host, which can be done in different environments, for clearer understanding of the processes involved. A thorough knowledge of plant disease resistance using successful combination of microarray and other high throughput techniques, as well as biochemical, genetic, and cell biological experiments is needed for practical application to secure and stabilize yield of many crop plants. This review starts with a brief introduction to microarray technology, followed by the basics of plant-pathogen interaction, the use of DNA microarrays over the last decade to unravel the mysteries of plant-pathogen interaction, and ends with the future prospects of this technology.

  7. Free-space wavelength-multiplexed optical scanner.

    Science.gov (United States)

    Yaqoob, Z; Rizvi, A A; Riza, N A

    2001-12-10

    A wavelength-multiplexed optical scanning scheme is proposed for deflecting a free-space optical beam by selection of the wavelength of the light incident on a wavelength-dispersive optical element. With fast tunable lasers or optical filters, this scanner features microsecond domain scan setting speeds and large-diameter apertures of several centimeters or more for subdegree angular scans. Analysis performed indicates an optimum scan range for a given diffraction order and grating period. Limitations include beam-spreading effects based on the varying scanner aperture sizes and the instantaneous information bandwidth of the data-carrying laser beam.
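
    The wavelength-to-angle mapping of such a scanner is governed by the plane grating equation sin(theta_m) = sin(theta_i) + m*lambda/d; a small sketch of the resulting scan angles (the grating period, incidence angle and tuning range below are assumed values, not the parameters analysed in the record):

        # Deflection angle versus wavelength for a grating-based, wavelength-multiplexed scanner.
        import numpy as np

        d = 1e-3 / 300                                   # grating period for 300 lines/mm, in metres
        m = 1                                            # diffraction order
        theta_i = np.deg2rad(30.0)                       # fixed incidence angle
        wavelengths = np.linspace(1530e-9, 1565e-9, 5)   # tunable-laser sweep across the C-band

        theta_m = np.arcsin(np.sin(theta_i) + m * wavelengths / d)
        for lam, th in zip(wavelengths, theta_m):
            print(f"{lam * 1e9:7.1f} nm -> {np.degrees(th):6.2f} deg")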

  8. Cyclone: A laser scanner for mobile robot navigation

    Science.gov (United States)

    Singh, Sanjiv; West, Jay

    1991-09-01

    Researchers at Carnegie Mellon's Field Robotics Center have designed and implemented a scanning laser rangefinder. The device uses a commercially available time-of-flight ranging instrument that is capable of making up to 7200 measurements per second. The laser beam is reflected by a rotating mirror, producing up to a 360 degree view. Mounted on a robot vehicle, the scanner can be used to detect obstacles in the vehicle's path or to locate the robot on a map. This report discusses the motivation, design, and some applications of the scanner.

  9. Scanner baseliner monitoring and control in high volume manufacturing

    Science.gov (United States)

    Samudrala, Pavan; Chung, Woong Jae; Aung, Nyan; Subramany, Lokesh; Gao, Haiyong; Gomez, Juan-Manuel

    2016-03-01

    We analyze performance of different customized models on baseliner overlay data and demonstrate the reduction in overlay residuals by ~10%. Smart Sampling sets were assessed and compared with the full wafer measurements. We found that performance of the grid can still be maintained by going to one-third of total sampling points, while reducing metrology time by 60%. We also demonstrate the feasibility of achieving time to time matching using scanner fleet manager and thus identify the tool drifts even when the tool monitoring controls are within spec limits. We also explore the scanner feedback constant variation with illumination sources.

  10. Development of the Shimadzu computed tomographic scanner SCT-200N

    International Nuclear Information System (INIS)

    Ishihara, Hiroshi; Yamaoka, Nobuyuki; Saito, Masahiro

    1982-01-01

    The Shimadzu Computed Tomographic Scanner SCT-200N has been developed as an ideal CT scanner for diagnosing the head and spine. Due to the large aperture, moderate scan time and the Zoom Scan Mode, any part of the body can be scanned. High-quality images can be obtained by adopting the precisely stabilized X-ray unit and a densely packed array of 64 detectors. As for its operation, the capability of computed radiography (CR) prior to patient positioning and real-time reconstruction ensure efficient patient throughput. Details of the SCT-200N are described in this paper. (author)

  11. Isocount scintillation scanner with preset statistical data reliability

    International Nuclear Information System (INIS)

    Ikebe, J.; Yamaguchi, H.; Nawa, O.A.

    1975-01-01

    A scintillation detector scans an object such as a live body along horizontal straight scanning lines in such a manner that the scintillation detector is stopped at a scanning point during the time interval T required for counting a predetermined number of N pulses. The rate R_N = N/T is then calculated, and output signal pulses whose number represents the rate R_N, or the corresponding output signal, are used as the recording signal for forming the scintigram. In contrast to the usual scanner, the isocount scanner scans an object stepwise in order to gather data with statistically uniform reliability
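
    Because the count N is preset, every scan point carries the same relative Poisson uncertainty of roughly 1/sqrt(N); a brief simulated sketch of this dwell-until-N scheme (the preset count and the true count rates are invented for illustration):

        # Isocount scanning: dwell at each point until N counts arrive, then record R_N = N / T.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 400                                        # preset count per scan point
        true_rates = np.array([50.0, 200.0, 800.0])    # counts per second at three points

        dwell_times = rng.gamma(shape=N, scale=1.0 / true_rates)   # waiting time for N Poisson events
        measured_rates = N / dwell_times
        print(measured_rates, f"relative uncertainty ~ {1.0 / np.sqrt(N):.1%}")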

  12. Hyperspectral Based Skin Detection for Person of Interest Identification

    Science.gov (United States)

    2015-03-01

    short-wave infrared VIS visible spectrum PCA principal component analysis FCBF Fast Correlation-Based Filter POI person of interest ANN artificial neural...an artificial neural network (ANN) that is created in MATLAB® using the Neural Network Toolbox to identify a POI based on their skin spectral data. A...identifying a POI based on skin spectral data. She identified an optimal feature subset to be used with the hyperspectral data she collected using a

  13. Panchromatic cooperative hyperspectral adaptive wide band deletion repair method

    Science.gov (United States)

    Jiang, Bitao; Shi, Chunyu

    2018-02-01

    In hyperspectral data, the phenomenon of stripe deletion often occurs, which seriously affects the efficiency and accuracy of data analysis and application. Narrow band deletion can be directly repaired by interpolation, but this method is not ideal for wide band deletion repair. In this paper, an adaptive spectral wide band missing restoration method based on panchromatic information is proposed, and the effectiveness of the algorithm is verified by experiments.
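
    For the narrow-band case mentioned as the baseline above, the repair can be as simple as interpolating each pixel spectrum across the missing bands; a minimal sketch (cube size and missing-band indices are made up):

        # Fill a narrow run of deleted bands by linear interpolation along the spectral axis.
        import numpy as np

        rows, cols, bands = 64, 64, 128
        cube = np.random.rand(rows, cols, bands)
        missing = [60, 61]                       # narrow band deletion
        cube[:, :, missing] = np.nan

        band_axis = np.arange(bands)
        good = np.ones(bands, dtype=bool)
        good[missing] = False
        for r in range(rows):
            for c in range(cols):
                spec = cube[r, c, :]
                cube[r, c, ~good] = np.interp(band_axis[~good], band_axis[good], spec[good])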

  14. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  15. Hyperspectral imaging as a diagnostic tool for chronic skin ulcers

    Science.gov (United States)

    Denstedt, Martin; Pukstad, Brita S.; Paluchowski, Lukasz A.; Hernandez-Palacios, Julio E.; Randeberg, Lise L.

    2013-03-01

    The healing process of chronic wounds is complex, and the complete pathogenesis is not known. Diagnosis is currently based on visual inspection, biopsies and collection of samples from the wound surface. These are often time-consuming, expensive and to some extent subjective procedures. Hyperspectral imaging has been shown to be a promising modality for optical diagnostics. The main objective of this study was to identify a suitable technique for reproducible classification of hyperspectral data from a wound and the surrounding tissue. Two statistical classification methods have been tested and compared to the performance of a dermatologist. Hyperspectral images (400-1000 nm) were collected from patients with venous leg ulcers using a pushbroom-scanning camera (VNIR 1600, Norsk Elektro Optikk AS). Wounds were examined regularly over 4-6 weeks. The patients were evaluated by a dermatologist at every appointment. One patient has been selected for presentation in this paper (female, age 53 years). The oxygen saturation of the wound area was determined by wavelength ratio metrics. Spectral angle mapping (SAM) and k-means clustering were used for classification. Automatic extraction of endmember spectra was employed to minimize human interaction. A comparison of the methods shows that k-means clustering is the most stable method over time, and shows the best overlap with the dermatologist's assessment of the wound border. The results are assumed to be affected by the data preprocessing and chosen endmember extraction algorithm. Results indicate that it is possible to develop an automated method for reliable classification of wounds based on hyperspectral data.
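
    Of the two classifiers mentioned, spectral angle mapping is the simpler to sketch: each pixel is assigned to the endmember spectrum with which it forms the smallest angle. The endmember set and image below are random placeholders, not the wound data:

        # Spectral angle mapping (SAM) classification of hyperspectral pixels.
        import numpy as np

        def spectral_angle(pixel, endmember):
            cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        endmembers = np.random.rand(3, 160)    # e.g. wound bed, wound border, healthy skin
        pixels = np.random.rand(1000, 160)     # flattened hyperspectral image

        angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in pixels])
        labels = np.argmin(angles, axis=1)     # class index for every pixel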

  16. Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection

    Science.gov (United States)

    2010-05-12

    for matching biological spectra across a database of hyperspectral pathology slides acquired with different instruments in different conditions, as...generalizing wavelets and similar scaling mechanisms. To be specific, let the bi-Markov...remarkably well. Conventional nearest neighbor search, compared with a diffusion search. The data is a pathology slide; each pixel is a digital

  17. Hyperspectral Imagery for Large Area Survey of Organophosphate Pesticides

    Science.gov (United States)

    2015-03-26

    When the molecule is exposed to infrared radiation, the energy of these frequencies is absorbed. This is what creates the unique spectral...medium. It has been used in the fields of food safety and quality, pharmaceuticals, and medical diagnostics and has potential for non-contact...forensic analysis. Hyperspectral imaging can be used for various parts of the electromagnetic spectrum (e.g. ultraviolet, visible, and infrared) (Edelmen

  18. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....
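
    The general idea — fit the latent-variable model on a small pixel subset and then project the full image — can be sketched as follows; plain random subsampling is used here as a stand-in for the structure-preserving downsampling described in the record, and all sizes are placeholders:

        # Approximate PCA space from a pixel subsample, then project every pixel.
        import numpy as np
        from sklearn.decomposition import PCA

        pixels = np.random.rand(100_000, 100)                  # flattened image: pixels x bands
        idx = np.random.choice(len(pixels), 5_000, replace=False)
        subset = pixels[idx]

        pca = PCA(n_components=10).fit(subset)                 # model built from the subset only
        scores = pca.transform(pixels)                         # fast projection of the full image
        print(scores.shape, pca.explained_variance_ratio_.sum())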

  19. A hyperspectral image data exploration workbench for environmental science applications

    International Nuclear Information System (INIS)

    Woyna, M.A.; Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.

    1994-01-01

    The Hyperspectral Image Data Exploration Workbench (HIDEW) software system has been developed by Argonne National Laboratory to enable analysts at Unix workstations to conveniently access and manipulate high-resolution imagery data for analysis, mapping purposes, and input to environmental modeling applications. HIDEW is fully object-oriented, including the underlying database. This system was developed as an aid to site characterization work and atmospheric research projects

  20. Statistical Modeling of Natural Backgrounds in Hyperspectral LWIR Data

    Science.gov (United States)

    2016-09-06

    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7(6), 2337–2350 (2014). [4] Manolakis, D., Rossacci, M., Zhang, D... IEEE Signal Processing Magazine 31(1), 24–33 (2014). [2] Manolakis, D., Golowich, S., and DiPietro, R., “Long-wave infrared hyperspectral remote ... sensing of chemical clouds: A focus on signal processing approaches,” IEEE Signal Processing Magazine 31(4), 120–141 (2014). [3] Truslow, E.,

  1. A hyperspectral image data exploration workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Woyna, M.A.; Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.

    1994-08-01

    The Hyperspectral Image Data Exploration Workbench (HIDEW) software system has been developed by Argonne National Laboratory to enable analysts at Unix workstations to conveniently access and manipulate high-resolution imagery data for analysis, mapping purposes, and input to environmental modeling applications. HIDEW is fully object-oriented, including the underlying database. This system was developed as an aid to site characterization work and atmospheric research projects.

  2. Protein microarray: sensitive and effective immunodetection for drug residues

    Directory of Open Access Journals (Sweden)

    Zer Cindy

    2010-02-01

    Full Text Available Abstract Background Veterinary drugs such as clenbuterol (CL) and sulfamethazine (SM2) are low molecular weight ( Results The artificial antigens were spotted on microarray slides. Standard concentrations of the compounds were added to compete with the spotted antigens for binding to the antisera to determine the IC50. Our microarray assay showed the IC50 were 39.6 ng/ml for CL and 48.8 ng/ml for SM2, while the traditional competitive indirect-ELISA (ci-ELISA) showed the IC50 were 190.7 ng/ml for CL and 156.7 ng/ml for SM2. We further validated the two methods with CL fortified chicken muscle tissues, and the protein microarray assay showed 90% recovery while the ci-ELISA had 76% recovery rate. When tested with CL-fed chicken muscle tissues, the protein microarray assay had higher sensitivity (0.9 ng/g) than the ci-ELISA (0.1 ng/g) for detection of CL residues. Conclusions The protein microarrays showed 4.5 and 3.5 times lower IC50 than the ci-ELISA detection for CL and SM2, respectively, suggesting that immunodetection of small molecules with protein microarray is a better approach than the traditional ELISA technique.

  3. A comparative analysis of DNA barcode microarray feature size

    Directory of Open Access Journals (Sweden)

    Smith Andrew M

    2009-10-01

    Full Text Available Abstract Background Microarrays are an invaluable tool in many modern genomic studies. It is generally perceived that decreasing the size of microarray features leads to arrays with higher resolution (due to greater feature density), but this increase in resolution can compromise sensitivity. Results We demonstrate that barcode microarrays with smaller features are equally capable of detecting variation in DNA barcode intensity when compared to larger feature sizes within a specific microarray platform. The barcodes used in this study are the well-characterized set derived from the Yeast KnockOut (YKO) collection used for screens of pooled yeast (Saccharomyces cerevisiae) deletion mutants. We treated these pools with the glycosylation inhibitor tunicamycin as a test compound. Three generations of barcode microarrays at 30, 8 and 5 μm feature sizes independently identified the primary target of tunicamycin to be ALG7. Conclusion We show that the data obtained with 5 μm feature size is of comparable quality to the 30 μm size and propose that further shrinking of features could yield barcode microarrays with equal or greater resolving power and, more importantly, higher density.

  4. Assessing Bacterial Interactions Using Carbohydrate-Based Microarrays

    Directory of Open Access Journals (Sweden)

    Andrea Flannery

    2015-12-01

    Full Text Available Carbohydrates play a crucial role in host-microorganism interactions and many host glycoconjugates are receptors or co-receptors for microbial binding. Host glycosylation varies with species and location in the body, and this contributes to species specificity and tropism of commensal and pathogenic bacteria. Additionally, bacterial glycosylation is often the first bacterial molecular species encountered and responded to by the host system. Accordingly, characterising and identifying the exact structures involved in these critical interactions is an important priority in deciphering microbial pathogenesis. Carbohydrate-based microarray platforms have been an underused tool for screening bacterial interactions with specific carbohydrate structures, but they are growing in popularity in recent years. In this review, we discuss carbohydrate-based microarrays that have been profiled with whole bacteria, recombinantly expressed adhesins or serum antibodies. Three main types of carbohydrate-based microarray platform are considered: (i) conventional carbohydrate or glycan microarrays; (ii) whole mucin microarrays; and (iii) microarrays constructed from bacterial polysaccharides or their components. Determining the nature of the interactions between bacteria and host can help clarify the molecular mechanisms of carbohydrate-mediated interactions in microbial pathogenesis, infectious disease and host immune response and may lead to new strategies to boost therapeutic treatments.

  5. AN EXTENDED SPECTRAL–SPATIAL CLASSIFICATION APPROACH FOR HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    D. Akbari

    2017-11-01

    Full Text Available In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminate analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  6. Hyperspectral laser-induced autofluorescence imaging of dental caries

    Science.gov (United States)

    Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-01-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentine and pulp. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Laser induced autofluorescence was shown to be a useful method for early detection of demineralization. The existing studies involved either a single point spectroscopic measurements or imaging at a single spectral band. In the case of spectroscopic measurements, very little or no spatial information is acquired and the measured autofluorescence signal strongly depends on the position and orientation of the probe. On the other hand, single-band spectral imaging can be substantially affected by local spectral artefacts. Such effects can significantly interfere with automated methods for detection of early caries lesions. In contrast, hyperspectral imaging effectively combines the spatial information of imaging methods with the spectral information of spectroscopic methods providing excellent basis for development of robust and reliable algorithms for automated classification and analysis of hard dental tissues. In this paper, we employ 405 nm laser excitation of natural caries lesions. The fluorescence signal is acquired by a state-of-the-art hyperspectral imaging system consisting of a high-resolution acousto-optic tunable filter (AOTF) and a highly sensitive Scientific CMOS camera in the spectral range from 550 nm to 800 nm. The results are compared to the contrast obtained by near-infrared hyperspectral imaging technique employed in the existing studies on early detection of dental caries.

  7. Reconstruction of hyperspectral image using matting model for classification

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
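
    The matting decomposition described above treats each band as alpha * foreground + (1 - alpha) * background, so reconstruction is a per-band recombination of the three components; a minimal sketch with placeholder arrays (not an estimate from a real HSI):

        # Recombine the alpha channel, spectral foreground and spectral background
        # into a reconstructed hyperspectral cube.
        import numpy as np

        rows, cols, bands = 64, 64, 50
        alpha = np.random.rand(rows, cols)               # shared alpha channel
        foreground = np.random.rand(rows, cols, bands)   # spectral foreground
        background = np.random.rand(rows, cols, bands)   # spectral background

        reconstructed = alpha[..., None] * foreground + (1 - alpha[..., None]) * background
        print(reconstructed.shape)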

  8. On the Atmospheric Correction of Antarctic Airborne Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Martin Black

    2014-05-01

    Full Text Available The first airborne hyperspectral campaign in the Antarctic Peninsula region was carried out by the British Antarctic Survey and partners in February 2011. This paper presents an insight into the applicability of currently available radiative transfer modelling and atmospheric correction techniques for processing airborne hyperspectral data in this unique coastal Antarctic environment. Results from the Atmospheric and Topographic Correction version 4 (ATCOR-4) package reveal absolute reflectance values somewhat in line with laboratory measured spectra, with Root Mean Square Error (RMSE) values of 5% in the visible near infrared (0.4–1 µm) and 8% in the shortwave infrared (1–2.5 µm). Residual noise remains present due to the absorption by atmospheric gases and aerosols, but certain parts of the spectrum match laboratory measured features very well. This study demonstrates that commercially available packages for carrying out atmospheric correction are capable of correcting airborne hyperspectral data in the challenging environment present in Antarctica. However, it is anticipated that future results from atmospheric correction could be improved by measuring in situ atmospheric data to generate atmospheric profiles and aerosol models, or with the use of multiple ground targets for calibration and validation.

  9. SIBI: A compact hyperspectral camera in the mid-infrared

    Science.gov (United States)

    Pola Fossi, Armande; Ferrec, Yann; Domel, Roland; Coudrain, Christophe; Guerineau, Nicolas; Roux, Nicolas; D'Almeida, Oscar; Bousquet, Marc; Kling, Emmanuel; Sauer, Hervé

    2015-10-01

    Recent developments in unmanned aerial vehicles have increased the demand for more and more compact optical systems. In order to bring solutions to this demand, several infrared systems are being developed at ONERA such as spectrometers, imaging devices, multispectral and hyperspectral imaging systems. In the field of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated to develop a prototype called SIBI, which stands for "Spectro-Imageur Birefringent Infrarouge". It is a static Fourier transform imaging spectrometer which operates in the mid-wavelength infrared spectral range and uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have not been often used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, contrary to the visible spectral domain where the production of uniaxial crystals like calcite are mastered for various optical applications. In the following, we will present the design and the realization of SIBI as well as the first experimental results.

  10. A COMPARISON OF LIDAR REFLECTANCE AND RADIOMETRICALLY CALIBRATED HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    A. Roncat

    2016-06-01

    Full Text Available In order to retrieve results comparable under different flight parameters and among different flight campaigns, passive remote sensing data such as hyperspectral imagery need to undergo a radiometric calibration. While this calibration, aiming at the derivation of physically meaningful surface attributes such as a reflectance value, is quite cumbersome for passively sensed data and relies on a number of external parameters, the situation is by far less complicated for active remote sensing techniques such as lidar. This fact motivates the investigation of the suitability of full-waveform lidar as a “single-wavelength reflectometer” to support radiometric calibration of hyperspectral imagery. In this paper, this suitability was investigated by means of an airborne hyperspectral imagery campaign and an airborne lidar campaign recorded over the same area. Criteria are given to assess diffuse reflectance behaviour; the distributions of reflectance derived by the two techniques were found to be comparable in four test areas where these criteria were met. This is a promising result, especially in the context of current developments of multi-spectral lidar systems.

  11. Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.

    Science.gov (United States)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan

    2015-11-20

    Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.
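
    The two-waveband algorithms reduce to very simple band arithmetic on the reflectance cube; a sketch of the 552/701 nm ratio image and 557-701 nm subtraction image (cube, wavelength grid and threshold below are placeholders, not the lettuce data):

        # Build ratio and subtraction images from the nearest bands to the stated wavelengths.
        import numpy as np

        wavelengths = np.linspace(400, 1000, 300)            # nm, assumed VNIR sampling
        cube = np.random.rand(100, 100, wavelengths.size)    # reflectance image

        def band(nm):
            return int(np.argmin(np.abs(wavelengths - nm)))  # index of the closest band

        ratio_image = cube[:, :, band(552)] / (cube[:, :, band(701)] + 1e-6)
        diff_image  = cube[:, :, band(557)] - cube[:, :, band(701)]
        discolored_mask = ratio_image > 1.0                  # threshold would be tuned on real data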

  12. [Analysis of related factors of slope plant hyperspectral remote sensing].

    Science.gov (United States)

    Sun, Wei-Qi; Zhao, Yun-Sheng; Tu, Lin-Ling

    2014-09-01

    In the present paper, the slope gradient, aspect, detection zenith angle and plant type were analyzed. To strengthen the theoretical discussion, the research was carried out under laboratory conditions, and a uniform slope was modeled for the slope plants. The experiments show that these factors indeed influence plant hyperspectral remote sensing. With slope gradient as the variable, the leaf reflectance first increases and then decreases as the slope gradient changes from 0° to 36°. When other factors are kept constant and only the detection zenith angle increases from 0° to 60°, the spectral characteristics of slope plants do not change significantly in the visible band but decrease gradually in the near-infrared band. With only the slope aspect changing, the leaf reflectance reaches a maximum when the slope faces the light direction and a minimum when it faces away from the light; furthermore, taking the line of vertical intersection between the incidence plane and the slope as an axis, the reflectance on both sides of the axis is symmetrically distributed. In addition, the spectral curves of different plant types differ considerably from each other, which means that plant type also affects the hyperspectral remote sensing results of slope plants. This research goes beyond the limitations of traditional vertical remote sensing data collection and uses multi-angle, hyperspectral information to analyze the spectral characteristics of slope plants. It therefore has theoretical significance for the development of quantitative remote sensing and application value for plant remote sensing monitoring.

  13. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    Science.gov (United States)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminate analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  14. Rapid hyperspectral image classification to enable autonomous search systems

    Directory of Open Access Journals (Sweden)

    Raj Bridgelal

    2016-11-01

    Full Text Available The emergence of lightweight full-frame hyperspectral cameras is destined to enable autonomous search vehicles in the air, on the ground and in water. Self-contained and long-endurance systems will yield important new applications, for example, in emergency response and the timely identification of environmental hazards. One missing capability is rapid classification of hyperspectral scenes so that search vehicles can immediately take actions to verify potential targets. Onsite verifications minimise false positives and preclude the expense of repeat missions. Verifications will require enhanced image quality, which is achievable by either moving closer to the potential target or by adjusting the optical system. Such a solution, however, is currently impractical for small mobile platforms with finite energy sources. Rapid classifications with current methods demand large computing capacity that will quickly deplete the on-board battery or fuel. To develop the missing capability, the authors propose a low-complexity hyperspectral image classifier that approaches the performance of prevalent classifiers. This research determines that the new method will require at least 19-fold less computing capacity than the prevalent classifier. To assess relative performances, the authors developed a benchmark that compares a statistic of library endmember separability in their respective feature spaces.

  15. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown its good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism under SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel is high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients, besides the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegative constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show which way to incorporate spatial-spectral information efficiently in the regularization framework.
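
    A stripped-down, linear (non-kernel) illustration of the nonnegativity idea: code a test pixel over the training dictionary with nonnegative least squares and assign the class with the smallest per-class reconstruction residual. This is a simplified stand-in for KFCLS (it omits the kernel and the sum-to-one constraint), with placeholder data:

        # Nonnegative coding of a pixel over training spectra, then residual-based classification.
        import numpy as np
        from scipy.optimize import nnls

        n_train, n_bands, n_classes = 60, 100, 3
        D = np.random.rand(n_bands, n_train)          # training spectra as columns
        labels = np.random.randint(0, n_classes, n_train)
        x = np.random.rand(n_bands)                   # test pixel

        coeffs, _ = nnls(D, x)                        # nonnegative coding coefficients
        residuals = [np.linalg.norm(x - D[:, labels == c] @ coeffs[labels == c])
                     for c in range(n_classes)]
        print("predicted class:", int(np.argmin(residuals)))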

  16. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA™. The GPU implementation on an NVIDIA™ GeForce™ GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times a software implementation running on a 3.47 GHz single core Intel™ Xeon™ processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.

  17. SVM-based feature extraction and classification of aflatoxin contaminated corn using fluorescence hyperspectral data

    Science.gov (United States)

    Support Vector Machine (SVM) was used in the Genetic Algorithms (GA) process to select and classify a subset of hyperspectral image bands. The method was applied to fluorescence hyperspectral data for the detection of aflatoxin contamination in Aspergillus flavus infected single corn kernels. In the...

  18. A light-weight hyperspectral mapping system for unmanned aerial vehicles - The first results

    NARCIS (Netherlands)

    Suomalainen, Juha; Anders, Niels; Iqbal, Shahzad; Franke, Jappe; Wenting, Philip; Bartholomeus, Harm; Becker, Rolf; Kooistra, Lammert

    2017-01-01

    Research opportunities using UAV remote sensing techniques are limited by the payload of the platform. Therefore, small UAVs are typically not suitable for hyperspectral imaging due to the weight of the mapping system. In this research, we are developing a light-weight hyperspectral mapping system

  19. Hyperspectral remote sensing analysis of short rotation woody crops grown with controlled nutrient and irrigation treatments

    Science.gov (United States)

    Jungho Im; John R. Jensen; Mark Coleman; Eric. Nelson

    2009-01-01

    Hyperspectral remote sensing research was conducted to document the biophysical and biochemical characteristics of controlled forest plots subjected to various nutrient and irrigation treatments. The experimental plots were located on the Savannah River Site near Aiken, SC. AISA hyperspectral imagery were analysed using three approaches, including: (1) normalized...

  20. Hyperspectral microscope imaging methods to classify gram-positive and gram-negative foodborne pathogenic bacteria

    Science.gov (United States)

    An acousto-optic tunable filter-based hyperspectral microscope imaging method has potential for the rapid identification of foodborne pathogenic bacteria from microcolonies at the single-cell level. We have successfully developed the method to acquire quality hyperspectral microscopic images from variou

  1. Processing OMEGA/Mars Express hyperspectral imagery from radiance-at-sensor to surface reflectance

    NARCIS (Netherlands)

    Bakker, W.H.; Ruitenbeek, F.J.A. van; Werff, H.M.A. van der; Zegers, T.E.; Oosthoek, J.H.P.; Marsh, S.H.; Meer, F.D. van der

    2014-01-01

    OMEGA/Mars Express hyperspectral imagery is an excellent source of data for exploring the surface composition of the planet Mars. Compared to terrestrial hyperspectral imagery, the data are challenging to work with; scene-specific transmission models are lacking, spectral features are shallow making

  2. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    Science.gov (United States)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local binary pattern) and SWLD (Simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in the hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
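
    Of the two descriptors, LBP is the easier to sketch; a minimal example of computing an LBP histogram for one fused band image using scikit-image (the image and parameter choices are placeholders, not those of the paper):

        # Local binary pattern (LBP) texture histogram of a single (fused) band image.
        import numpy as np
        from skimage.feature import local_binary_pattern

        image = (np.random.rand(128, 128) * 255).astype(np.uint8)   # fused spatio-spectral band
        P, R = 8, 1                                                  # 8 neighbours at radius 1
        lbp = local_binary_pattern(image, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        print(hist)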

  3. The feasibility of a scanner-independent technique to estimate organ dose from MDCT scans: Using CTDIvol to account for differences between scanners

    International Nuclear Information System (INIS)

    Turner, Adam C.; Zankl, Maria; DeMarco, John J.; Cagnon, Chris H.; Zhang Di; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2010-01-01

    Purpose: Monte Carlo radiation transport techniques have made it possible to accurately estimate the radiation dose to radiosensitive organs in patient models from scans performed with modern multidetector row computed tomography (MDCT) scanners. However, there is considerable variation in organ doses across scanners, even when similar acquisition conditions are used. The purpose of this study was to investigate the feasibility of a technique to estimate organ doses that would be scanner independent. This was accomplished by assessing the ability of CTDIvol measurements to account for differences in MDCT scanners that lead to organ dose differences. Methods: Monte Carlo simulations of 64-slice MDCT scanners from each of the four major manufacturers were performed. An adult female patient model from the GSF family of voxelized phantoms was used in which all ICRP Publication 103 radiosensitive organs were identified. A 120 kVp, full-body helical scan with a pitch of 1 was simulated for each scanner using similar scan protocols across scanners. From each simulated scan, the radiation dose to each organ was obtained on a per mA s basis (mGy/mA s). In addition, CTDIvol values were obtained from each scanner for the selected scan parameters. Then, to demonstrate the feasibility of generating organ dose estimates from scanner-independent coefficients, the simulated organ dose values resulting from each scanner were normalized by the CTDIvol value for those acquisition conditions. Results: CTDIvol values across scanners showed considerable variation as the coefficient of variation (CoV) across scanners was 34.1%. The simulated patient scans also demonstrated considerable differences in organ dose values, which varied by up to a factor of approximately 2 between some of the scanners. The CoV across scanners for the simulated organ doses ranged from 26.7% (for the adrenals) to 37.7% (for the thyroid), with a mean CoV of 31.5% across all organs. However, when organ doses
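
    The normalization step amounts to forming a CTDIvol-to-organ-dose coefficient for each organ and then scaling it by the CTDIvol of the exam of interest; a tiny numerical sketch (all numbers below are invented for illustration, not values from the study):

        # CTDIvol-normalized organ dose coefficient, applied to a new exam.
        liver_dose_per_mAs = 0.060     # mGy/mAs, simulated organ dose on scanner A (assumed)
        ctdivol_per_mAs    = 0.070     # mGy/mAs, CTDIvol of scanner A for the same settings (assumed)

        h_liver = liver_dose_per_mAs / ctdivol_per_mAs   # approximately scanner-independent coefficient

        exam_ctdivol = 12.0            # mGy, CTDIvol reported for an exam on another scanner (assumed)
        estimated_liver_dose = h_liver * exam_ctdivol
        print(f"estimated liver dose: {estimated_liver_dose:.1f} mGy")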

  4. Hyperspectral and thermal methodologies applied to landslide monitoring

    Science.gov (United States)

    Vellico, Michela; Sterzai, Paolo; Pietrapertosa, Carla; Mora, Paolo; Berti, Matteo; Corsini, Alessandro; Ronchetti, Francesco; Giannini, Luciano; Vaselli, Orlando

    2010-05-01

    Landslide monitoring is a very topical issue. Landslides are a widespread phenomenon over the European territory and have been responsible for huge economic losses. The aim of the WISELAND research project (Integrated Airborne and Wireless Sensor Network systems for Landslide Monitoring), funded by the Italian Government, is to test new monitoring techniques capable of rapidly and successfully characterizing large landslides in fine soils. Two active earthflows in the Northern Italian Apennines have been chosen as test sites and investigated: Silla (Bologna Province) and Valoria (Modena Province). The project involves the use of remote sensing methodologies, with particular focus on the joint use of airborne Lidar, hyperspectral and thermal systems. These innovative techniques give promising results, since they allow detection of the principal landslide components and evaluation of the spatial distribution of parameters relevant to landslide dynamics such as surface water content and roughness. In this paper we focus on the response of the terrain obtained with a hyperspectral system and its integration with the complementary information obtained using a thermal sensor. The potential of a hyperspectral dataset acquired in the VNIR (Visible Near Infrared) field and of the spectral response of the terrain is high, since they give important information both on the soil and on the vegetation status. Several significant indexes can be calculated, such as NDVI, obtained from a band in the Red field and a band in the Infrared field; it gives information on vegetation health and, indirectly, on the water content of soils. This is a key point that bridges hyperspectral and thermal datasets. Thermal infrared data are closely related to soil moisture, one of the most important parameters affecting surface stability in soil slopes. Effective stresses and shear strength in unsaturated soils are directly related to water content, and
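
    NDVI is the only index the abstract names explicitly; as a minimal sketch (band indices and reflectance values are assumptions, not taken from the WISELAND dataset), it can be computed per pixel from the red and near-infrared bands of a hyperspectral cube:

        import numpy as np

        def ndvi(cube, red_band, nir_band, eps=1e-6):
            # Per-pixel NDVI = (NIR - Red) / (NIR + Red) from a (rows, cols, bands) cube.
            red = cube[:, :, red_band].astype(float)
            nir = cube[:, :, nir_band].astype(float)
            return (nir - red) / (nir + red + eps)

        # Toy cube: 4 x 4 pixels, 50 spectral bands (hypothetical indices for ~670 nm and ~800 nm).
        cube = np.random.default_rng(0).uniform(0.0, 1.0, size=(4, 4, 50))
        print(ndvi(cube, red_band=20, nir_band=35))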

  5. The study of active tectonic based on hyperspectral remote sensing

    Science.gov (United States)

    Cui, J.; Zhang, S.; Zhang, J.; Shen, X.; Ding, R.; Xu, S.

    2017-12-01

    As one of the latest technical methods, hyperspectral remote sensing technology has been widely used in every branch of the geosciences. However, the use of hyperspectral remote sensing to study active structures is still a blank area. Hyperspectral remote sensing, with high spectral resolution, continuous spectra, continuous spatial data, low cost, etc., has great potential in the areas of stratum division and fault identification. Blind fault identification in plains and invisible fault discrimination in loess strata are two hot problems in current active fault research. Thus, the study of active faults based on hyperspectral technology has great theoretical significance and practical value. Magnetic susceptibility (MS) records can reflect the rhythmic alternation of the formation. Previous studies have shown that MS correlates with spectral features. In this study, the Emaokou section, located to the northwest of the town of Huairen, in Shanxi Province, was chosen for invisible fault study. We collected data from the Emaokou section, including spectral data, hyperspectral images and MS data. MS models based on spectral features were established and applied to the UHD185 image for MS mapping. The results showed that the MS map corresponded well to the loess sequences. It can recognize strata that cannot be identified by the naked eye. An invisible fault has been found in this section, which is useful for paleoearthquake analysis. Since faults act as conduits for the migration of terrestrial gases, fault zones, especially structurally weak zones such as intersections or bends of faults, may have a different material composition. We took the Xiadian fault for study. Several samples across the fault were collected and measured with an ASD Field Spec 3 spectrometer. A spectral classification method was used for spectral analysis, and we found that the spectra of the fault zone have four special spectral regions (550-580 nm, 600-700 nm, 700-800 nm and 800-900 nm
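
    The record does not specify the form of the MS models, so the following is only a sketch under the assumption of a simple linear regression from reflectance spectra to laboratory-measured MS, applied pixel-wise to a hyperspectral image to produce an MS map (all data are synthetic):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical training set: reflectance spectra (samples x bands) with lab-measured MS.
        spectra = rng.uniform(0.1, 0.6, size=(40, 125))          # UHD185-like band count (assumed)
        ms_lab  = spectra[:, 10] * 3.0 - spectra[:, 80] * 1.5 + rng.normal(0, 0.01, 40)

        # Fit a simple linear MS model (ordinary least squares with an intercept).
        X = np.hstack([spectra, np.ones((spectra.shape[0], 1))])
        coef, *_ = np.linalg.lstsq(X, ms_lab, rcond=None)

        # Apply the model to every pixel of a hyperspectral image to produce an MS map.
        image = rng.uniform(0.1, 0.6, size=(50, 60, 125))         # rows x cols x bands
        pixels = image.reshape(-1, 125)
        ms_map = (np.hstack([pixels, np.ones((pixels.shape[0], 1))]) @ coef).reshape(50, 60)
        print(ms_map.shape)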

  6. Teach Your Computer to Read: Scanners and Optical Character Recognition.

    Science.gov (United States)

    Marsden, Jim

    1993-01-01

    Desktop scanners can be used with a software technology called optical character recognition (OCR) to convert the text on virtually any paper document into an electronic form. OCR offers educators new flexibility in incorporating text into tests, lesson plans, and other materials. (MLF)

  7. Feature-space transformation improves supervised segmentation across scanners

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Achterberg, Hakim C.; de Bruijne, Marleen

    2015-01-01

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions...

  8. Free-space wavelength-multiplexed optical scanner demonstration.

    Science.gov (United States)

    Yaqoob, Zahid; Riza, Nabeel A

    2002-09-10

    Experimental demonstration of a no-moving-parts free-space wavelength-multiplexed optical scanner (W-MOS) is presented. With fast tunable lasers or optical filters and planar wavelength dispersive elements such as diffraction gratings, this microsecond-speed scanner enables large several-centimeter apertures for subdegree angular scans. The proposed W-MOS design incorporates a unique optical amplifier and variable optical attenuator combination that enables the calibration and modulation of the scanner response, leading to any desired scanned laser beam power shaping. The experimental setup uses a tunable laser centered at 1560 nm and a 600-grooves/mm blazed reflection grating to accomplish an angular scan of 12.92 degrees as the source is tuned over an 80-nm bandwidth. The values for calculated maximum optical beam divergence, required wavelength resolution, beam-pointing accuracy, and measured scanner insertion loss are 1.076 mrad, 0.172 nm, 0.06 mrad, and 4.88 dB, respectively.
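
    The scan range follows from the grating equation sin(theta_i) + sin(theta_m) = m*lambda/d. The mount geometry (incidence angle, possible double pass) is not stated in the record, so the sketch below simply evaluates the first-order diffraction angle over the stated 80 nm tuning band at an assumed normal incidence; it is illustrative and will not reproduce the reported 12.92 degrees exactly:

        import numpy as np

        def diffraction_angle_deg(wavelength_nm, groove_density_per_mm=600,
                                  order=1, incidence_deg=0.0):
            # Diffraction angle from sin(theta_i) + sin(theta_m) = m * lambda / d.
            d_nm = 1e6 / groove_density_per_mm          # groove spacing in nm
            s = order * wavelength_nm / d_nm - np.sin(np.radians(incidence_deg))
            return np.degrees(np.arcsin(s))

        # 80 nm tuning band centred at 1560 nm, assumed normal incidence.
        theta_lo = diffraction_angle_deg(1520.0)
        theta_hi = diffraction_angle_deg(1600.0)
        print("scan range: %.2f degrees" % (theta_hi - theta_lo))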

  9. Sea surface temperature mapping using a thermal infrared scanner

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R; Pandya, R; Mathur, K.M.; Charyulu, R; Rao, L.V.G.

    1 metre water column below the sea surface. A thermal infrared scanner developed by the Space Applications Centre (ISRO), Ahmedabad was operated on board R.V. Gaveshani in April/May 1984 for mapping SST over the eastern Arabian Sea. SST values...

  10. The economic potential of CT scanners for hardwood sawmills

    Science.gov (United States)

    Donald G. Hodges; Walter C. Anderson; Charles W. McMillin

    1990-01-01

    Research has demonstrated that a knowledge of internal log defects prior to sawing could improve lumber value yields significantly. This study evaluated the potential economic returns from investments in computerized tomographic (CT) scanners to detect internal defects in hardwood logs at southern sawmills. The results indicate that such investments would be profitable...

  11. Phosphor Scanner For Imaging X-Ray Diffraction

    Science.gov (United States)

    Carter, Daniel C.; Hecht, Diana L.; Witherow, William K.

    1992-01-01

    Improved optoelectronic scanning apparatus generates digitized image of x-ray image recorded in phosphor. Scanning fiber-optic probe supplies laser light stimulating luminescence in areas of phosphor exposed to x rays. Luminescence passes through probe and fiber to integrating sphere and photomultiplier. Sensitivity and resolution exceed previously available scanners. Intended for use in x-ray crystallography, medical radiography, and molecular biology.

  12. Benchmarking Advanced Control Algorithms for a Laser Scanner System

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Ordys, A.W.; Smillie, I.

    1996-01-01

    The paper describes tests performed on the laser scanner system to assess the feasibility of modern control techniques in achieving a required performance in the trajectory-following problem. The two methods tested are QTR H-infinity and Predictive Control. The results are illustrated on a simulation example....

  13. Scanner image methodology (SIM) to measure dimensions of leaves ...

    African Journals Online (AJOL)

    A scanner image methodology was used to determine plant dimensions, such as leaf area, length and width. The values obtained using SIM were compared with those recorded by the LI-COR leaf area meter. Bias, linearity, reproducibility and repeatability (R&R) were evaluated for SIM. Different groups of leaves were ...
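
    The record does not detail the image processing behind SIM; a common minimal approach, sketched below under that assumption, is to threshold the flatbed scan and convert the leaf pixel count to area using the scan resolution (the threshold, resolution and image below are hypothetical):

        import numpy as np

        def leaf_area_cm2(gray_image, dpi, threshold=200):
            # Count dark (leaf) pixels and convert to cm^2 using the scan resolution.
            leaf_pixels = np.count_nonzero(gray_image < threshold)
            pixel_area_cm2 = (2.54 / dpi) ** 2
            return leaf_pixels * pixel_area_cm2

        # Toy example: a 300 dpi scan with a synthetic 600 x 400 pixel "leaf".
        img = np.full((1200, 1200), 255, dtype=np.uint8)
        img[300:900, 400:800] = 50
        print("%.2f cm^2" % leaf_area_cm2(img, dpi=300))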

  14. Algorithms for Coastal-Zone Color-Scanner Data

    Science.gov (United States)

    1986-01-01

    Software for Nimbus-7 Coastal-Zone Color-Scanner (CZCS) derived products consists of a set of scientific algorithms for extracting information from CZCS-gathered data. The software uses the CZCS-generated Calibrated Radiance Temperature (CRT) tape as input and outputs a computer-compatible tape and film product.

  15. Demonstration: A smartphone 3D functional brain scanner

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Stopczynski, Arkadiusz; Larsen, Jakob Eg

    We demonstrate a fully portable 3D real-time functional brain scanner consisting of a wireless 14-channel ‘Neuroheadset‘ (Emotiv EPOC) and a Nokia N900 smartphone. The novelty of our system is the ability to perform real-time functional brain imaging on a smartphone device, including stimulus...

  16. Design of active-neutron fuel rod scanner

    International Nuclear Information System (INIS)

    Griffith, G.W.; Menlove, H.O.

    1996-01-01

    An active-neutron fuel rod scanner has been designed for the assay of fissile materials in mixed oxide fuel rods. A 252 Cf source is located at the center of the scanner very near the through hole for the fuel rods. Spontaneous fission neutrons from the californium are moderated and induce fissions within the passing fuel rod. The rod continues past a combined gamma-ray and neutron shield where delayed gamma rays above 1 MeV are detected. We used the Monte Carlo code MCNP to design the scanner and review optimum materials and geometries. An inhomogeneous beryllium, graphite, and polyethylene moderator has been designed that uses source neutrons much more efficiently than assay systems using polyethylene moderators. Layers of borated polyethylene and tungsten are used to shield the detectors. Large NaI(Tl) detectors were selected to measure the delayed gamma rays. The enrichment zones of a thermal reactor fuel pin could be measured to within 1% counting statistics for practical rod speeds. Applications of the rod scanner include accountability of fissile material for safeguards applications, quality control of the fissile content in a fuel rod, and the verification of reactivity potential for mixed oxide fuels. (orig.)

  17. Advanced spot quality analysis in two-colour microarray experiments

    Directory of Open Access Journals (Sweden)

    Vetter Guillaume

    2008-09-01

    Full Text Available Abstract Background Image analysis of microarrays and, in particular, spot quantification and spot quality control, is one of the most important steps in statistical analysis of microarray data. Recent methods of spot quality control are still at an early stage of development, often leading to underestimation of true positive microarray features and, consequently, to loss of important biological information. Therefore, improving and standardizing the statistical approaches of spot quality control are essential to facilitate the overall analysis of microarray data and subsequent extraction of biological information. Findings We evaluated the performance of two image analysis packages, MAIA and GenePix (GP), using two complementary experimental approaches with a focus on the statistical analysis of spot quality factors. First, we developed control microarrays with a priori known fluorescence ratios to verify the accuracy and precision of the ratio estimation of signal intensities. Next, we developed advanced semi-automatic protocols of spot quality evaluation in MAIA and GP and compared their performance with available facilities of spot quantitative filtering in GP. We evaluated these algorithms for standardised spot quality analysis in a whole-genome microarray experiment assessing well-characterised transcriptional modifications induced by the transcription regulator SNAI1. Using a set of RT-PCR or qRT-PCR validated microarray data, we found that the semi-automatic protocol of spot quality control we developed with MAIA allowed recovering approximately 13% more spots and 38% more differentially expressed genes (at FDR = 5%) than GP with default spot filtering conditions. Conclusion Careful control of spot quality characteristics with advanced spot quality evaluation can significantly increase the amount of confident and accurate data, resulting in more meaningful biological conclusions.

  18. In vivo cellular imaging with microscopes enabled by MEMS scanners

    Science.gov (United States)

    Ra, Hyejun

    High-resolution optical imaging plays an important role in medical diagnosis and biomedical research. Confocal microscopy is a widely used imaging method for obtaining cellular and sub-cellular images of biological tissue in reflectance and fluorescence modes. Its characteristic optical sectioning capability also enables three-dimensional (3-D) image reconstruction. However, its use has mostly been limited to excised tissues due to the requirement of high numerical aperture (NA) lenses for cellular resolution. Microscope miniaturization can enable in vivo imaging to make possible early cancer diagnosis and biological studies in the innate environment. In this dissertation, microscope miniaturization for in vivo cellular imaging is presented. The dual-axes confocal (DAC) architecture overcomes limitations of the conventional single-axis confocal (SAC) architecture to allow for miniaturization with high resolution. A microelectromechanical systems (MEMS) scanner is the central imaging component that is key in miniaturization of the DAC architecture. The design, fabrication, and characterization of the two-dimensional (2-D) MEMS scanner are presented. The gimbaled MEMS scanner is fabricated on a double silicon-on-insulator (SOI) wafer and is actuated by self-aligned vertical electrostatic combdrives. The imaging performance of the MEMS scanner in a DAC configuration is shown in a breadboard microscope setup, where reflectance and fluorescence imaging is demonstrated. Then, the MEMS scanner is integrated into a miniature DAC microscope. The whole imaging system is integrated into a portable unit for research in small animal models of human biology and disease. In vivo 3-D imaging is demonstrated on mouse skin models showing gene transfer and siRNA silencing. The siRNA silencing process is sequentially imaged in one mouse over time.

  19. Method for calibration of an axial tomographic scanner

    International Nuclear Information System (INIS)

    Sparks, R.A.

    1977-01-01

    The method of calibrating an axial tomographic scanner including frame means having an opening therein in which an object to be examined is to be placed, source and detector means mounted on the frame means for directing one or more beams of penetrating radiation through the object from the source to the detector means, and means to rotate the scanner including the source and detector means about the object whereby a plurality of sets of data corresponding to the transmission or absorption by the object of a plurality of beams of penetrating radiation are collected; the calibration method comprising mounting calibration means supporting an adjustable centering member onto the frame means, positioning the adjustable centering member at approximately the center of rotation of the scanner, placing position-sensitive indicator means adjacent the approximately centered member, rotating the scanner and the calibration means mounted thereon at least one time and, if necessary, adjusting the positioning of the centering member until the centering member is coincident with the center of rotation of the scanner as determined by minimum deflection of the position-sensitive indicator means, rotating and translating the source and detector means and determining for each angular orientation of the frame means supporting the source and detector means the central position of each translational scan relative to the centered member and/or if a plurality of detectors are utilized with the detector means for each planar slice of the object being examined, the central position of each translational scan for each detector relative to the centered member

  20. NMR of geophysical drill cores with a mobile Halbach scanner

    International Nuclear Information System (INIS)

    Talnishnikh, E.

    2007-01-01

    This thesis is devoted to mobile NMR with an improved Halbach scanner. This is a lightweight tube-shaped magnet with a larger sensitive volume and a higher magnetic field homogeneity than the previous prototype version. The improved Halbach scanner is used for the analysis of water-saturated drill cores and plugs with diameters up to 60 mm. For the analysis, the standard 1D technique with the CPMG sequence as well as 2D correlation experiments were successfully applied and adapted to study the properties of fluid-saturated sediments. The Halbach scanner was then calibrated for fast non-destructive measurements of porosity, relaxation time distributions, and estimation of permeability. These properties can be calculated directly from the NMR data using the developed methodology; independent measurements of these properties with other methods are not needed. One of the main results of this work is the development of a new NMR on-line core scanner for measurements of porosity in long cylindrical and semi-cylindrical drill cores. Dedicated software was also written to operate the NMR on-line core scanner. The physical background of this work is the study of the influence of diffusion on transverse relaxation. The diffusion effect in the presence of internal gradients in porous media was probed by 1D and 2D experiments. The transverse relaxation time distributions obtained from 1D and from 2D experiments are comparable but differ in fine details. Two new methodologies were developed based on the results of this study. The first is a methodology quantifying the influence of diffusion in the internal gradients of water-saturated sediments on transverse relaxation from 2D correlation experiments. The second is a correction of the permeability estimation from the NMR data that takes into account the influence of diffusion. Furthermore, the PFG NMR technique was used to study restricted diffusion in the same kind of samples. Preliminary results are reported.

  1. NMR of geophysical drill cores with a mobile Halbach scanner

    Energy Technology Data Exchange (ETDEWEB)

    Talnishnikh, E.

    2007-08-21

    This thesis is devoted to mobile NMR with an improved Halbach scanner. This is a lightweight tube-shaped magnet with a larger sensitive volume and a higher magnetic field homogeneity than the previous prototype version. The improved Halbach scanner is used for the analysis of water-saturated drill cores and plugs with diameters up to 60 mm. For the analysis, the standard 1D technique with the CPMG sequence as well as 2D correlation experiments were successfully applied and adapted to study the properties of fluid-saturated sediments. The Halbach scanner was then calibrated for fast non-destructive measurements of porosity, relaxation time distributions, and estimation of permeability. These properties can be calculated directly from the NMR data using the developed methodology; independent measurements of these properties with other methods are not needed. One of the main results of this work is the development of a new NMR on-line core scanner for measurements of porosity in long cylindrical and semi-cylindrical drill cores. Dedicated software was also written to operate the NMR on-line core scanner. The physical background of this work is the study of the influence of diffusion on transverse relaxation. The diffusion effect in the presence of internal gradients in porous media was probed by 1D and 2D experiments. The transverse relaxation time distributions obtained from 1D and from 2D experiments are comparable but differ in fine details. Two new methodologies were developed based on the results of this study. The first is a methodology quantifying the influence of diffusion in the internal gradients of water-saturated sediments on transverse relaxation from 2D correlation experiments. The second is a correction of the permeability estimation from the NMR data that takes into account the influence of diffusion. Furthermore, the PFG NMR technique was used to study restricted diffusion in the same kind of samples. Preliminary results are reported.

  2. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Full Text Available Abstract Background Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of genes which are differentially regulated is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
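
    LACK's exact implementation is not described in the record; the sketch below is a generic permutation test in the same spirit, comparing the frequency of a search term among the regulated genes with its frequency in randomly drawn gene sets of the same size (gene names and annotations are invented):

        import random

        def lexical_bias_pvalue(annotations, regulated_genes, term,
                                n_permutations=10000, seed=0):
            # Empirical p-value that `term` is overrepresented in the annotations of
            # the regulated genes, versus random gene sets of the same size.
            rng = random.Random(seed)
            genes = list(annotations)
            k = len(regulated_genes)

            def count(gene_set):
                return sum(term in annotations[g].lower() for g in gene_set)

            observed = count(regulated_genes)
            hits = sum(count(rng.sample(genes, k)) >= observed
                       for _ in range(n_permutations))
            return (hits + 1) / (n_permutations + 1)

        # Toy annotation table (hypothetical gene names and descriptions).
        annotations = {"geneA": "SPI-2 secretion system", "geneB": "ribosomal protein",
                       "geneC": "SPI-2 effector", "geneD": "iron transport",
                       "geneE": "flagellar motor", "geneF": "SPI-2 chaperone"}
        print(lexical_bias_pvalue(annotations, ["geneA", "geneC", "geneF"], "spi-2"))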

  3. A Fisheye Viewer for microarray-based gene expression data.

    Science.gov (United States)

    Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V

    2006-10-13

    Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface--an electronic table (E-table) that uses fisheye distortion technology. The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.

  4. A fisheye viewer for microarray-based gene expression data

    Directory of Open Access Journals (Sweden)

    Munson Ethan V

    2006-10-01

    Full Text Available Abstract Background Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.

  5. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The proposed spot segmentation algorithm uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray-level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images corrupted with different levels of AWGN noise. The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normal mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9% and 0.7% for high-noise and low-noise microarray images, respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.
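
    PFLICM itself adds typicality and local spatial terms that are not reproduced here; as a hedged baseline, the sketch below implements plain fuzzy c-means on the intensities of a synthetic spot patch and takes the cluster with the brighter centre as foreground:

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, seed=0):
            # Plain fuzzy c-means on a 1-D array of pixel intensities.
            # Returns cluster centres and the membership matrix (n_pixels x c).
            rng = np.random.default_rng(seed)
            u = rng.random((x.size, c))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                um = u ** m
                centres = (um.T @ x) / um.sum(axis=0)
                dist = np.abs(x[:, None] - centres[None, :]) + 1e-9
                u = 1.0 / (dist ** (2 / (m - 1)))
                u /= u.sum(axis=1, keepdims=True)
            return centres, u

        # Toy spot patch: bright foreground disc on a dim background (synthetic).
        patch = np.full((16, 16), 200.0) + np.random.default_rng(1).normal(0, 10, (16, 16))
        yy, xx = np.mgrid[:16, :16]
        patch[(yy - 8) ** 2 + (xx - 8) ** 2 < 25] += 600
        centres, u = fuzzy_c_means(patch.ravel())
        fg_mask = (u.argmax(axis=1) == centres.argmax()).reshape(16, 16)
        print(centres, fg_mask.sum())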

  6. Advanced Data Mining of Leukemia Cells Micro-Arrays

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2009-12-01

    Full Text Available This paper provides continuation and extensions of previous research by Segall and Pierce (2009a), which discussed data mining for micro-array databases of Leukemia cells, primarily with self-organized maps (SOM). As in Segall and Pierce (2009a) and Segall and Pierce (2009b), the results of applying data mining are shown and discussed for the data categories of microarray databases of HL60, Jurkat, NB4 and U937 Leukemia cells that are also described in this article. First, a background section is provided on the work of others pertaining to the applications of data mining to micro-array databases of Leukemia cells and micro-array databases in general. As noted in the predecessor article by Segall and Pierce (2009a), micro-array databases are one of the most popular functional genomics tools in use today. The research in this paper is intended to use advanced data mining technologies for better interpretations and knowledge discovery as generated by the patterns of gene expressions of HL60, Jurkat, NB4 and U937 Leukemia cells. The advanced data mining performed entailed using other data mining tools such as the cubic clustering criterion, variable importance rankings, decision trees, and more detailed examinations of data mining statistics and the study of other self-organized map (SOM) clustering regions of the workspace as generated by SAS Enterprise Miner version 4. Conclusions and future directions of the research are also presented.

  7. Spot detection and image segmentation in DNA microarray data.

    Science.gov (United States)

    Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune

    2005-01-01

    Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
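
    As the review notes, several clustering-based segmentation techniques reduce to one-dimensional k-means on pixel intensities; a minimal sketch of that baseline (with synthetic intensities, not real microarray data) is:

        import numpy as np

        def kmeans_1d(intensities, n_iter=50):
            # Two-class k-means on pixel intensities, as used to split a microarray
            # spot patch into foreground and background.
            x = np.asarray(intensities, dtype=float).ravel()
            centres = np.array([x.min(), x.max()])            # simple initialisation
            for _ in range(n_iter):
                labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
                centres = np.array([x[labels == k].mean() for k in (0, 1)])
            return labels, centres

        # Toy patch: background around 80, spot foreground around 600 (arbitrary units).
        rng = np.random.default_rng(0)
        patch = np.concatenate([rng.normal(80, 15, 200), rng.normal(600, 40, 60)])
        labels, centres = kmeans_1d(patch)
        print(centres, np.bincount(labels))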

  8. Probe Selection for DNA Microarrays using OligoWiz

    DEFF Research Database (Denmark)

    Wernersson, Rasmus; Juncker, Agnieszka; Nielsen, Henrik Bjørn

    2007-01-01

    Nucleotide abundance measurements using DNA microarray technology are possible only if appropriate probes complementary to the target nucleotides can be identified. Here we present a protocol for selecting DNA probes for microarrays using the OligoWiz application. OligoWiz is a client-server application that offers a detailed graphical interface and real-time user interaction on the client side, and massive computer power and a large collection of species databases (400, summer 2007) on the server side. Probes are selected according to five weighted scores: cross-hybridization, deltaT(m), folding ... computer skills and can be executed from any Internet-connected computer. The probe selection procedure for a standard microarray design targeting all yeast transcripts can be completed in 1 h....

  9. Microarray-based screening of heat shock protein inhibitors.

    Science.gov (United States)

    Schax, Emilia; Walter, Johanna-Gabriela; Märzhäuser, Helene; Stahl, Frank; Scheper, Thomas; Agard, David A; Eichner, Simone; Kirschning, Andreas; Zeilinger, Carsten

    2014-06-20

    Based on the importance of heat shock proteins (HSPs) in diseases such as cancer, Alzheimer's disease or malaria, inhibitors of these chaperons are needed. Today's state-of-the-art techniques to identify HSP inhibitors are performed in microplate format, requiring large amounts of proteins and potential inhibitors. In contrast, we have developed a miniaturized protein microarray-based assay to identify novel inhibitors, allowing analysis with 300 pmol of protein. The assay is based on competitive binding of fluorescence-labeled ATP and potential inhibitors to the ATP-binding site of HSP. Therefore, the developed microarray enables the parallel analysis of different ATP-binding proteins on a single microarray. We have demonstrated the possibility of multiplexing by immobilizing full-length human HSP90α and HtpG of Helicobacter pylori on microarrays. Fluorescence-labeled ATP was competed by novel geldanamycin/reblastatin derivatives with IC50 values in the range of 0.5 nM to 4 μM and Z(*)-factors between 0.60 and 0.96. Our results demonstrate the potential of a target-oriented multiplexed protein microarray to identify novel inhibitors for different members of the HSP90 family. Copyright © 2014 Elsevier B.V. All rights reserved.
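
    The record reports Z*-factors without giving the formula used; assuming the standard screening-window coefficient Z' = 1 - 3(sigma_pos + sigma_neg)/|mu_pos - mu_neg|, it can be computed from positive- and negative-control spot intensities as sketched below (the control readings are invented):

        import numpy as np

        def z_prime(positive, negative):
            # Screening-window coefficient:
            # Z' = 1 - 3*(sigma_pos + sigma_neg) / |mu_pos - mu_neg|.
            positive, negative = np.asarray(positive, float), np.asarray(negative, float)
            return 1 - 3 * (positive.std(ddof=1) + negative.std(ddof=1)) / abs(
                positive.mean() - negative.mean())

        # Hypothetical fluorescence intensities from control spots on the microarray.
        pos_ctrl = np.array([9800, 10150, 9900, 10050, 10200])   # labelled ATP only
        neg_ctrl = np.array([1450, 1500, 1380, 1420, 1520])      # fully competed
        print("Z' = %.2f" % z_prime(pos_ctrl, neg_ctrl))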

  10. Defense Commissaries: Issues Related to the Sale of Electronic Scanner Data

    National Research Council Canada - National Science Library

    1998-01-01

    In response to your request that we review DeCA'S sale of scanner data and its implementation of category management, this report identifies DeCA'S total revenue from selling scanner data and compares license revenues...

  11. A new generation of PET scanners for small animal studies

    International Nuclear Information System (INIS)

    Hegyesi, G.; Imrek, J.; Kalinka, G.; Molnar, J.; Novak, D.; Valastyan, I.; Balkay, L.; Emri, M.; Kis, S.; Tron, L.

    2008-01-01

    Complete text of publication follows. Research on small animal PET scanners has been a hot topic in recent years. These devices are used in the preclinical phases of drug tests and during the development of new radiopharmaceuticals. They also provide a cost efficient way to test new materials, new design concepts and new technologies that later can be used to build more efficient human medical imaging devices. The development of a PET scanner requires expertise on different fields, therefore a consortium was formed that brought together Hungarian academic and industrial partners: the Nuclear Research Institute (which has experience in the development of nuclear detectors and data acquisition systems), the PET Center of the University of Debrecen (which has clinical experience in the application of nuclear imaging devices and background in image processing software), Mediso Ltd. (which has been developing, manufacturing, selling and servicing medical imaging devices since 1990) and other academic partners. This consortium has been working together since 2003: the knowledge base acquired during the development of our small animal PET scanners (miniPET-I and miniPET-II) is now being utilized to build a commercial multimodal human PET scanner. The operation of a PET scanner is based on the simultaneous detection ('coincidence') of two gamma photons originating from a positron annihilation. In traditional PET scanners coincidence is detected by a central unit during the measurement. In our system there is no such central module: all detected single gamma events are recorded (list mode data acquisition), and the list of events are processed using a computer cluster (built from PCs). The usage of independent detector modules and commercial components reduce both development and maintenance costs. Also, this mode of data acquisition is more suitable for development purposes, since once the data is collected and stored it can be used many times to test different signal

  12. Digital Data Matrix Scanner Development At Marshall Space Flight Center

    Science.gov (United States)

    2004-01-01

    Research at NASA's Marshall Space Flight Center has resulted in a system for reading hidden identification codes using a hand-held magnetic scanner. It's an invention that could help businesses improve inventory management, enhance safety, improve security, and aid in recall efforts if defects are discovered. Two-dimensional Data Matrix symbols consisting of letters and numbers permanently etched on items for identification and resembling a small checkerboard pattern are more efficient and reliable than traditional bar codes, and can store up to 100 times more information. A team led by Fred Schramm of the Marshall Center's Technology Transfer Department, in partnership with PRI,Torrance, California, has developed a hand-held device that can read this special type of coded symbols, even if covered by up to six layers of paint. Before this new technology was available, matrix symbols were read with optical scanners, and only if the codes were visible. This latest improvement in digital Data Matrix technologies offers greater flexibility for businesses and industries already using the marking system. Paint, inks, and pastes containing magnetic properties are applied in matrix symbol patterns to objects with two-dimensional codes, and the codes are read by a magnetic scanner, even after being covered with paint or other coatings. The ability to read hidden matrix symbols promises a wide range of benefits in a number of fields, including airlines, electronics, healthcare, and the automotive industry. Many industries would like to hide information on a part, so it can be read only by the party who put it there. For instance, the automotive industry uses direct parts marking for inventory control, but for aesthetic purposes the marks often need to be invisible. Symbols have been applied to a variety of materials, including metal, plastic, glass, paper, fabric and foam, on everything from electronic parts to pharmaceuticals to livestock. The portability of the hand

  13. A dedicated tool for PET scanner simulations using FLUKA

    International Nuclear Information System (INIS)

    Ortega, P.G.; Boehlen, T.T.; Cerutti, F.; Chin, M.P.W.; Ferrari, A.; Mancini, C.; Vlachoudis, V.; Mairani, A.; Sala, Paola R.

    2013-06-01

    Positron emission tomography (PET) is a well-established medical imaging technique. It is based on the detection of pairs of annihilation gamma rays from a beta+-emitting radionuclide, usually inoculated in the body via a biologically active molecule. Apart from its wide-spread use for clinical diagnosis, new applications are proposed. This includes notably the usage of PET for treatment monitoring of radiation therapy with protons and ions. PET is currently the only available technique for non-invasive monitoring of ion beam dose delivery, which was tested in several clinical pilot studies. For hadrontherapy, the distribution of positron emitters, produced by the ion beam, can be analyzed to verify the correct treatment delivery. The adaptation of previous PET scanners to new environments and the necessity of more precise diagnostics by better image quality triggered the development of new PET scanner designs. The use of Monte Carlo (MC) codes is essential in the early stages of the scanner design to simulate the transport of particles and nuclear interactions from therapeutic ion beams or radioisotopes and to predict radiation fields in tissues and radiation emerging from the patient. In particular, range verification using PET is based on the comparison of detected and simulated activity distributions. The accuracy of the MC code for the relevant physics processes is obviously essential for such applications. In this work we present new developments of the physics models with importance for PET monitoring and integrated tools for PET scanner simulations for FLUKA, a fully-integrated MC particle-transport code, which is widely used for an extended range of applications (accelerator shielding, detector and target design, calorimetry, activation, dosimetry, medical physics, radiobiology, ...). The developed tools include a PET scanner geometry builder and a dedicated scoring routine for coincident event determination. The geometry builder allows the efficient

  14. DNA microarray data and contextual analysis of correlation graphs

    Directory of Open Access Journals (Sweden)

    Hingamp Pascal

    2003-04-01

    Full Text Available Abstract Background DNA microarrays are used to produce large sets of expression measurements from which specific biological information is sought. Their analysis requires efficient and reliable algorithms for dimensional reduction, classification and annotation. Results We study networks of co-expressed genes obtained from DNA microarray experiments. The mathematical concept of curvature on graphs is used to group genes or samples into clusters to which relevant gene or sample annotations are automatically assigned. Application to publicly available yeast and human lymphoma data demonstrates the reliability of the method in spite of its simplicity, especially with respect to the small number of parameters involved. Conclusions We provide a method for automatically determining relevant gene clusters among the many genes monitored with microarrays. The automatic annotations and the graphical interface improve the readability of the data. A C++ implementation, called Trixy, is available from http://tagc.univ-mrs.fr/bioinformatics/trixy.html.

  15. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate sub-arrays in a microarray image and find the co-ordinates of spots within each sub-array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper a fully automatic gridding method is used, which enhances spot intensity in the preprocessing step using a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method’s flexibility and accuracy in finding the spot co-ordinates for different database images.
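
    The record describes gridding from the horizontal and vertical intensity profiles of the image; a minimal sketch of that idea (synthetic spot block, hypothetical pitch, and a much simpler rule than the proposed refinement technique) is:

        import numpy as np

        def grid_lines(image, axis, expected_spacing):
            # Candidate grid-line positions: local minima of the summed intensity
            # profile, kept at least one spot pitch apart.
            profile = image.sum(axis=axis).astype(float)
            half = expected_spacing // 2
            lines = []
            for i in range(half, profile.size - half):
                window = profile[i - half:i + half + 1]
                if profile[i] == window.min():
                    if not lines or i - lines[-1] >= expected_spacing:
                        lines.append(i)
            return lines

        # Synthetic 4 x 4 spot block with a 20-pixel pitch (hypothetical geometry).
        img = np.zeros((80, 80))
        for r in range(4):
            for c in range(4):
                img[r * 20 + 6:r * 20 + 14, c * 20 + 6:c * 20 + 14] = 1000.0
        rows = grid_lines(img, axis=1, expected_spacing=20)   # lines between spot rows
        cols = grid_lines(img, axis=0, expected_spacing=20)   # lines between spot columns
        print(rows, cols)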

  16. A Versatile Microarray Platform for Capturing Rare Cells

    Science.gov (United States)

    Brinkmann, Falko; Hirtz, Michael; Haller, Anna; Gorges, Tobias M.; Vellekoop, Michael J.; Riethdorf, Sabine; Müller, Volkmar; Pantel, Klaus; Fuchs, Harald

    2015-10-01

    Analyses of rare events occurring at extremely low frequencies in body fluids are still challenging. We established a versatile microarray-based platform able to capture single target cells from large background populations. As a use case we chose the challenging application of detecting circulating tumor cells (CTCs) - about one cell in a billion normal blood cells. After incubation with an antibody cocktail, targeted cells are extracted on a microarray in a microfluidic chip. The accessibility of our platform allows for subsequent recovery of targets for further analysis. The microarray facilitates exclusion of false-positive capture events by co-localization, allowing for detection without fluorescent labelling. Analyzing blood samples from cancer patients with our platform matched and partly exceeded gold-standard performance, demonstrating feasibility for clinical application. The clinical researcher's free choice of antibody cocktail, without any need for altered chip manufacturing or incubation protocols, allows virtually arbitrary targeting of capture species and therefore widespread applications in the biomedical sciences.

  17. A Cost Effective Multi-Spectral Scanner for Natural Gas Detection

    Energy Technology Data Exchange (ETDEWEB)

    Yudaya Sivathanu; Jongmook Lim; Vinoo Narayanan; Seonghyeon Park

    2005-12-07

    The objective of this project is to design, fabricate and demonstrate a cost-effective multi-spectral scanner for natural gas leak detection in transmission and distribution pipelines. During the first year of the project, a laboratory version of the multi-spectral scanner was designed, fabricated, and tested at EnUrga Inc. The multi-spectral scanner was also evaluated using a blind Department of Energy study at the Rocky Mountain Oilfield Testing Center. The performance of the scanner was inconsistent during the blind study. However, most of the leaks were outside the view of the multi-spectral scanner that was developed during the first year of the project. Therefore, a definite evaluation of the capability of the scanner was not obtained. Despite these results, a sufficient number of plumes was detected, fully confirming the feasibility of the multi-spectral scanner. During the second year, the optical design of the scanner was changed to improve the sensitivity of the system. Laboratory tests show that the system can reliably detect small leaks (20 SCFH) at 30 to 50 feet. A prototype scanner was built and evaluated during the second year of the project. Only laboratory evaluations were completed during the second year. The laboratory evaluations show the feasibility of using the scanner to detect natural gas pipeline leaks. Further field evaluations and optimization of the scanner are required before commercialization of the scanner can be initiated.

  18. Addressing Spatial Variability of Surface-Layer Wind with Long-Range WindScanners

    DEFF Research Database (Denmark)

    Berg, Jacob; Vasiljevic, Nikola; Kelly, Mark C.

    2015-01-01

    of the WindScanner data is high, although the fidelity of the estimated vertical velocity component is significantly limited by the elevation angles of the scanner heads. The system of long-range WindScanners presented in this paper is close to being fully operational, with the pilot study herein serving...

  19. AMDA: an R package for the automated microarray data analysis

    Directory of Open Access Journals (Sweden)

    Foti Maria

    2006-07-01

    Full Text Available Abstract Background Microarrays are routinely used to assess mRNA transcript levels on a genome-wide scale. Large amounts of microarray data are now available in several databases, and new experiments are constantly being performed. In spite of this fact, few and limited tools exist for quickly and easily analyzing the results. Microarray analysis can be challenging for researchers without the necessary training and it can be time-consuming for service providers with many users. Results To address these problems we have developed an automated microarray data analysis (AMDA) software, which provides scientists with an easy and integrated system for the analysis of Affymetrix microarray experiments. AMDA is free and it is available as an R package. It is based on the Bioconductor project, which provides a number of powerful bioinformatics and microarray analysis tools. This automated pipeline integrates different functions available in the R and Bioconductor projects with newly developed functions. AMDA covers all of the steps, performing a full data analysis, including image analysis, quality controls, normalization, selection of differentially expressed genes, clustering, correspondence analysis and functional evaluation. Finally, a LaTeX document is dynamically generated depending on the analysis steps performed. The generated report contains comments and analysis results as well as references to several files for a deeper investigation. Conclusion AMDA is freely available as an R package under the GPL license. The package as well as an example analysis report can be downloaded in the Services/Bioinformatics section of the Genopolis website http://www.genopolis.it/

  20. Dimensionality Reduction for Hyperspectral Data Based on Class-Aware Tensor Neighborhood Graph and Patch Alignment.

    Science.gov (United States)

    Gao, Yang; Wang, Xuesong; Cheng, Yuhu; Wang, Z Jane

    2015-08-01

    To take full advantage of hyperspectral information, to avoid data redundancy and to address the curse of dimensionality, dimensionality reduction (DR) becomes particularly important for analyzing hyperspectral data. Exploring the tensor characteristic of hyperspectral data, a DR algorithm based on a class-aware tensor neighborhood graph and patch alignment is proposed here. First, hyperspectral data are represented in tensor form through a window field to keep the spatial information of each pixel. Second, using a tensor distance criterion, a class-aware tensor neighborhood graph containing discriminating information is obtained. In the third step, employing the patch alignment framework extended to the tensor space, we can obtain globally optimal spectral-spatial information. Finally, the solution of the tensor subspace is calculated using an iterative method and low-dimensional projection matrices for hyperspectral data are obtained accordingly. The proposed method effectively explores the spectral and spatial information in hyperspectral data simultaneously. Experimental results on three real hyperspectral datasets show that, compared with some popular vector- and tensor-based DR algorithms, the proposed method can yield better performance with fewer tensor training samples required.
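
    The full algorithm (tensor distance, class-aware graph, patch alignment) is beyond a short sketch; the fragment below only illustrates the first step described above, representing each pixel as a small window-field tensor so that spatial context is retained (cube dimensions are arbitrary):

        import numpy as np

        def tensor_samples(cube, window=5):
            # Represent each pixel of a (rows, cols, bands) hyperspectral cube as a
            # (window x window x bands) tensor patch, keeping its spatial context.
            pad = window // 2
            padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
            rows, cols, bands = cube.shape
            out = np.empty((rows, cols, window, window, bands), dtype=cube.dtype)
            for i in range(rows):
                for j in range(cols):
                    out[i, j] = padded[i:i + window, j:j + window, :]
            return out

        # Toy cube: 10 x 12 pixels, 30 bands.
        cube = np.random.default_rng(0).random((10, 12, 30))
        patches = tensor_samples(cube, window=5)
        print(patches.shape)   # (10, 12, 5, 5, 30)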

  1. Tree Classification with Fused Mobile Laser Scanning and Hyperspectral Data

    Science.gov (United States)

    Puttonen, Eetu; Jaakkola, Anttoni; Litkey, Paula; Hyyppä, Juha

    2011-01-01

    Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification succeeded with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile collected fused data outperformed single-sensor data in both classification tests and by a significant margin. PMID:22163894

  2. The software and algorithms for hyperspectral data processing

    Science.gov (United States)

    Shyrayeva, Anhelina; Martinov, Anton; Ivanov, Victor; Katkovsky, Leonid

    2017-04-01

    Hyperspectral remote sensing techniques are widely used for collecting and processing information about the Earth's surface objects. Hyperspectral data are combined to form a three-dimensional (x, y, λ) data cube. The Department of Aerospace Research of the Institute of Applied Physical Problems of the Belarusian State University presents a general model of the software for hyperspectral image data analysis and processing. The software runs in a Windows XP/7/8/8.1/10 environment on any personal computer. The complex has been written in C++ using the Qt framework and OpenGL for graphical data visualization. The software has a flexible structure that consists of a set of independent plugins. Each plugin was compiled as a Qt plugin and represents a Windows dynamic library (dll). Plugins can be categorized in terms of data reading types, data visualization (3D, 2D, 1D) and data processing. The software has various built-in functions for statistical and mathematical analysis and signal processing, such as moving-average smoothing, the Savitzky-Golay smoothing technique, RGB correction, histogram transformation, and atmospheric correction. The software provides two of the authors' engineering techniques for the solution of the atmospheric correction problem: an iterative method for refining the spectral albedo parameters using Libradtran and an analytical least-squares method. The main advantages of these methods are a high processing rate (several minutes for 1 GB of data) and a low relative error in albedo retrieval (less than 15%). The software also supports work with spectral libraries, region of interest (ROI) selection, and spectral analysis such as cluster-type image classification and automatic comparison of hypercube spectra with similar ones from spectral libraries by a similarity criterion, and vice versa. The software deals with different kinds of spectral information in order to identify and distinguish spectrally unique materials. Also, the following advantages
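
    As an illustration of one of the listed signal-processing functions, the sketch below applies Savitzky-Golay smoothing to a synthetic reflectance spectrum using SciPy's implementation (not the package's own C++ code); NumPy and SciPy are assumed to be installed:

        import numpy as np
        from scipy.signal import savgol_filter

        # Noisy reflectance spectrum over 400-1000 nm (synthetic, 301 samples).
        wavelengths = np.linspace(400, 1000, 301)
        rng = np.random.default_rng(0)
        spectrum = 0.3 + 0.2 * np.exp(-((wavelengths - 750) / 60) ** 2) + rng.normal(0, 0.01, 301)

        # Savitzky-Golay smoothing: 11-point window, 3rd-order polynomial.
        smoothed = savgol_filter(spectrum, window_length=11, polyorder=3)

        # Simple moving average for comparison (5-point window).
        moving_avg = np.convolve(spectrum, np.ones(5) / 5, mode="same")
        print(smoothed[:5], moving_avg[:5])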

  3. Online hyperspectral imaging system for evaluating quality of agricultural products

    Science.gov (United States)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk

    2017-06-01

    The consumption of fresh-cut agricultural produce in Korea has been growing. The browning of fresh-cut vegetables that occurs during storage and foreign substances such as worms and slugs are some of the main causes of consumers' concerns with respect to safety and hygiene. The purpose of this study is to develop an on-line system for evaluating the quality of agricultural products using hyperspectral imaging technology. An online evaluation system with a single visible-near-infrared hyperspectral camera in the range of 400 nm to 1000 nm was designed to assess the quality of both surfaces of agricultural products such as fresh-cut lettuce. Algorithms to detect browning surfaces were developed for this system. The optimal wavebands for discriminating between browning and sound lettuce, as well as between browning lettuce and the conveyor belt, were investigated using correlation analysis and the one-way analysis of variance method. The imaging algorithms to discriminate the browning lettuces were developed using the optimal wavebands. The ratio image (RI) algorithm of the 533 nm and 697 nm images (RI533/697) for abaxial surface lettuce and the ratio image algorithm (RI533/697) and subtraction image (SI) algorithm (SI538-697) for adaxial surface lettuce had the highest classification accuracies. The classification accuracy of browning and sound lettuce was 100.0% and above 96.0%, respectively, for both surfaces. The overall results show that the online hyperspectral imaging system could potentially be used to assess the quality of agricultural products.
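
    The band ratio and subtraction images named above are simple per-pixel operations; the sketch below computes RI533/697 and SI538-697 from a hypothetical calibrated cube and applies illustrative thresholds (the thresholds are not the ones derived in the study):

        import numpy as np

        def band_image(cube, wavelengths, target_nm):
            # Single-band image closest to target_nm from a (rows, cols, bands) cube
            # with a matching wavelength vector.
            idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
            return cube[:, :, idx].astype(float)

        def browning_masks(cube, wavelengths, ratio_threshold=1.1, diff_threshold=0.05):
            # Ratio image RI = I533 / I697 and subtraction image SI = I538 - I697.
            # The thresholds here are illustrative, not the values from the study.
            ri = band_image(cube, wavelengths, 533) / (band_image(cube, wavelengths, 697) + 1e-6)
            si = band_image(cube, wavelengths, 538) - band_image(cube, wavelengths, 697)
            return ri > ratio_threshold, si > diff_threshold

        # Toy cube: 20 x 20 pixels, 121 bands from 400 to 1000 nm.
        wl = np.linspace(400, 1000, 121)
        cube = np.random.default_rng(0).uniform(0.2, 0.4, size=(20, 20, 121))
        ri_mask, si_mask = browning_masks(cube, wl)
        print(ri_mask.sum(), si_mask.sum())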

  4. Hyperspectral image segmentation using a cooperative nonparametric approach

    Science.gov (United States)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach lies firstly in its local adaptation to the type of regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results produced by the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.

  5. Label and Label-Free Detection Techniques for Protein Microarrays

    Directory of Open Access Journals (Sweden)

    Amir Syahir

    2015-04-01

    Full Text Available Protein microarray technology has gone through numerous innovative developments in recent decades. In this review, we focus on the development of protein detection methods embedded in the technology. Early microarrays utilized useful chromophores and versatile biochemical techniques dominated by high-throughput illumination. Recently, the realization of label-free techniques has been greatly advanced by the combination of knowledge in material sciences, computational design and nanofabrication. These rapidly advancing techniques aim to provide data without the intervention of label molecules. Here, we present a brief overview of this remarkable innovation from the perspectives of label and label-free techniques in transducing nano‑biological events.

  6. Advanced Data Mining of Leukemia Cells Micro-Arrays

    OpenAIRE

    Richard S. Segall; Ryan M. Pierce

    2009-01-01

    This paper provides continuation and extensions of previous research by Segall and Pierce (2009a) that discussed data mining for micro-array databases of Leukemia cells, primarily using self-organized maps (SOM). As in Segall and Pierce (2009a) and Segall and Pierce (2009b), the results of applying data mining are shown and discussed for the data categories of microarray databases of HL60, Jurkat, NB4 and U937 Leukemia cells that are also described in this article. First, a background section is pro...

  7. A new crystal whole-body scanner for positron emitters

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.; Kubesch, R.; Lorenz, W.J.; Woerner, P.

    1980-01-01

    A multicrystal whole-body scanner for positron emitters has been constructed. The annihilation quanta are measured in two opposing detector banks. Each detector bank consists of 64 NaI crystals of 1.5'' diameter x 3'' length. Directly opposing single detectors are in coincidence. The patient moves linearly between the stationary transverse detector banks. The scanning area of the system is 64 x 192 cm². The spatial resolution is 2 cm at a sampling distance of 1 cm. The sensitivity is 6400 counts/s for a pure positron flood source with 1 μCi/cm². The system is controlled by a microcomputer (DEC LSI-11). The scintigrams are shown on a display. Absolute activities can be calculated by mathematical comparison of consecutive emission and transmission scans. The design of the positron scanner and its capabilities are described. Experimental and initial clinical results are presented. (author)

  8. Beam Dumping Ghost Signals in Electric Sweep Scanners

    International Nuclear Information System (INIS)

    Stockli, M.P.; Leitner, M.; Keller, R.; Moehs, D.P.; Welton, R.F.

    2005-01-01

    Over the last 20 years many labs started to use Allison scanners to measure low-energy ion beam emittances. We show that large trajectory angles produce ghost signals due to the impact of the beamlet on the electric deflection plates. The strength of the ghost signal is proportional to the amount of beam entering the scanner. Depending on the ions and their velocity, ghost signals can have the opposite polarity as the main beam signals or the same polarity. These ghost signals are easily overlooked because they partly overlap the real signals, they are mostly below the 1% level, and they are often hidden in the noise. However, they cause significant errors in emittance estimates because they are associated with large trajectory angles. The strength of ghost signals, and the associated errors, can be drastically reduced with a simple modification of the deflection plates

  9. Beam dumping ghost signals in electric sweep scanners

    International Nuclear Information System (INIS)

    Stockli, M.P.; SNS Project, Oak Ridge; Tennessee U.; Leitner, M.; LBL, Berkeley; Moehs, D.P.; Keller, R.; LBL, Berkeley; Welton, R.F.; SNS Project, Oak Ridge

    2004-01-01

    Over the last 20 years many labs started to use Allison scanners to measure low-energy ion beam emittances. We show that large trajectory angles produce ghost signals due to the impact of the beamlet on the electric deflection plates. The strength of the ghost signal is proportional to the amount of beam entering the scanner. Depending on the ions and their velocity, ghost signals can have the opposite polarity as the main beam signals or the same polarity. These ghost signals are easily overlooked because they partly overlap the real signals, they are mostly below the 1% level, and they are often hidden in the noise. However, they cause significant errors in emittance estimates because they are associated with large trajectory angles. The strength of ghost signals, and the associated errors, can be drastically reduced with a simple modification of the deflection plates

  10. Wire Scanner Beam Profile Measurements: LANSCE Facility Beam Development

    International Nuclear Information System (INIS)

    Gilpatrick, John D.; Batygin, Yuri K.; Gonzales, Fermin; Gruchalla, Michael E.; Kutac, Vincent G.; Martinez, Derwin; Sedillo, James Daniel; Pillai, Chandra; Rodriguez Esparza, Sergio; Smith, Brian G.

    2012-01-01

    The Los Alamos Neutron Science Center (LANSCE) is replacing Wire Scanner (WS) beam profile measurement systems. Three beam development tests have taken place to test the new wire scanners under beam conditions. These beam development tests have integrated the WS actuator, cable plant, electronics processors and associated software and have used H⁻ beams of different beam energy and current conditions. In addition, the WS measurement-system beam tests verified actuator control systems for minimum profile bin repeatability and speed, checked for actuator backlash and positional stability, tested the replacement of simple broadband potentiometers with narrow band resolvers, and tested resolver use with National Instruments Compact Reconfigurable Input and Output (cRIO) Virtual Instrumentation. These beam tests also have verified how trans-impedance amplifiers react with various types of beam line background noise and how noise currents were not generated. This paper will describe these beam development tests and show some resulting data.

  11. Wire Scanner Beam Profile Measurements: LANSCE Facility Beam Development

    Energy Technology Data Exchange (ETDEWEB)

    Gilpatrick, John D. [Los Alamos National Laboratory; Batygin, Yuri K. [Los Alamos National Laboratory; Gonzales, Fermin [Los Alamos National Laboratory; Gruchalla, Michael E. [Los Alamos National Laboratory; Kutac, Vincent G. [Los Alamos National Laboratory; Martinez, Derwin [Los Alamos National Laboratory; Sedillo, James Daniel [Los Alamos National Laboratory; Pillai, Chandra [Los Alamos National Laboratory; Rodriguez Esparza, Sergio [Los Alamos National Laboratory; Smith, Brian G. [Los Alamos National Laboratory

    2012-05-15

    The Los Alamos Neutron Science Center (LANSCE) is replacing Wire Scanner (WS) beam profile measurement systems. Three beam development tests have taken place to test the new wire scanners under beam conditions. These beam development tests have integrated the WS actuator, cable plant, electronics processors and associated software and have used H⁻ beams of different beam energy and current conditions. In addition, the WS measurement-system beam tests verified actuator control systems for minimum profile bin repeatability and speed, checked for actuator backlash and positional stability, tested the replacement of simple broadband potentiometers with narrow band resolvers, and tested resolver use with National Instruments Compact Reconfigurable Input and Output (cRIO) Virtual Instrumentation. These beam tests also have verified how trans-impedance amplifiers react with various types of beam line background noise and how noise currents were not generated. This paper will describe these beam development tests and show some resulting data.

  12. Linac beam core modeling from wire-scanner data

    International Nuclear Information System (INIS)

    Law, A.G.

    1977-08-01

    This study introduces mathematical modeling of accelerator beams from data collected by wire scanners. Details about a beam core D(x,x',y,y') are examined in several situations: (a) for a discretization of the projection into xy-space, a maximum-entropy solution and a minimum-norm solution are developed and discussed, (b) for undiscretized xy-subspace, a two-dimensional Gaussian approximation D(x,·,y,·) = a·exp[α(x−x₀)² + β(x−x₀)(y−y₀) + γ(y−y₀)²] is obtained by least squares, and (c) for four-dimensional space, the fit of a single Gaussian to data from a succession of wire scanners is investigated
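
    A minimal sketch of step (b) above, fitting the two-dimensional Gaussian form a·exp[α(x−x₀)² + β(x−x₀)(y−y₀) + γ(y−y₀)²] to gridded density data by nonlinear least squares with scipy.optimize.curve_fit. The grid and data values here are synthetic placeholders standing in for densities reconstructed from wire-scanner projections.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(xy, a, alpha, beta, gamma, x0, y0):
    """a * exp(alpha*(x-x0)**2 + beta*(x-x0)*(y-y0) + gamma*(y-y0)**2).

    For a decaying (physical) density the quadratic form must be negative
    definite, i.e. alpha < 0, gamma < 0 and 4*alpha*gamma > beta**2.
    """
    x, y = xy
    dx, dy = x - x0, y - y0
    return a * np.exp(alpha * dx**2 + beta * dx * dy + gamma * dy**2)

# Synthetic stand-in for a projected beam-core density on an (x, y) grid.
x, y = np.meshgrid(np.linspace(-5, 5, 41), np.linspace(-5, 5, 41))
true = gaussian2d((x, y), 1.0, -0.20, 0.05, -0.15, 0.3, -0.2)
data = true + 0.01 * np.random.default_rng(0).normal(size=true.shape)

p0 = (data.max(), -0.1, 0.0, -0.1, 0.0, 0.0)   # rough starting guess
popt, _ = curve_fit(gaussian2d, (x.ravel(), y.ravel()), data.ravel(), p0=p0)
print("fitted (a, alpha, beta, gamma, x0, y0):", np.round(popt, 3))
```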

  13. Shielding design for testing room of large container scanner

    International Nuclear Information System (INIS)

    Liu Yisi; Miao Qitian; Zhou Liye

    1997-01-01

    The testing facility for the large container scanner is a highly advanced anti-smuggling tool. The X-ray scanning principle is adopted in this system. The X-ray is collimated into a fan-shaped beam. The accelerator supplies the beam only while the container is being scanned, and the irradiation time is less than one minute per test. The burst-mode irradiation and highly collimated scanning beam of this system differ from those of a common industrial irradiation accelerator. The shielding design of the 1:1 large container scanner introduced here achieves a better collimation level because of its triple collimation. The irradiation dose is less than 150 μGy per test, which is clearly lower than that of imported systems

  14. Robust Object Segmentation Using a Multi-Layer Laser Scanner

    Science.gov (United States)

    Kim, Beomseong; Choi, Baehoon; Yoo, Minkyun; Kim, Hyunju; Kim, Euntai

    2014-01-01

    The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations. PMID:25356645

  15. A Novel Atomic Force Microscope with Multi-Mode Scanner

    International Nuclear Information System (INIS)

    Qin, Chun; Zhang, Haijun; Xu, Rui; Han, Xu; Wang, Shuying

    2016-01-01

    A new type of atomic force microscope (AFM) with a multi-mode scanner is proposed. The AFM system provides more than four scanning modes using a specially designed scanner with three tube piezoelectric ceramics and three stack piezoelectric ceramics. Small-range sample scanning with high resolution can be realized using the tube piezos, while large-range scanning can be achieved with the stack piezos. Furthermore, the combination of tube piezos and stack piezos not only realizes high-resolution scanning of small samples with large-scale fluctuation structure, but also achieves small-range area-selecting scanning. Corresponding experiments carried out in the four different scanning modes show that the AFM offers reliable stability and high resolution and can be widely applied in the fields of micro/nano-technology. (paper)

  16. Fabrication of Biomolecule Microarrays for Cell Immobilization Using Automated Microcontact Printing.

    Science.gov (United States)

    Foncy, Julie; Estève, Aurore; Degache, Amélie; Colin, Camille; Cau, Jean Christophe; Malaquin, Laurent; Vieu, Christophe; Trévisiol, Emmanuelle

    2018-01-01

    Biomolecule microarrays are generally produced by conventional microarrayers, i.e., by contact or inkjet printing. Microcontact printing represents an alternative way of depositing biomolecules on solid supports, but even though various biomolecules have been successfully microcontact printed, the routine production of biomolecule microarrays by microcontact printing remains a challenging task and needs an effective, fast, robust, and low-cost automation process. Here, we describe the production of biomolecule microarrays composed of extracellular matrix protein for the fabrication of cell microarrays by using an automated microcontact printing device. Large-scale cell microarrays can be reproducibly obtained by this method.

  17. [Hyperspectral remote sensing in monitoring the vegetation heavy metal pollution].

    Science.gov (United States)

    Li, Na; Lü, Jian-sheng; Altemann, W

    2010-09-01

    Mine exploitation aggravates environmental pollution. The large amount of heavy metal elements in the slag drainage from the mine seriously pollutes the soil, harming vegetation growth and human health. Investigation of mining-related environmental pollution is urgent, and remote sensing, as a relatively new technique, is of considerable help. In the present paper, the copper mine in Dexing was selected as the study area and Chinese sumac as the study plant. Samples and spectral data were gathered in the field and analyzed in the lab. A regression model relating spectral characteristics to heavy metal content was built, and the feasibility of hyperspectral remote sensing for environmental pollution monitoring was verified.

  18. PRACTICAL APPROACH FOR HYPERSPECTRAL IMAGE PROCESSING IN PYTHON

    OpenAIRE

    Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.

    2018-01-01

    Python is a very popular programming language among data scientists around the world. Python can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline that is easy to use and agile enough for different solutions. We propose a Python pipeline which is built on the packages xarray, Holoviews and scikit-learn. We have developed some of our own tools, MaskAccessor, VisualisorAccessor ...
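
    A minimal, generic sketch of the kind of pipeline described above: a hyperspectral cube wrapped in an xarray DataArray and analysed with scikit-learn (PCA followed by k-means). The cube, band grid and cluster count are illustrative placeholders, and the authors' own MaskAccessor and VisualisorAccessor tools are not reproduced here.

```python
import numpy as np
import xarray as xr
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Illustrative cube: 100 x 120 pixels, 60 bands between 400 and 1000 nm.
rng = np.random.default_rng(1)
cube = xr.DataArray(
    rng.random((100, 120, 60)),
    dims=("y", "x", "band"),
    coords={"band": np.linspace(400, 1000, 60)},
    name="reflectance",
)

# Flatten the spatial dimensions into a (pixel, band) matrix for scikit-learn.
pixels = cube.stack(pixel=("y", "x")).transpose("pixel", "band").values

scores = PCA(n_components=5).fit_transform(pixels)              # spectral compression
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

# Put the cluster map back into xarray form so it keeps its spatial layout.
label_map = xr.DataArray(
    labels.reshape(cube.sizes["y"], cube.sizes["x"]),
    dims=("y", "x"),
    name="cluster",
)
print(label_map)
```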

  19. Classification of fecal contamination on leafy greens by hyperspectral imaging

    Science.gov (United States)

    Yang, Chun-Chieh; Jun, Won; Kim, Moon S.; Chao, Kaunglin; Kang, Sukwon; Chan, Diane E.; Lefcourt, Alan

    2010-04-01

    This paper reports the development of a hyperspectral fluorescence imaging system using ultraviolet-A excitation (320-400 nm) for detection of bovine fecal contaminants on the abaxial and adaxial surfaces of romaine lettuce and baby spinach leaves. Six spots of fecal contamination were applied to each of 40 lettuce and 40 spinach leaves. In this study, the wavebands at 666 nm and 680 nm were selected by correlation analysis. The two-band ratio of fluorescence intensity, 666 nm / 680 nm, was used to differentiate the contaminated spots from uncontaminated leaf area. The proposed method accurately detected all of the contaminated spots.
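
    As an illustration of the two steps named above, the sketch below ranks wavebands by their correlation with a known contamination mask and then forms a two-band fluorescence ratio image. The function names, the mask and any threshold later applied to the ratio are assumptions for demonstration; they are not taken from the paper.

```python
import numpy as np

def rank_bands_by_correlation(cube, mask):
    """Rank spectral bands by |Pearson correlation| with a binary mask.

    cube : ndarray (rows, cols, bands) of fluorescence intensities
    mask : boolean ndarray (rows, cols), True on known contaminated pixels
    Returns band indices sorted from most to least correlated, plus the
    per-band correlation values.
    """
    pixels = cube.reshape(-1, cube.shape[2]).astype(float)
    target = mask.ravel().astype(float)
    pixels -= pixels.mean(axis=0)
    target -= target.mean()
    corr = pixels.T @ target / (
        np.linalg.norm(pixels, axis=0) * np.linalg.norm(target) + 1e-12)
    return np.argsort(-np.abs(corr)), corr

def two_band_ratio(cube, band_a, band_b):
    """Fluorescence intensity ratio image, e.g. the 666 nm / 680 nm ratio."""
    return cube[:, :, band_a] / (cube[:, :, band_b] + 1e-9)
```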

  20. Temporal Variability of Observed and Simulated Hyperspectral Earth Reflectance

    Science.gov (United States)

    Roberts, Yolanda; Pilewskie, Peter; Kindel, Bruce; Feldman, Daniel; Collins, William D.

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system designed to study Earth's climate variability with unprecedented absolute radiometric accuracy and SI traceability. Observation System Simulation Experiments (OSSEs) were developed using GCM output and MODTRAN to simulate CLARREO reflectance measurements during the 21st century as a design tool for the CLARREO hyperspectral shortwave imager. With OSSE simulations of hyperspectral reflectance, Feldman et al. [2011a,b] found that shortwave reflectance is able to detect changes in climate variables during the 21st century and improve time-to-detection compared to broadband measurements. The OSSE has been a powerful tool in the design of the CLARREO imager and for understanding the effect of climate change on the spectral variability of reflectance, but it is important to evaluate how well the OSSE simulates the Earth's present-day spectral variability. For this evaluation we have used hyperspectral reflectance measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY), a shortwave spectrometer that was operational between March 2002 and April 2012. To study the spectral variability of SCIAMACHY-measured and OSSE-simulated reflectance, we used principal component analysis (PCA), a spectral decomposition technique that identifies dominant modes of variability in a multivariate data set. Using quantitative comparisons of the OSSE and SCIAMACHY PCs, we have quantified how well the OSSE captures the spectral variability of Earth's climate system at the beginning of the 21st century relative to SCIAMACHY measurements. These results showed that the OSSE and SCIAMACHY data sets share over 99% of their total variance in 2004. Using the PCs and the temporally distributed reflectance spectra projected onto the PCs (PC scores), we can study the temporal variability of the observed and simulated reflectance spectra. Multivariate time
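
    The principal component comparison described above can be sketched as follows: fit PCA to one set of reflectance spectra and measure how much of the other set's variance its leading components capture. The random matrices stand in for the OSSE-simulated and SCIAMACHY-measured spectra, and the component count is an arbitrary assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def leading_pcs(spectra, n_components=6):
    """Dominant modes of variability of a (samples, wavelengths) matrix."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(spectra)
    return pca, scores

def variance_captured(reference_pca, other_spectra):
    """Fraction of another dataset's variance captured by the reference PCs."""
    centred = other_spectra - other_spectra.mean(axis=0)
    reconstructed = centred @ reference_pca.components_.T @ reference_pca.components_
    return reconstructed.var(axis=0).sum() / centred.var(axis=0).sum()

# Random matrices standing in for OSSE-simulated and SCIAMACHY-measured
# nadir reflectance spectra (samples x wavelengths).
rng = np.random.default_rng(2)
osse = rng.random((500, 200))
sciamachy = rng.random((500, 200))

osse_pca, _ = leading_pcs(osse)
print("SCIAMACHY variance captured by the leading OSSE PCs:",
      round(variance_captured(osse_pca, sciamachy), 3))
```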

  1. Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.

    Science.gov (United States)

    Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin

    2015-12-01

    Red blood cell counts have been proven to be one of the most frequently performed blood tests and are valuable for the early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light-microscopy-based red blood cell counting methods, the proposed approach uses a combined spatial and spectral algorithm to identify red blood cells, integrating active contour models and automated two-dimensional k-means with the spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than a purely spatial algorithm because it can jointly use the spatial and spectral information of blood cells.
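
    The spectral angle mapper (SAM) step mentioned above can be written compactly; the sketch below computes the angle between every pixel spectrum and a reference spectrum and thresholds it. The reference spectrum and the angular threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Spectral angle (radians) between every pixel spectrum and a reference.

    cube      : ndarray (rows, cols, bands)
    reference : ndarray (bands,), e.g. a mean red-blood-cell spectrum
    Smaller angles mean the pixel spectrum is more similar to the reference.
    """
    pixels = cube.reshape(-1, cube.shape[2]).astype(float)
    ref = np.asarray(reference, dtype=float)
    cosine = pixels @ ref / (
        np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12)
    angles = np.arccos(np.clip(cosine, -1.0, 1.0))
    return angles.reshape(cube.shape[:2])

def rbc_candidate_mask(cube, reference, max_angle=0.10):
    """Pixels whose spectral angle to the reference is below an assumed threshold."""
    return spectral_angle_map(cube, reference) < max_angle
```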

  2. Pixelated camouflage patterns from the perspective of hyperspectral imaging

    Science.gov (United States)

    Racek, František; Jobánek, Adam; Baláž, Teodor; Krejčí, Jaroslav

    2016-10-01

    Pixelated camouflage patterns fulfill both principles, matching and disrupting, that are exploited for blending a target into the background. This means that a pixelated pattern should respect the natural background in its spectral and spatial characteristics, embodied in micro and macro patterns. Hyperspectral (HS) imaging plays a similar, though reverse, role in the field of reconnaissance systems. The HS camera fundamentally records and extracts both the spectral and spatial information belonging to the recorded scenery. The article therefore deals with problems of HS imaging and the subsequent processing of HS images of pixelated camouflage patterns, which are characterized, among other things, by their specific spatial-frequency heterogeneity.

  3. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement
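
    The final scoring stage described above, a paired-dataset k-nn estimate over the robust background and potential anomaly sets, might look roughly like the sketch below. This is one plausible reading of that step rather than the published formulation: the kernel expansion, manifold regularization and background-density maximization terms of RBRSE are not reproduced, and the score definition here is an assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_mean_distance(query, reference, k=10):
    """Mean distance from each query spectrum to its k nearest reference spectra."""
    nn = NearestNeighbors(n_neighbors=k).fit(reference)
    dist, _ = nn.kneighbors(query)
    return dist.mean(axis=1)

def paired_knn_scores(pixels, background, anomalies, k=10):
    """Assumed anomaly score: far from the background set, close to the anomaly set.

    pixels, background, anomalies : (n, bands) spectra; the background and
    potential-anomaly sets would come from the preceding label-assignment stage.
    """
    d_background = knn_mean_distance(pixels, background, k)
    d_anomaly = knn_mean_distance(pixels, anomalies, k)
    return d_background / (d_anomaly + 1e-12)
```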

  4. Enhanced methodology of focus control and monitoring on scanner tool

    Science.gov (United States)

    Chen, Yen-Jen; Kim, Young Ki; Hao, Xueli; Gomez, Juan-Manuel; Tian, Ye; Kamalizadeh, Ferhad; Hanson, Justin K.

    2017-03-01

    As technology nodes shrink from 14 nm to 7 nm, the reliability of tool monitoring techniques in advanced semiconductor fabs becomes more critical for achieving high yield and quality. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to detect particles, defects, and tool stability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner to ensure proper tool stability on a periodic basis. The focus measurement on YIELDSTAR by real-time or library-based reconstruction of critical dimensions (CD) and side wall angle (SWA) has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provide a common reference for scanner setup and user processes. In order to further improve the metrology and matching performance, Diffraction Based Focus (DBF) metrology, enabling accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring/control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined to have minimized dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, 80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared to the previous BaseLiner methodology. Matching control and monitoring on multiple illumination conditions opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposures for new product/layer best focus (BF) setup.

  5. Automatic inventory of components by laser 3D scanner

    International Nuclear Information System (INIS)

    Rodriguez Garcia, R.; Munoz Prieto, C.; Sarti Fernandez, F.

    2014-01-01

    One of the existing needs in nuclear decommissioning projects is to provide an inventory of the components to be dismantled, including their spatial location and the elements that exist in their surroundings. Laser scanner technology is a data acquisition system that produces 3D models composed of millions of points; these models have pinpoint accuracy and are available in a very short time. (Author)

  6. Characterization of hardened concrete using a table scanner and computed microtomography

    International Nuclear Information System (INIS)

    Wilson, R.E.; Pessoa, J.R.; Assis, J.T. de; Dominguez, D.S.; Dias, L.A.; Santana, M. R.

    2016-01-01

    This paper proposes the use of image processing technologies to analyze hardened concrete samples from images obtained with a table scanner and computed microtomography. The techniques are used to obtain numerical data on the distribution and geometry of the aggregates and pores of the concrete, as well as their relative positions. It is expected that the data obtained can support research on concrete pathologies such as the alkali-aggregate reaction (AAR) and the freeze/thaw process. (author)

  7. Ultrasonic scanner for stainless steel weld inspections. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Kupperman, D. S.; Reimann, K. J.

    1978-09-01

    The large grain size and anisotropic nature of stainless steel weld metal make conventional ultrasonic testing very difficult. A technique is evaluated for minimizing the coherent ultrasonic noise in stainless steel weld metal. The method involves digitizing conventional ''A-scan'' traces and averaging them with a minicomputer. Results are presented for an ultrasonic scanner which interrogates a small volume of the weld metal while averaging the coherent ultrasonic noise.
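
    The averaging step described above is simple to express; the sketch below averages a stack of digitized A-scan traces taken over a small weld volume, where position-dependent grain noise tends to cancel while a flaw echo at a fixed time-of-flight is reinforced. The synthetic echo, sampling grid and noise level are illustrative only.

```python
import numpy as np

def average_a_scans(traces):
    """Average digitized A-scan traces collected over a small weld volume.

    traces : ndarray (n_traces, n_samples), one row per A-scan. Grain noise
    that changes from position to position tends to cancel, while a flaw echo
    at a fixed time-of-flight is reinforced; for uncorrelated noise the
    expected SNR gain is roughly sqrt(n_traces).
    """
    return np.asarray(traces, dtype=float).mean(axis=0)

# Illustrative use: a synthetic 5 MHz echo buried in position-dependent noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10e-6, 1000)                               # 10 microseconds
echo = np.exp(-((t - 4e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
traces = echo + 0.8 * rng.normal(size=(32, t.size))             # 32 noisy A-scans
averaged = average_a_scans(traces)
```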

  8. Experimental characterization of the Clear-PEM scanner spectrometric performance

    Energy Technology Data Exchange (ETDEWEB)

    Bugalho, R; Carrico, B; Ferreira, C S; Frade, M; Ferreira, M; Moura, R; Ortigao, C; Pinheiro, J F; Rodrigues, P; Rolo, I; Silva, J C; Trindade, A; Varela, J [Laboratorio de Instrumentacao e Fisica Experimental de Particulas (LIP), Av. Elias Garcia 14-1, 1000-149 Lisboa (Portugal)], E-mail: frade@lip.pt

    2009-10-15

    In the framework of the Clear-PEM project for the construction of a high-resolution and high-specificity scanner for breast cancer imaging, a Positron Emission Mammography tomograph has been developed and installed at the Instituto Portugues de Oncologia do Porto hospital. The Clear-PEM scanner is mainly composed of two planar detector heads attached to a robotic arm, a trigger/data acquisition electronics system and computing servers. The detector heads hold crystal matrices built from 2 x 2 x 20 mm³ LYSO:Ce crystals read out by Hamamatsu S8550 APD arrays. The APDs are optically coupled to both ends of the 6144 crystals in order to extract the DOI information for each detected event. Each one of the 12288 APD pixels is read and controlled by Application Specific Integrated Circuits water-cooled by an external cooling unit. The innovative design of the Clear-PEM frontend boards results in an unprecedented integration of the crystal matrices, APDs and ASICs, making Clear-PEM the PET scanner with the highest number of APD pixels ever integrated so far. In this paper, the scanner's main technical characteristics, calibration strategies and the first spectrometric performance evaluation in a clinical environment are presented. The first commissioning results show 99.7% active channels, which, after calibration, have inter-pixel and absolute gain distributions with dispersions of, respectively, 12.2% and 15.3%, demonstrating that despite the large number of channels, the system is uniform. The mean energy resolution at 511 keV is 15.9%, with an 8.8% dispersion, and the mean C_DOI⁻¹ is 5.9%/mm, with a 7.8% dispersion. The coincidence time resolution at 511 keV, for an energy window between 400 and 600 keV, is 5.2 ns FWHM.

  9. High efficiency conical scanner for earth resources applications

    Science.gov (United States)

    Bates, J. C.; Dumas, H. J., Jr.

    1975-01-01

    A description is given of a six-arm conical scanner which was selected to provide a continuous line-of-sight scan. Two versions of the instrument are considered, differing in weight: the heavy version weighs 600 lbs, while a lightweight design employing beryllium and aluminum optical components weighs only 350 lbs. A multiplexer and analog-to-digital converter are to be incorporated into the design. Questions of instrument performance are also discussed.

  10. Design of a laser scanner for a digital mammography system.

    Science.gov (United States)

    Rowlands, J A; Taylor, J E

    1996-05-01

    We have developed a digital readout system for radiographic images using a scanning laser beam. In this system, electrostatic charge images on amorphous selenium (alpha-Se) plates are read out using photo-induced discharge (PID). We discuss the design requirements of a laser scanner for the PID system and describe its construction from commercially available components. The principles demonstrated can be adapted to a variety of digital imaging systems.

  11. Accuracy of five intraoral scanners compared to indirect digitalization.

    Science.gov (United States)

    Güth, Jan-Frederik; Runkel, Cornelius; Beuer, Florian; Stimmelmayr, Michael; Edelhoff, Daniel; Keul, Christine

    2017-06-01

    Direct and indirect digitalization offer two options for computer-aided design (CAD)/computer-aided manufacturing (CAM)-generated restorations. The aim of this study was to evaluate the accuracy of different intraoral scanners and compare them to the process of indirect digitalization. A titanium testing model was directly digitized 12 times with each intraoral scanner: (1) CS 3500 (CS), (2) Zfx Intrascan (ZFX), (3) CEREC AC Bluecam (BLU), (4) CEREC AC Omnicam (OC) and (5) True Definition (TD). As a control, 12 polyether impressions were taken and the corresponding plaster casts were digitized indirectly with the D-810 laboratory scanner (CON). The accuracy (trueness/precision) of the datasets was evaluated with analysis software (Geomagic Qualify 12.1) using a "best fit alignment" of the datasets with a highly accurate reference dataset of the testing model, obtained from industrial computed tomography. Direct digitalization using the TD showed the significantly highest overall "trueness", followed by CS. Both performed better than CON. BLU, ZFX and OC showed higher differences from the reference dataset than CON. Regarding the overall "precision", the CS 3500 intraoral scanner and the True Definition showed the best performance. CON, BLU and OC resulted in significantly higher precision than ZFX did. Within the limitations of this in vitro study, the accuracy of the obtained datasets depended on the scanning system. Direct digitalization was not superior to indirect digitalization for all tested systems. Regarding accuracy, all tested intraoral scanning technologies seem able to reproduce a single quadrant within clinically acceptable accuracy. However, differences were detected between the tested systems.

  12. Wire scanner data analysis for the SSC Linac emittance measurement

    International Nuclear Information System (INIS)

    Yao, C.Y.; Hurd, J.W.; Sage, J.

    1993-07-01

    Wire scanners are provided in the SSC Linac for the measurement of beam emittance at various locations. In order to obtain beam parameters from the scan signal, a data analysis program was developed that addresses the problems of noise reduction, machine modeling, parameter fitting, and correction. This program is intended as a tool for Linac commissioning and also as part of the Linac control program. Some of the results from commissioning runs are presented

  13. Determining the surface roughness coefficient by 3D Scanner

    Directory of Open Access Journals (Sweden)

    Karmen Fifer Bizjak

    2010-12-01

    Full Text Available Currently, several test methods can be used in the laboratory to determine the roughness of rock joint surfaces. However, true roughness can be distorted and underestimated by the differences in the sampling interval of the measurement methods. Thus, these measurement methods produce a dead zone and distorted roughness profiles. In this paper a new rock joint surface roughness measurement method is presented, with the use of a camera-type three-dimensional (3D) scanner as an alternative to current methods. For this study, the surfaces of ten samples of tuff were digitized by means of a 3D scanner, and the results were compared with the corresponding Rock Joint Coefficient (JRC) values. Up until now such 3D scanners have been mostly used in the automotive industry, whereas their use for comparison with obtained JRC coefficient values in rock mechanics is presented here for the first time. The proposed new method is faster, more precise and more accurate than other existing test methods, and is a promising technique for use in this area of study in the future.

  14. Quality assurance of computed tomography scanner beams in diagnostic radiology

    International Nuclear Information System (INIS)

    Lindskoug, B.A.

    1989-01-01

    The number of computed tomography (CT) scanners in diagnostic radiology is increasing, to the extent that they are now found in relatively small hospitals. These hospitals do not have local physicists available and so methods must be developed to allow quality assurance to be carried out at distant laboratories. Several different types of solid water phantoms are available with various built-in test objects that may supply sufficient information about the many parameters that must be checked. The dose distributions, however, are usually not so well considered, although the connection between image quality and absorbed dose must be known for optimal use of a CT scanner. By introducing thermoluminescent dosemeters (TLDs) into a commercial phantom (RMI), it was possible to measure the absorbed dose profile and the line integral of the absorbed dose across the slit. The computer-guided readout of the TLDs gives the absorbed dose, the average dose and half maximum width, absorbed dose curve, and also the line integral of the peak. The only modification of the phantom was five holes, drilled at strategic positions, that did not influence the built-in test objects. This single measurement provides an appropriate monthly quality assurance check of the CT scanner with little extra effort. (author)

  15. Erosion can't hide from laser scanner

    International Nuclear Information System (INIS)

    Konstant, D.A.

    1991-01-01

    Particles of topsoil blown by wind will bounce along the soil surface and finally escape a field, leaving it less able to support crops. Water will wash away valuable topsoil and nutrients. And how rough the soil surface is influences whether the soil will erode. Until now, soil scientists have had no suitable technique to measure soil roughness - or microtopography - on the small scale. ARS soil scientists Joe M. Bradford and Chi-hua Huang, of the National Soil Erosion Research Laboratory in West Lafayette, Indiana, have developed a portable scanner that can. It measures the tiny ridges left in the soil by tilling or clods of soil particles that clump together naturally. What does the scanner do? It measures soil elevation by shining a low-power laser beam onto the surface and detecting the position of the laser spot reflected from the soil with a 35-mm camera. In place of film, the scanner camera uses electronic circuitry somewhat similar to that in a video camera to transmit the spot's position to a small computer about 30,000 times a minute. The laser and camera are mounted on the frame of a motor-driven carriage. The computer controls the carriage movement. At the end of a scan, a microtopographic map is stored in the computer. Scientists can analyze it immediately and can compare it to previous maps to see whether erosion has occurred

  16. Advanced optical 3D scanners using DMD technology

    Science.gov (United States)

    Muenstermann, P.; Godding, R.; Hermstein, M.

    2017-02-01

    Optical 3D measurement techniques are state-of-the-art for highly precise, non-contact surface scanners - not only in industrial development, but also in near-production and even in-line configurations. The need for automated systems with very high accuracy and clear implementation of national precision standards is growing rapidly due to expanding international quality guidelines, increasing production transparency and new concepts related to the demands of the fourth industrial revolution. The presentation gives an overview of the present technical concepts for optical 3D scanners and their benefit for customers and various different applications - not only in quality control, but also in design centers or in medical applications. The advantages of DMD-based systems will be discussed and compared to other approaches. Looking at today's 3D scanner market, there is a confusing variety of solutions, ranging from low-price solutions to high-end systems. Many of them are linked to a very specific target group or to special applications. The article will clarify the differences between the approaches and will discuss some key features which are necessary to render optical measurement systems suitable for industrial environments. The paper will be completed by examples of DMD-based systems, e.g. RGB true-color systems with very high accuracy like the StereoScan neo of AICON 3D Systems. Typical applications and the benefits for customers using such systems are described.

  17. [Prediction of Encapsulation Temperatures of Copolymer Films in Photovoltaic Cells Using Hyperspectral Imaging Techniques and Chemometrics].

    Science.gov (United States)

    Lin, Ping; Chen, Yong-ming; Yao, Zhi-lei

    2015-11-01

    A novel method combining chemometrics and hyperspectral imaging techniques is presented to determine the temperatures of ethylene-vinyl acetate copolymer (EVA) films in photovoltaic cells during the thermal encapsulation process. Four varieties of EVA films, which had been heated at temperatures of 128, 132, 142 and 148 °C during the photovoltaic cell production process, were investigated in this paper. These copolymer encapsulation films were first scanned by the hyperspectral imaging equipment (Spectral Imaging Ltd., Oulu, Finland). The scanning band range of the hyperspectral equipment was set between 904.58 and 1700.01 nm. The hyperspectral dataset of copolymer films was randomly divided into two parts for training and testing purposes. For each film type, the training and test sets contained 90 and 10 instances, respectively. The obtained hyperspectral images of the EVA films were processed with the ENVI software (Exelis Visual Information Solutions, USA). The size of the region of interest (ROI) of each hyperspectral image of an EVA film was set to 150 x 150 pixels. The average reflectance spectrum of all the pixels in the ROI was used as the characteristic curve representing the instance. Three kinds of chemometric methods, including partial least squares regression (PLSR), multi-class support vector machine (SVM) and large margin nearest neighbor (LMNN), were used to correlate the characteristic spectra with the encapsulation temperatures of the copolymer films. The plot of weighted regression coefficients illustrated that bands in both the short- and long-wave near-infrared hyperspectral data contributed to enhancing the prediction accuracy of the forecast model. Because the acquired reflectance hyperspectral data of the EVA materials displayed strong nonlinearity, the prediction performance of the linear PLSR model declined and its prediction precision reached only 95%. The kernel-based forecast models were
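
    A minimal sketch of the PLSR branch described above, using scikit-learn's PLSRegression to relate mean ROI reflectance spectra to the encapsulation temperature. The synthetic spectra, split ratio and number of latent components are placeholders, and the SVM and LMNN models mentioned in the record are not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for mean ROI reflectance spectra of EVA films heated at
# the four encapsulation temperatures; real spectra would replace these.
rng = np.random.default_rng(4)
temperatures = np.repeat([128.0, 132.0, 142.0, 148.0], 100)
spectra = 0.1 * rng.random((temperatures.size, 256)) + temperatures[:, None] / 1000.0

X_train, X_test, y_train, y_test = train_test_split(
    spectra, temperatures, test_size=0.1, random_state=0, stratify=temperatures)

pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)
print("R^2 on held-out films:", round(pls.score(X_test, y_test), 3))
```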

  18. Sensitivity and fidelity of DNA microarray improved with integration of Amplified Differential Gene Expression (ADGE

    Directory of Open Access Journals (Sweden)

    Ile Kristina E

    2003-07-01

    Full Text Available Abstract Background The ADGE technique is a method designed to magnify the ratios of gene expression before detection. It improves the detection sensitivity to small changes of gene expression and requires a small amount of starting material. However, the throughput of ADGE is low. We integrated ADGE with DNA microarray (ADGE microarray) and compared it with regular microarray. Results When ADGE was integrated with DNA microarray, a quantitative relationship of a power function between detected and input ratios was found. Because of ratio magnification, ADGE microarray was better able to detect small changes in gene expression in a drug resistant model cell line system. The PCR amplification of templates and efficient labeling reduced the requirement of starting material to as little as 125 ng of total RNA for one slide hybridization and enhanced the signal intensity. Integration of ratio magnification, template amplification and efficient labeling in ADGE microarray reduced artifacts in microarray data and improved detection fidelity. The results of ADGE microarray were less variable and more reproducible than those of regular microarray. A gene expression profile generated with ADGE microarray characterized the drug resistant phenotype, particularly with reference to glutathione, proliferation and kinase pathways. Conclusion ADGE microarray magnified the ratios of differential gene expression in a power function, improved the detection sensitivity and fidelity and reduced the requirement for starting material while maintaining high throughput. ADGE microarray generated a more informative expression pattern than regular microarray.

  19. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We

  20. Characterisation and correction of signal fluctuations in successive acquisitions of microarray images

    Directory of Open Access Journals (Sweden)

    François Nicolas

    2009-03-01

    Full Text Available Abstract Background There are many sources of variation in dual labelled microarray experiments, including data acquisition and image processing. The final interpretation of experiments strongly relies on the accuracy of the measurement of the signal intensity. For low intensity spots in particular, accurately estimating gene expression variations remains a challenge as signal measurement is, in this case, highly subject to fluctuations. Results To evaluate the fluctuations in the fluorescence intensities of spots, we used series of successive scans, at the same settings, of whole genome arrays. We measured the decrease in fluorescence and we evaluated the influence of different parameters (PMT gain, resolution and chemistry of the slide) on the signal variability, at the level of the array as a whole and by intensity interval. Moreover, we assessed the effect of averaging scans on the fluctuations. We found that the extent of photo-bleaching was low and we established that (1) the fluorescence fluctuation is linked to the resolution, e.g. it depends on the number of pixels in the spot; (2) the fluorescence fluctuation increases as the scanner voltage increases and, moreover, is higher for the red as opposed to the green fluorescence, which can introduce bias in the analysis; (3) the signal variability is linked to the intensity level: it is higher for low intensities; (4) the heterogeneity of the spots and the variability of the signal and the intensity ratios decrease when two or three scans are averaged. Conclusion Protocols consisting of two scans, one at low and one at high PMT gains, or multiple scans (ten scans) can introduce bias or be difficult to implement. We found that averaging two, or at most three, acquisitions of microarrays scanned at moderate photomultiplier settings (PMT gain) is sufficient to significantly improve the accuracy (quality) of the data and particularly those for spots having low intensities and we propose this as a general
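
    The averaging protocol recommended above is straightforward to apply at the image level; the sketch below averages two or three successive acquisitions per channel and recomputes per-spot log-ratios. The array shapes and the pseudo-count are illustrative assumptions.

```python
import numpy as np

def average_scans(scans):
    """Average successive acquisitions of one microarray channel.

    scans : ndarray (n_scans, rows, cols) of spot or pixel intensities from
    repeated scans at the same (moderate) PMT gain; two or three scans are
    averaged, as the study recommends.
    """
    return np.asarray(scans, dtype=float).mean(axis=0)

def log_ratios(red_scans, green_scans, eps=1.0):
    """Per-spot log2(red/green) ratios computed from the averaged acquisitions.

    eps is a small pseudo-count that keeps low-intensity spots finite; its
    value is an assumption for illustration.
    """
    red = average_scans(red_scans)
    green = average_scans(green_scans)
    return np.log2((red + eps) / (green + eps))
```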