WorldWideScience

Sample records for satellite image classification

  1. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to the classification of high resolution satellite images. Several different measures for classification, including texture measures, first-order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented using an enhanced watershed method in order to improve the classification accuracy.
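
    A minimal sketch of this kind of pipeline, assuming per-pixel features built from band values and first-order window statistics and an RBF SVM in scikit-learn; the window size and SVM parameters are illustrative, not the thesis's exact setup.

    ```python
    # Minimal per-pixel SVM classification sketch: band values plus first-order
    # window statistics as features. Window size and SVM settings are illustrative.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def pixel_features(image, win=5):
        """image: (H, W, B) array -> (H*W, 3B) features: value, local mean, local std."""
        img = image.astype(float)
        mean = uniform_filter(img, size=(win, win, 1))
        sq_mean = uniform_filter(img ** 2, size=(win, win, 1))
        std = np.sqrt(np.clip(sq_mean - mean ** 2, 0.0, None))
        feats = np.concatenate([img, mean, std], axis=-1)
        return feats.reshape(-1, feats.shape[-1])

    def train_and_classify(image, train_mask):
        """train_mask: (H, W) integer labels, 0 = unlabeled pixel."""
        X, y = pixel_features(image), train_mask.ravel()
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X[y > 0], y[y > 0])
        return clf.predict(X).reshape(train_mask.shape)
    ```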

  2. Virtual Satellite Construction and Application for Image Classification

    International Nuclear Information System (INIS)

    Su, W G; Su, F Z; Zhou, C H

    2014-01-01

    Nowadays, most remote sensing image classification uses data from a single satellite, so the number of bands and the band spectral widths are consistent. In addition, observed phenomena such as land cover share the same spectral signature, yet each dataset has its own characteristics, which causes classification accuracy to decrease. Therefore, this paper analyzes different optical remote sensing satellites, compares their spectral differences, and proposes ideas and methods for building a virtual satellite. The research is illustrated on TM, HJ-1 and MODIS data. We obtained the virtual band X0 from these satellites' bands and combined it with the 4 bands of a TM image to build a virtual satellite with five bands. Based on this, we used these data for image classification. The experimental results showed that the virtual satellite classification results for built-up land and water information were superior to those from the HJ-1 and TM data, respectively.

  3. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    Science.gov (United States)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or for insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the used platform. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as the Copernicus Emergency Management Service, for the production of damage grading and reference maps. Recently proposed methods to perform image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem; however, the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampled image samples during the training of a CNN has been shown to improve several image recognition tasks in remote sensing. However, it is currently unclear whether this multi-resolution information can also be captured from images with different spatial resolutions, such as satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used, considering both manned and unmanned aerial image samples, to perform the satellite image classification of building damages. Three network configurations trained with multi-resolution image samples are compared against two benchmark networks where only satellite image samples are used. Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved the overall satellite image classification by nearly 4 %.
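
    As an illustration of the building blocks named above (residual connections and dilated convolutions), here is a hedged PyTorch sketch; channel counts, dilation rates and the patch-classification head are assumptions, not the authors' network.

    ```python
    # Illustrative residual block with dilated convolutions; the channel counts
    # and dilation rates are assumptions, not the authors' exact architecture.
    import torch
    import torch.nn as nn

    class DilatedResidualBlock(nn.Module):
        def __init__(self, channels=64, dilation=2):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
            self.bn1 = nn.BatchNorm2d(channels)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)   # residual connection, resolution preserved

    # A tiny damage-classification head over image patches (e.g. 64x64 samples).
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        DilatedResidualBlock(64, dilation=2),
        DilatedResidualBlock(64, dilation=4),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 2),               # binary output: damaged / undamaged
    )
    ```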

  4. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

    In this paper, we present a proposition for a fully automatic classification of VHR satellite images. Unlike the most widespread approaches (supervised classification, which requires prior definition of class signatures, or unsupervised classification, which must be followed by an interpretation of its results), the proposed method requires no human intervention except for the setting of the initial parameters. The presented approach is based on both spectral and textural analysis of the image and consists of 3 steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes such as water, vegetation and non-vegetation, which all differ significantly spectrally and can therefore easily be extracted based on spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and present information on the texture of each pixel neighbourhood, depending on the texture grain. The purpose of texture analysis is to distinguish between classes that are spectrally similar but of different texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Due to the use of granulometric analysis, based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying borders of objects in an image as spaces of high texture), which affects other methods of texture analysis such as GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on the values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index based on the near infrared and blue bands. Its purpose is to correct partially misclassified pixels. All the indices used in the developed classification model relate to reflectance values, so the
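
    The first two steps lend themselves to a compact sketch: an NDVI split into coarse classes and a local granulometric map built from morphological openings of increasing size. The thresholds and window sizes below are assumptions for illustration, not the paper's values.

    ```python
    # Hedged sketch of the two ingredients named above: an NDVI-based split into
    # coarse classes, and a local granulometric map from morphological openings.
    import numpy as np
    from scipy import ndimage

    def ndvi(red, nir):
        return (nir - red) / np.maximum(nir + red, 1e-6)

    def coarse_classes(ndvi_img, water_thr=0.0, veg_thr=0.4):
        classes = np.full(ndvi_img.shape, 2, dtype=np.uint8)   # 2 = non-vegetation
        classes[ndvi_img < water_thr] = 0                      # 0 = water
        classes[ndvi_img > veg_thr] = 1                        # 1 = vegetation
        return classes

    def granulometric_map(gray, sizes=(3, 5, 9), win=31):
        """Local texture 'grain': brightness removed by openings of increasing
        structuring-element size, averaged over a sliding window."""
        maps = []
        prev = gray.astype(float)
        for s in sizes:
            opened = ndimage.grey_opening(gray, size=(s, s)).astype(float)
            removed = prev - opened            # detail lost at this grain size
            maps.append(ndimage.uniform_filter(removed, size=win))
            prev = opened
        return np.stack(maps, axis=-1)
    ```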

  5. Recurrent Neural Networks to Correct Satellite Image Classification Maps

    Science.gov (United States)

    Maggiori, Emmanuel; Charpiat, Guillaume; Tarabalka, Yuliya; Alliez, Pierre

    2017-09-01

    While initially devised for image categorization, convolutional neural networks (CNNs) are being increasingly used for the pixelwise semantic labeling of images. However, the very nature of the most common CNN architectures makes them good at recognizing but poor at precisely localizing objects. This problem is magnified in the context of aerial and satellite image labeling, where spatially fine object outlining is of paramount importance. Different iterative enhancement algorithms have been presented in the literature to progressively improve the coarse CNN outputs, seeking to sharpen object boundaries around real image edges. However, one must carefully design, choose and tune such algorithms. Instead, our goal is to directly learn the iterative process itself. For this, we formulate a generic iterative enhancement process inspired by partial differential equations, and observe that it can be expressed as a recurrent neural network (RNN). Consequently, we train such a network from manually labeled data for our enhancement task. In a series of experiments we show that our RNN effectively learns an iterative process that significantly improves the quality of satellite image classification maps.
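
    A compact sketch of the core idea under simplifying assumptions: the enhancement step is a small convolutional update with shared weights applied for a fixed number of iterations, i.e. the iterative process unrolled as a recurrent network. Layer sizes and the number of steps are illustrative, not the authors' architecture.

    ```python
    # Shared-weight convolutional update applied iteratively to a coarse
    # class-score map, with the image as guidance; an unrolled RNN refiner.
    import torch
    import torch.nn as nn

    class RecurrentRefiner(nn.Module):
        def __init__(self, n_classes, img_channels=3, hidden=32, steps=5):
            super().__init__()
            self.steps = steps
            self.update = nn.Sequential(
                nn.Conv2d(n_classes + img_channels, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, n_classes, 3, padding=1),
            )

        def forward(self, coarse_scores, image):
            h = coarse_scores
            for _ in range(self.steps):        # same weights at every iteration
                delta = self.update(torch.cat([h, image], dim=1))
                h = h + delta                  # small additive correction per step
            return h
    ```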

  6. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarse segmentation for higher scale parameters and fine segmentation for lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for the training.

  7. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    Science.gov (United States)

    García-Flores, Agustín; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim in order to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images to use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool in a concurrent way. The experimental results indicate that this new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the actual classification of the images.

  8. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide the discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  9. A new web-based system for unsupervised classification of satellite images from the Google Maps engine

    Science.gov (United States)

    Ferrán, Ángel; Bernabé, Sergio; García-Rodríguez, Pablo; Plaza, Antonio

    2012-10-01

    In this paper, we develop a new web-based system for unsupervised classification of satellite images available from the Google Maps engine. The system has been developed using the Google Maps API and incorporates functionalities such as unsupervised classification of image portions selected by the user (at the desired zoom level). For this purpose, we use a processing chain made up of the well-known ISODATA and k-means algorithms, followed by spatial post-processing based on majority voting. The system is currently hosted on a high-performance server which performs the execution of classification algorithms and returns the obtained classification results in a very efficient way. The previous functionalities are necessary to use efficient techniques for the classification of images and the incorporation of content-based image retrieval (CBIR). Several experimental validations of the classification results obtained with the proposed system are performed by comparing the classification accuracy of the proposed chain with that of techniques available in the well-known Environment for Visualizing Images (ENVI) software package. The server has access to a cluster of commodity graphics processing units (GPUs), hence in future work we plan to perform the processing in parallel by taking advantage of the cluster.
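
    A hedged sketch of the processing chain described above: k-means clustering of a user-selected image tile followed by a majority-vote (modal) spatial filter. The cluster count and window size are illustrative.

    ```python
    # k-means clustering of an image tile plus majority-vote post-processing.
    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def classify_tile(rgb_tile, n_clusters=5, vote_window=3):
        h, w, b = rgb_tile.shape
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
            rgb_tile.reshape(-1, b).astype(float)
        ).reshape(h, w)

        def majority(values):                  # modal label inside the window
            return np.bincount(values.astype(int)).argmax()

        return ndimage.generic_filter(labels, majority, size=vote_window)
    ```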

  10. Classification of Pansharpened Urban Satellite Images

    DEFF Research Database (Denmark)

    Palsson, Frosti; Sveinsson, Johannes R.; Benediktsson, Jon Atli

    2012-01-01

    The classification of high resolution urban remote sensing imagery is addressed with the focus on classification of imagery that has been pansharpened by a number of different pansharpening methods. The pansharpening process introduces some spectral and spatial distortions in the resulting fused...... multispectral image, the amount of which highly varies depending on which pansharpening technique is used. In the majority of the pansharpening techniques that have been proposed, there is a compromise between the spatial enhancement and the spectral consistency. Here we study the effects of the spectral...... information from the panchromatic data. Random Forests (RF) and Support Vector Machines (SVM) will be used as classifiers. Experiments are done for three different datasets that have been obtained by two different imaging sensors, IKONOS and QuickBird. These sensors deliver multispectral images that have four...

  11. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2016-12-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become one of the research hotspots. Since satellites sense the clouds remotely from space, and different cloud types often overlap and convert into each other, there must be some fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers; it is hard for traditional cloud classification methods to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  12. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Science.gov (United States)

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become one of the research hotspots. Since satellites sense the clouds remotely from space, and different cloud types often overlap and convert into each other, there must be some fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers; it is hard for traditional cloud classification methods to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  13. Classification of Cluster Area for Satellite Image

    Directory of Open Access Journals (Sweden)

    Thwe Zin Phyo

    2015-06-01

    This paper describes area classification for a Landsat7 satellite image. The main purpose of this system is to classify the area of each cluster contained in a satellite image. To classify this image, the satellite image first needs to be clustered into different land cover types. Clustering is an unsupervised learning method that aims to classify an image into homogeneous regions. This system is implemented based on color features with the K-means unsupervised clustering algorithm. This method does not need to train the image before clustering. The clusters of the satellite image are grouped into a set of three clusters for the Landsat7 satellite image. For this work, the combined band 432 from Landsat7 is used as input. A satellite image of the Mandalay area from 2001 is chosen to test the segmentation method. After clustering, a specific range for the three clustered images must be defined in order to obtain the green land, water and urban balance. This system is implemented using the MATLAB programming language.

  14. Classification of semiurban landscapes from very high-resolution satellite images using a regionalized multiscale segmentation approach

    Science.gov (United States)

    Kavzoglu, Taskin; Erdemir, Merve Yildiz; Tonbul, Hasan

    2017-07-01

    In object-based image analysis, obtaining representative image objects is an important prerequisite for a successful image classification. The major threat is the issue of scale selection due to the complex spatial structure of landscapes portrayed as an image. This study proposes a two-stage approach to conduct regionalized multiscale segmentation. In the first stage, an initial high-level segmentation is applied through a "broadscale," and a set of image objects characterizing natural borders of the landscape features are extracted. Contiguous objects are then merged to create regions by considering their normalized difference vegetation index resemblance. In the second stage, optimal scale values are estimated for the extracted regions, and multiresolution segmentation is applied with these settings. Two satellite images with different spatial and spectral resolutions were utilized to test the effectiveness of the proposed approach and its transferability to different geographical sites. Results were compared to those of image-based single-scale segmentation and it was found that the proposed approach outperformed the single-scale segmentations. Using the proposed methodology, significant improvement in terms of segmentation quality and classification accuracy (up to 5%) was achieved. In addition, the highest classification accuracies were produced using fine-scale values.

  15. Object-Based Classification of Grasslands from High Resolution Satellite Image Time Series Using Gaussian Mean Map Kernels

    Directory of Open Access Journals (Sweden)

    Mailys Lopes

    2017-07-01

    This paper deals with the classification of grasslands using high resolution satellite image time series. The grasslands considered in this work are semi-natural elements in fragmented landscapes, i.e., they are heterogeneous and small elements. The first contribution of this study is to account for grassland heterogeneity while working at the object level, by modeling the pixel distribution of each grassland with a Gaussian distribution. To measure the similarity between two grasslands, a new kernel is proposed as a second contribution: the α-Gaussian mean kernel. It allows one to weight the influence of the covariance matrix when comparing two Gaussian distributions. This kernel is introduced in support vector machines for the supervised classification of grasslands from southwest France. A dense intra-annual multispectral time series of the Formosat-2 satellite is used for the classification of grasslands' management practices, while an inter-annual NDVI time series of Formosat-2 is used for discriminating old and young grasslands. Results are compared to other existing pixel- and object-based approaches in terms of classification accuracy and processing time. The proposed method is shown to be a good compromise between processing speed and classification accuracy. It can adapt to the classification constraints, and it encompasses several similarity measures known in the literature. It is appropriate for the classification of small and heterogeneous objects such as grasslands.

  16. Tree Species Classification in Temperate Forests Using Formosat-2 Satellite Image Time Series

    Directory of Open Access Journals (Sweden)

    David Sheeren

    2016-09-01

    Mapping forest composition is a major concern for forest management, biodiversity assessment and for understanding the potential impacts of climate change on tree species distribution. In this study, the suitability of a dense high spatial resolution multispectral Formosat-2 satellite image time series (SITS) to discriminate tree species in temperate forests is investigated. Based on a 17-date SITS acquired across one year, thirteen major tree species (8 broadleaves and 5 conifers) are classified in a study area of southwest France. The performance of parametric (GMM) and nonparametric (k-NN, RF, SVM) methods are compared at three class hierarchy levels for different versions of the SITS: (i) a smoothed noise-free version based on the Whittaker smoother; (ii) a non-smoothed cloudy version including all the dates; (iii) a non-smoothed noise-free version including only 14 dates. Noise refers to pixels contaminated by clouds and cloud shadows. The results of the 108 distinct classifications show a very high suitability of the SITS to identify the forest tree species based on phenological differences (average κ = 0.93 estimated by cross-validation based on 1235 field-collected plots). SVM is found to be the best classifier with very close results from the other classifiers. No clear benefit of removing noise by smoothing can be observed. Classification accuracy is even improved using the non-smoothed cloudy version of the SITS compared to the 14 cloud-free image time series. However, conclusions of the results need to be considered with caution because of possible overfitting. Disagreements also appear between the maps produced by the classifiers for complex mixed forests, suggesting a higher classification uncertainty in these contexts. Our findings suggest that time-series data can be a good alternative to hyperspectral data for mapping forest types. It also demonstrates the potential contribution of the recently launched Sentinel-2 satellite for
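
    The Whittaker smoother used for version (i) can be sketched in a few lines as an Eilers-style weighted formulation, with cloud-contaminated dates given zero weight; the smoothing parameter is illustrative.

    ```python
    # Weighted Whittaker smoother (Eilers, 2003) for a per-pixel time series.
    import numpy as np
    from scipy.sparse import eye, diags
    from scipy.sparse.linalg import spsolve

    def whittaker_smooth(y, weights, lam=10.0, order=2):
        """y: reflectance/NDVI time series; weights: 1 for clear, 0 for cloudy dates."""
        n = len(y)
        D = eye(n, format="csr")
        for _ in range(order):                 # order-th difference operator
            D = D[1:] - D[:-1]
        W = diags(weights)
        A = W + lam * (D.T @ D)
        return spsolve(A.tocsc(), W @ np.asarray(y, dtype=float))
    ```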

  17. Vegetation classification and quantification by satellite image processing. A case study in north Portugal

    Energy Technology Data Exchange (ETDEWEB)

    Aranha, J.T. [Dept. Florestal, UTAD, 5001-801 Vila Real (Portugal); Viana, H.F. [Instituto Politecnico de Viseu, Escola Superior Agraria, Viseu (Portugal); Rodrigues, R. [Bioflag - Consulting - Santo Tirso (Portugal)

    2008-07-01

    The expected increase in forest biomass demand for energy production calls for expeditious and inexpensive techniques to classify vegetation land cover and evaluate the available biomass likely to be harvested. Satellite image processing and classification, combined with field work, is a suitable tool to achieve these aims. A vegetation index (NDVI) was computed by manipulation of a 2006 Landsat TM image in order to create a general vegetation map. Then, the same image was submitted to a supervised classification process in order to produce a land cover map (overall accuracy of 85%). In a second stage, NDVI values were collected for each sampling plot in order to update the database previously developed with data collected within forestry stands and shrubland. This data merging made it possible to transform the general vegetation map into available biomass within forestry stands and shrubland. The results showed a range of values from 0.25 up to 6.00 dry ton/ha for recent and former burnt areas recovered by young Pinus pinaster (maritime pine) trees and from 2.00 up to 9.00 dry ton/ha for recent and former burnt areas recovered by shrubs (e.g. genista or broom).

  18. Land use/cover classification in the Brazilian Amazon using satellite images.

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  19. Neural network multispectral satellite images classification of volcanic ash plumes in a cloudy scenario

    Directory of Open Access Journals (Sweden)

    Matteo Picchiani

    2015-03-01

    This work shows the potential use of neural networks in the characterization of eruptive events monitored by satellite, through fast and automatic classification of multispectral images. The algorithm has been developed for the MODIS instrument and can easily be extended to other similar sensors. Six classes have been defined, paying particular attention to image regions that represent the different surfaces that could possibly be found under volcanic ash clouds. Complex cloudy scenarios composed of images collected during the Icelandic eruptions of the Eyjafjallajökull (2010) and Grimsvötn (2011) volcanoes have been considered as test cases. A sensitivity analysis on the MODIS TIR and VIS channels has been performed to optimize the algorithm. The neural network has been trained with the first image of the dataset, while the remaining data have been considered as independent validation sets. Finally, the neural network classifier's results have been compared with maps classified with several interactive procedures performed in a consolidated operational framework. This comparison shows that the automatic methodology proposed achieves a very promising performance, showing an overall accuracy greater than 84% for the Eyjafjallajökull event, and equal to 74% for the Grimsvötn event.

  20. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast

    International Nuclear Information System (INIS)

    Escrig, H.; Batlles, F.J.; Alonso, J.; Baena, F.M.; Bosch, J.L.; Salbidegoitia, I.B.; Burgaleta, J.I.

    2013-01-01

    Considering that clouds are the greatest cause of solar radiation blocking, short-term cloud forecasting can help power plant operation and therefore improve benefits. Cloud detection, classification and motion vector determination are key to forecasting sun obstruction by clouds. Geostationary satellites provide cloud information covering wide areas, allowing cloud forecasts to be performed several hours in advance. Herein, the methodology developed and tested in this study is based on multispectral tests and binary cross correlations followed by coherence and quality control tests over the resulting motion vectors. A monthly synthetic surface albedo image and a method to reject erroneous correlation vectors were developed. Cloud classification in terms of opacity and height of cloud top is also performed. A whole-sky camera has been used for validation, showing over 85% agreement between the camera and the satellite-derived cloud cover, whereas the error in motion vectors is below 15%. - Highlights: ► A methodology for detection, classification and movement of clouds is presented. ► METEOSAT satellite images are used to obtain a cloud mask. ► The prediction of cloudiness is estimated with 90% accuracy in overcast conditions. ► Results for partially covered sky conditions showed a 75% accuracy. ► Motion vectors are estimated from the clouds with a success probability of 86%

  1. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes at different scales show different characteristics. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification, taking the Shangri-La area in north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. Firstly, the authors upscaled the two images by cubic convolution and calculated the optimal scale for different objects on the ground shown in the images by variation functions. Then the authors conducted multi-scale superposition classification using Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most objects on the ground, the optimal scale appears at a larger scale rather than the original one. To be specific, water has the largest optimal scale, i.e. around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scale for shadows and flooded land is basically the same as the original one, i.e. 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of the ones from SPOT5 HRG and the GF-1 satellite is 12.84% and 14.76% higher than that of the original multi-spectral images, respectively, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
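
    The "variation functions" used above to pick the optimal scale are semivariograms; a rough sketch of an empirical semivariogram computed along image rows is given below (the maximum lag is an assumption).

    ```python
    # Empirical semivariogram of one spectral band along image rows.
    import numpy as np

    def semivariogram(band, max_lag=30):
        """Returns gamma(h) for lags h = 1..max_lag (in pixels)."""
        band = band.astype(float)
        gammas = []
        for h in range(1, max_lag + 1):
            diff = band[:, h:] - band[:, :-h]
            gammas.append(0.5 * np.mean(diff ** 2))
        return np.array(gammas)
    ```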

  2. A method to incorporate uncertainty in the classification of remote sensing images

    OpenAIRE

    Gonçalves, Luísa M. S.; Fonte, Cidália C.; Júlio, Eduardo N. B. S.; Caetano, Mario

    2009-01-01

    The aim of this paper is to investigate if the incorporation of the uncertainty associated with the classification of surface elements into the classification of landscape units (LUs) increases the results accuracy. To this end, a hybrid classification method is developed, including uncertainty information in the classification of very high spatial resolution multi-spectral satellite images, to obtain a map of LUs. The developed classification methodology includes the following...

  3. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that are needed in a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain the topographically corrected surface reflectance from DN data, and we tested and verified this model with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85 % for the near infrared bands and the overall classification accuracy increased by 14 % after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was reduced after correction.
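
    The paper's own semi-empirical model is not reproduced here; as a reference point, the sketch below implements the classical semi-empirical C-correction (Teillet et al., 1982), assuming a per-pixel illumination angle cos(i) derived from a DEM and the solar zenith angle are available.

    ```python
    # Semi-empirical C-correction as a stand-in for a topographic correction.
    import numpy as np

    def c_correction(band, cos_i, solar_zenith_deg):
        """band: one band (DN or reflectance); cos_i: per-pixel cosine of the
        local illumination angle; returns the topographically corrected band."""
        cos_sz = np.cos(np.deg2rad(solar_zenith_deg))
        # regress band values against cos(i): band = slope * cos(i) + intercept
        slope, intercept = np.polyfit(cos_i.ravel(), band.ravel().astype(float), 1)
        c = intercept / slope
        return band * (cos_sz + c) / (cos_i + c)
    ```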

  4. Incorporating Open Source Data for Bayesian Classification of Urban Land Use From VHR Stereo Images

    NARCIS (Netherlands)

    Li, Mengmeng; De Beurs, Kirsten M.; Stein, Alfred; Bijker, Wietske

    2017-01-01

    This study investigates the incorporation of open source data into a Bayesian classification of urban land use from very high resolution (VHR) stereo satellite images. The adopted classification framework starts from urban land cover classification, proceeds to building-type characterization, and

  5. Satellite image collection optimization

    Science.gov (United States)

    Martin, William

    2002-09-01

    Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical both for satisfying customer orders and for building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time-dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of "what if" adjustments to an image collection plan. Used for both long-range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.

  6. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are initially constructed from the different images of the SITS data and then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for the classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed classification strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are: MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL. The experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that this strategy was able to provide better performance when compared to the standard classification algorithm. The results also showed that the optimization method of the used MKL algorithms affects both the computational time and the classification accuracy of this strategy.
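
    The composite-kernel idea can be sketched as follows: one RBF kernel per acquisition date, combined into a weighted sum and passed to an SVM with a precomputed kernel. Uniform weights stand in here for the weights that the MKL algorithms named above would learn, and the kernel width is an assumption.

    ```python
    # Composite kernel over a satellite image time series, one kernel per date.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def composite_kernel(dates_features, weights=None, gamma=0.1):
        """dates_features: list of (n_samples, n_bands) arrays, one per date."""
        if weights is None:
            weights = np.full(len(dates_features), 1.0 / len(dates_features))
        return sum(w * rbf_kernel(X, X, gamma=gamma)
                   for w, X in zip(weights, dates_features))

    # training: clf = SVC(kernel="precomputed").fit(composite_kernel(train_dates), y_train)
    ```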

  7. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Focused on the issue that conventional remote sensing image classification methods have run into bottlenecks in accuracy, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built through stacked layers of Denoising Autoencoders. Then, with noised input, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn for more robust expression; characteristics are obtained in supervised learning by a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation, and the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than those of the Support Vector Machine and the Back Propagation neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
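
    A minimal PyTorch sketch of a single denoising-autoencoder layer of the kind stacked above; the layer sizes, noise level and optimizer settings are illustrative assumptions.

    ```python
    # One denoising-autoencoder layer: corrupt the input, reconstruct the clean version.
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, n_in, n_hidden, noise_std=0.1):
            super().__init__()
            self.noise_std = noise_std
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decoder = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            noisy = x + self.noise_std * torch.randn_like(x)   # corrupt the input
            return self.decoder(self.encoder(noisy))

    dae = DenoisingAutoencoder(n_in=4, n_hidden=32)            # e.g. 4 spectral bands
    optim = torch.optim.Adam(dae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # one unsupervised training step on a batch of pixel spectra `x`:
    # loss = loss_fn(dae(x), x); loss.backward(); optim.step()
    ```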

  8. Mapping urban impervious surface using object-based image analysis with WorldView-3 satellite imagery

    Science.gov (United States)

    Iabchoon, Sanwit; Wongsai, Sangdao; Chankon, Kanoksuk

    2017-10-01

    Land use and land cover (LULC) data are important to monitor and assess environmental change. LULC classification using satellite images is a method widely used on global and local scales. In particular, urban areas, which contain various LULC types, are important components of the urban landscape and ecosystem. This study aims to classify urban LULC using WorldView-3 (WV-3) very high spatial resolution satellite imagery and the object-based image analysis method. A decision rule set was applied to classify the WV-3 images in Kathu subdistrict, Phuket province, Thailand. The main steps were as follows: (1) the image was ortho-rectified with ground control points and using the digital elevation model, (2) multiscale image segmentation was applied to divide the image from pixel level into image object level, (3) a decision rule set for LULC classification was developed using spectral bands, spectral indices, and spatial and contextual information, and (4) accuracy assessment was computed using testing data, which were sampled by statistical random sampling. The results show that seven LULC classes (water, vegetation, open space, road, residential, building, and bare soil) were successfully classified with an overall classification accuracy of 94.14% and a kappa coefficient of 92.91%.
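
    Step (3) can be illustrated with a toy rule set over per-object mean features; the indices are standard (NDVI, NDWI) but the thresholds and the reduced class list are assumptions, not the paper's rules.

    ```python
    # Toy decision-rule classifier over per-object features.
    def classify_object(obj):
        """obj: dict of per-object mean features, e.g.
        {'ndvi': 0.5, 'ndwi': -0.2, 'brightness': 0.3, 'elongation': 4.1}."""
        if obj["ndwi"] > 0.2:
            return "water"
        if obj["ndvi"] > 0.4:
            return "vegetation"
        if obj["elongation"] > 3.0:
            return "road"
        if obj["brightness"] > 0.35:
            return "building"
        return "bare soil / open space"
    ```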

  9. A hierarchical approach of hybrid image classification for land use and land cover mapping

    Directory of Open Access Journals (Sweden)

    Rahdari Vahid

    2018-01-01

    Remote sensing data analysis can provide thematic maps describing land use and land cover (LULC) in a short period. Using a proper image classification method in an area is important to overcome the possible limitations of satellite imagery for producing land-use and land-cover maps. In the present study, a hierarchical hybrid image classification method was used to produce LULC maps using Landsat Thematic Mapper (TM) imagery for the year 1998 and Operational Land Imager (OLI) imagery for the year 2016. Images were classified using the proposed hybrid image classification method, a vegetation cover crown percentage map from the normalized difference vegetation index, Fisher supervised classification, and object-based image classification methods. Accuracy assessment results showed that the hybrid classification method produced maps with a total accuracy of up to 84 percent and a kappa statistic value of 0.81. The results of this study showed that the proposed classification method worked better with the OLI sensor than with TM. Although OLI has a higher radiometric resolution than TM, the LULC map produced using TM is almost as accurate as that from OLI, because of the LULC definitions and image classification methods used.

  10. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    Catherine Mering

    2011-05-01

    A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes such as steppian bushes of semi-arid regions and recently growing urban settlements are especially concerned by this study. Spatial organisations of the land cover are quantified by means of the size distribution analysis of the land cover units extracted from high resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings some advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one peculiar category of the landscape.

  11. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations of artificial neural networks in satellite image and geographic information processing.

  12. Land Cover Classification Using Integrated Spectral, Temporal, and Spatial Features Derived from Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Yongguang Zhai

    2018-03-01

    Obtaining accurate and timely land cover information is an important topic in many remote sensing applications. Using satellite image time-series data should make it possible to achieve high-accuracy land cover classification. However, most satellite image time-series classification methods do not fully exploit the available data for mining the effective features needed to identify different land cover types. Therefore, a classification method that can take full advantage of the rich information provided by time-series data to improve the accuracy of land cover classification is needed. In this paper, a novel method for time-series land cover classification using spectral, temporal, and spatial information at an annual scale is introduced. Based on all the available data from time-series remote sensing images, a refined nonlinear dimensionality reduction method is used to extract the spectral and temporal features, and a modified graph segmentation method is used to extract the spatial features. The proposed classification method was applied in three study areas with land cover complexity: Illinois, South Dakota, and Texas. All the Landsat time-series data from 2014 were used, and the different study areas have different amounts of invalid data. A series of comparative experiments were conducted on the annual time-series images using training data generated from the Cropland Data Layer. The results demonstrated higher overall and per-class classification accuracies and kappa index values using the proposed spectral-temporal-spatial method compared to spectral-temporal classification methods. We also discuss the implications of this study and possibilities for future applications and developments of the method.

  13. V-SIPAL - A VIRTUAL LABORATORY FOR SATELLITE IMAGE PROCESSING AND ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. M. Buddhiraju

    2012-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources and user feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.

  14. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images makes significant sense for the management and estimation of agriculture. Due to the complexity and fragmentation of the features and the surroundings in high-resolution circumstances, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. In this case, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of the GF-1 high-resolution satellite of China. In the experiment, through training and testing of the CNN under the deep learning toolbox of MATLAB, the crop classification finally reached a correct rate of 99.66 % after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the applications of CNN provide a reference value for the field of remote sensing in PA.

  15. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Houston data" and the multispectral image "Washington DC data" show that this new scheme can achieve better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has huge potential to achieve higher classification accuracy in a short running time.

  16. Analysis On Land Cover In Municipality Of Malang With Landsat 8 Image Through Unsupervised Classification

    Science.gov (United States)

    Nahari, R. V.; Alfita, R.

    2018-01-01

    Remote sensing technology has been widely used in geographic information systems in order to obtain data more quickly, accurately and affordably. One of the advantages of using remote sensing imagery (satellite imagery) is to analyze land cover and land use. The satellite image data used in this study were images from the Landsat 8 satellite combined with data from the Municipality of Malang government. The satellite image was taken in July 2016. Furthermore, the method used in this study was unsupervised classification. Based on the analysis of the satellite images and field observations, 29% of the land in the Municipality of Malang was plantation, 22% of the area was rice field, 12% was residential area, 10% was land with shrubs, and the remaining 2% was water (lake/reservoir). The shortcoming of the method was that 25% of the land in the area was unidentified because it was covered by cloud. It is expected that future researchers will involve cloud removal processing to minimize the unidentified area.

  17. LAKE ICE DETECTION IN LOW-RESOLUTION OPTICAL SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Tom

    2018-05-01

    Monitoring and analyzing the (decreasing) trends in lake freezing provides important information for climate research. Multi-temporal satellite images are a natural data source to survey ice on lakes. In this paper, we describe a method for lake ice monitoring, which uses low spatial resolution (250 m–1000 m) satellite images to determine whether a lake is frozen or not. We report results on four selected lakes in Switzerland: Sihl, Sils, Silvaplana and St. Moritz. These lakes have different properties regarding area, altitude, surrounding topography and freezing frequency, describing cases of medium to high difficulty. Digitized Open Street Map (OSM) lake outlines are back-projected on to the image space after generalization. As a pre-processing step, the absolute geolocation error of the lake outlines is corrected by matching the projected outlines to the images. We define the lake ice detection as a two-class (frozen, non-frozen) semantic segmentation problem. Several spectral channels of the multi-spectral satellite data are used, both reflective and emissive (thermal). Only the cloud-free (clean) pixels which lie completely inside the lake are analyzed. The most useful channels to solve the problem are selected with xgboost and visual analysis of histograms of reference data, while the classification is done with non-linear support vector machine (SVM). We show experimentally that this straight-forward approach works well with both MODIS and VIIRS satellite imagery. Moreover, we show that the algorithm produces consistent results when tested on data from multiple winters.
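
    The two-stage recipe (channel ranking with xgboost, then a non-linear SVM on the selected channels) can be sketched as follows; the number of retained channels and the model parameters are assumptions.

    ```python
    # Rank spectral channels with xgboost importances, then train an RBF SVM
    # on the selected channels for the frozen / non-frozen decision.
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def select_and_train(X, y, n_channels=4):
        """X: (n_pixels, n_bands) clean lake pixels; y: 0 = non-frozen, 1 = frozen."""
        ranker = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
        best = np.argsort(ranker.feature_importances_)[::-1][:n_channels]
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        svm.fit(X[:, best], y)
        return best, svm
    ```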

  18. Lake Ice Detection in Low-Resolution Optical Satellite Images

    Science.gov (United States)

    Tom, M.; Kälin, U.; Sütterlin, M.; Baltsavias, E.; Schindler, K.

    2018-05-01

    Monitoring and analyzing the (decreasing) trends in lake freezing provides important information for climate research. Multi-temporal satellite images are a natural data source to survey ice on lakes. In this paper, we describe a method for lake ice monitoring, which uses low spatial resolution (250 m-1000 m) satellite images to determine whether a lake is frozen or not. We report results on four selected lakes in Switzerland: Sihl, Sils, Silvaplana and St. Moritz. These lakes have different properties regarding area, altitude, surrounding topography and freezing frequency, describing cases of medium to high difficulty. Digitized Open Street Map (OSM) lake outlines are back-projected on to the image space after generalization. As a pre-processing step, the absolute geolocation error of the lake outlines is corrected by matching the projected outlines to the images. We define the lake ice detection as a two-class (frozen, non-frozen) semantic segmentation problem. Several spectral channels of the multi-spectral satellite data are used, both reflective and emissive (thermal). Only the cloud-free (clean) pixels which lie completely inside the lake are analyzed. The most useful channels to solve the problem are selected with xgboost and visual analysis of histograms of reference data, while the classification is done with non-linear support vector machine (SVM). We show experimentally that this straight-forward approach works well with both MODIS and VIIRS satellite imagery. Moreover, we show that the algorithm produces consistent results when tested on data from multiple winters.

  19. A Novel Classification Technique of Landsat-8 OLI Image-Based Data Visualization: The Application of Andrews’ Plots and Fuzzy Evidential Reasoning

    Directory of Open Access Journals (Sweden)

    Sornkitja Boonprong

    2017-04-01

    Full Text Available Andrews first proposed an equation to visualize the structures within data in 1972. Since then, this equation has been used for data transformation and visualization in a wide variety of fields. However, it has yet to be applied to satellite image data. The effect of unwanted, or impure, pixels occurring in these data varies with their distribution in the image; the effect is greater if impurity pixels are included in a classifier’s training set. Andrews’ curves enable the interpreter to select outlier or impurity data that can be grouped into a new category for classification. This study overcomes the above-mentioned problem and illustrates the novelty of applying Andrews’ plots to satellite image data, and proposes a robust method for classifying the plots that combines Dempster-Shafer theory with fuzzy set theory. In addition, we present an example, obtained from real satellite images, to demonstrate the application of the proposed classification method. The accuracy and robustness of the proposed method are investigated for different training set sizes and crop types, and are compared with the results of two traditional classification methods. We find that outlier data are easily eliminated by examining Andrews’ curves and that the proposed method significantly outperforms traditional methods when considering the classification accuracy.
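
    For readers unfamiliar with Andrews' 1972 transformation, each feature vector x = (x1, x2, x3, ...) is mapped to the curve f_x(t) = x1/sqrt(2) + x2 sin(t) + x3 cos(t) + x4 sin(2t) + x5 cos(2t) + ... over -pi <= t <= pi; plotting many such curves is what makes outlier (impurity) pixels visually separable, as the abstract describes. A minimal NumPy sketch, with purely illustrative band values, follows.

      import numpy as np

      def andrews_curve(x, t):
          """Evaluate Andrews' function f_x(t) for one feature vector x."""
          f = np.full_like(t, x[0] / np.sqrt(2.0))
          for k, xk in enumerate(x[1:], start=1):
              harmonic = (k + 1) // 2                      # 1, 1, 2, 2, 3, 3, ...
              f += xk * (np.sin if k % 2 == 1 else np.cos)(harmonic * t)
          return f

      t = np.linspace(-np.pi, np.pi, 200)
      pixel = np.array([0.12, 0.25, 0.31, 0.42, 0.55])     # illustrative band values of one pixel
      curve = andrews_curve(pixel, t)                      # plot many such curves to spot outliers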

  20. Textural features for image classification

    Science.gov (United States)

    Haralick, R. M.; Dinstein, I.; Shanmugam, K.

    1973-01-01

    Description of some easily computable textural features based on gray-tone spatial dependencies, and illustration of their application in category-identification tasks of three different kinds of image data - namely, photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used - one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
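
    The gray-tone spatial-dependence (co-occurrence) matrix underlying these features can be computed in a few lines. The sketch below builds one normalised co-occurrence matrix for a single displacement and derives two of the classic features, contrast and angular second moment; the quantisation level, displacement and the random test window are arbitrary choices rather than those of the original experiments.

      import numpy as np

      def glcm(img, dx=1, dy=0, levels=8):
          """Normalised gray-tone co-occurrence matrix for one displacement (dx, dy >= 0)."""
          q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
          q = np.clip(q, 0, levels - 1)                # quantise to `levels` gray tones
          a = q[:q.shape[0] - dy, :q.shape[1] - dx]    # pixel (r, c)
          b = q[dy:, dx:]                              # neighbour (r + dy, c + dx)
          P = np.zeros((levels, levels))
          np.add.at(P, (a.ravel(), b.ravel()), 1)      # count co-occurring gray-tone pairs
          return P / P.sum()

      def contrast(P):
          i, j = np.indices(P.shape)
          return ((i - j) ** 2 * P).sum()

      def angular_second_moment(P):
          return (P ** 2).sum()

      window = np.random.default_rng(0).integers(0, 256, (64, 64))   # stand-in image window
      P = glcm(window, dx=1, dy=0)
      features = [contrast(P), angular_second_moment(P)]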

  1. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    Science.gov (United States)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environments such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons. Extensive data are required to generalise assessments of classification accuracy from case studies, which has proven difficult. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to the development of a model of water transparency and the mapping of depth limits and indicated the possibility for seagrass density mapping under certain ideal conditions. The results show that modelling satellite images is useful in evaluating the accuracy of classification and that establishing seagrass bed monitoring by remote sensing is a reliable tool.
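
    One common way to build such a simulator is the semi-analytical shallow-water approximation in which the at-sensor signal relaxes from the bottom reflectance toward the optically deep value as depth and attenuation increase. The sketch below uses that standard textbook form purely as an illustration; it is an assumption, not necessarily the exact radiative transfer model implemented by the authors.

      import numpy as np

      def shallow_water_reflectance(bottom_albedo, deep_reflectance, k_d, depth):
          """Two-flow approximation: the bottom contribution decays as exp(-2 * k_d * depth),
          so low transparency (large k_d) or deep water hides the seagrass signal."""
          return deep_reflectance + (bottom_albedo - deep_reflectance) * np.exp(-2.0 * k_d * depth)

      # Compare a dense seagrass bottom (low albedo) with bare sand at several depths
      depths = np.linspace(0.0, 20.0, 5)
      r_seagrass = shallow_water_reflectance(0.05, 0.02, k_d=0.15, depth=depths)
      r_sand = shallow_water_reflectance(0.30, 0.02, k_d=0.15, depth=depths)
      detectable = np.abs(r_sand - r_seagrass) > 0.01      # toy detectability criterion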

  2. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  3. Shadow imaging of geosynchronous satellites

    Science.gov (United States)

    Douglas, Dennis Michael

    Geosynchronous (GEO) satellites are essential for modern communication networks. If communication to a GEO satellite is lost and a malfunction occurs upon orbit insertion, such as a solar panel not deploying, there is no direct way to observe it from Earth. Due to the GEO orbit distance of ~36,000 km from Earth's surface, the Rayleigh criterion dictates that a 14 m telescope is required to conventionally image a satellite with spatial resolution down to 1 m using visible light. Furthermore, a telescope larger than 30 m is required under ideal conditions to obtain spatial resolution down to 0.4 m. This dissertation evaluates a method for obtaining high spatial resolution images of GEO satellites from an Earth based system by measuring the irradiance distribution on the ground resulting from the occultation of the satellite passing in front of a star. The representative size of a GEO satellite combined with the orbital distance results in the ground shadow being consistent with a Fresnel diffraction pattern when observed at visible wavelengths. A measurement of the ground shadow irradiance is used as an amplitude constraint in a Gerchberg-Saxton phase retrieval algorithm that produces a reconstruction of the satellite's 2D transmission function which is analogous to a reverse contrast image of the satellite. The advantage of shadow imaging is that a terrestrial based redundant set of linearly distributed inexpensive small telescopes, each coupled to high speed detectors, is a more effective resolved imaging system for GEO satellites than a very large telescope under ideal conditions. Modeling and simulation efforts indicate sub-meter spatial resolution can be readily achieved using collection apertures of less than 1 meter in diameter. A mathematical basis is established for the treatment of the physical phenomena involved in the shadow imaging process. This includes the source star brightness and angular extent, and the diffraction of starlight from the satellite
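
    The phase-retrieval step described can be sketched as an error-reduction (Gerchberg-Saxton-style) loop that alternates between the measured shadow amplitude and an object-plane support constraint. In the sketch below a plain FFT stands in for the Fresnel propagator the real problem requires, and the toy support and simulated amplitude are placeholders, so this is an illustration of the iteration only.

      import numpy as np

      def error_reduction(measured_amp, support, n_iter=200, seed=0):
          """Alternate between the measured diffraction-plane amplitude and an
          object-plane support constraint to recover a 2-D transmission function."""
          rng = np.random.default_rng(seed)
          obj = support * rng.random(support.shape)        # random start inside the support
          for _ in range(n_iter):
              G = np.fft.fft2(obj)
              G = measured_amp * np.exp(1j * np.angle(G))  # enforce the measured amplitude
              g = np.fft.ifft2(G)
              obj = np.where(support, np.abs(g), 0.0)      # enforce support / non-negativity
          return obj

      # Toy usage: a binary support and a simulated shadow amplitude (ground measurement stand-in)
      support = np.zeros((128, 128), dtype=bool)
      support[48:80, 40:90] = True
      measured_amp = np.abs(np.fft.fft2(support * 0.8))
      recovered = error_reduction(measured_amp, support)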

  4. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency reflects the way humans see an image, and saliency-based segmentation can eventually be helpful in psychovisual image interpretation. Keeping this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments from the segmented image whose saliency value is greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background, respectively, and the image is thus separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) are used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.
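
    The segment-selection rule described (keep the segments whose saliency is at least a threshold) reduces to a few array operations once a saliency map and a segment-label image are available. The sketch below assumes both are given; using the mean segment saliency as the default threshold is an assumption for illustration, not necessarily the threshold used in this work.

      import numpy as np
      from scipy import ndimage

      def salient_foreground(saliency, segments, threshold=None):
          """Boolean mask of segments whose mean saliency is >= a threshold."""
          labels = np.unique(segments)
          seg_saliency = np.asarray(ndimage.mean(saliency, labels=segments, index=labels))
          if threshold is None:
              threshold = seg_saliency.mean()    # default threshold (an assumption, not the paper's)
          keep = labels[seg_saliency >= threshold]
          return np.isin(segments, keep)         # True = foreground, False = background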

  5. Decision tree approach for classification of remotely sensed satellite ...

    Indian Academy of Sciences (India)

    sensed satellite data using open source support. Richa Sharma .... Decision tree classification techniques have been .... the USGS Earth Resource Observation Systems. (EROS) ... for shallow water, 11% were for sparse and dense built-up ...

  6. Automated Detection of Buildings from Heterogeneous VHR Satellite Images for Rapid Response to Natural Disasters

    Directory of Open Access Journals (Sweden)

    Shaodan Li

    2017-11-01

    Full Text Available In this paper, we present a novel approach for automatically detecting buildings from multiple heterogeneous and uncalibrated very high-resolution (VHR) satellite images for a rapid response to natural disasters. In the proposed method, a simple and efficient visual attention method is first used to extract built-up area candidates (BACs) from each multispectral (MS) satellite image. After this, morphological building indices (MBIs) are extracted from all the masked panchromatic (PAN) and MS images with BACs to characterize the structural features of buildings. Finally, buildings are automatically detected in a hierarchical probabilistic model by fusing the MBI and masked PAN images. The experimental results show that the proposed method is comparable to supervised classification methods in terms of recall, precision and F-value.

  7. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery. Caribbean Journal of Science. 44(2):175-198.

    Science.gov (United States)

    E.H. Helmer; T.A. Kennaway; D.H. Pedreros; M.L. Clark; H. Marcano-Vega; L.L. Tieszen; S.R. Schill; C.M.S. Carrington

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius...

  8. Decision tree approach for classification of remotely sensed satellite

    Indian Academy of Sciences (India)

    DTC) algorithm for classification of remotely sensed satellite data (Landsat TM) using open source support. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, open source ...

  9. Smoothing of Fused Spectral Consistent Satellite Images

    DEFF Research Database (Denmark)

    Sveinsson, Johannes; Aanæs, Henrik; Benediktsson, Jon Atli

    2006-01-01

    on satellite data. Additionally, most conventional methods are loosely connected to the image forming physics of the satellite image, giving these methods an ad hoc feel. Vesteinsson et al. (2005) proposed a method of fusion of satellite images that is based on the properties of imaging physics...

  10. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    Science.gov (United States)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combinations of bands used significantly influenced some of the indices used to assess classification quality in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for mean and angular second moment texture parameters. The optimum window size in these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but the optimum window size was around 15×15. With regard to the grey level, the optimum was 128. Thus, the
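
    The Maximum Likelihood Classification used throughout this comparison is the classical per-class Gaussian rule; a compact NumPy version is sketched below, with class means and covariances estimated from training pixels and each pixel assigned to the class with the highest Gaussian log-likelihood. Stacking the input bands (e.g. R, G, B, NIR plus a texture band) into the feature matrix is assumed to have been done beforehand.

      import numpy as np

      def train_mlc(X, y):
          """Estimate a mean vector and covariance matrix per class from training pixels."""
          return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
                  for c in np.unique(y)}

      def classify_mlc(X, stats):
          """Assign each pixel to the class with the highest Gaussian log-likelihood."""
          classes = sorted(stats)
          scores = []
          for c in classes:
              mu, cov = stats[c]
              inv = np.linalg.inv(cov)
              logdet = np.linalg.slogdet(cov)[1]
              d = X - mu
              scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
          return np.asarray(classes)[np.argmax(scores, axis=0)]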

  11. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the complex background and the presence of multiple land-cover classes or objects make the task difficult with HRS images. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, the convolutional neural network is introduced to learn and characterize the local features at different scales. Then, learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is achieved through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  12. Crop classification based on multi-temporal satellite remote sensing data for agro-advisory services

    Science.gov (United States)

    Karale, Yogita; Mohite, Jayant; Jagyasi, Bhushan

    2014-11-01

    In this paper, we envision the use of satellite images coupled with GIS to obtain location specific crop type information in order to disseminate crop specific advises to the farmers. In our ongoing mKRISHI® project, the accurate information about the field level crop type and acreage will help in the agro-advisory services and supply chain planning and management. The key contribution of this paper is the field level crop classification using multi-temporal images of Landsat-8 acquired from November 2013 to April 2014. The study area chosen is Vani, Maharashtra, India, from where the field level ground truth information for various crops such as grape, wheat, onion, soybean and tomato, along with fodder and fallow fields, has been collected using the mobile application. The ground truth information includes crop type, crop stage and GPS location for 104 farms in the study area with approximate area of 42 hectares. Seven multi-temporal Landsat-8 images were used to compute the vegetation indices, namely the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR) and Difference Vegetation Index (DVI), for the study area. The vegetation indices values of the pixels within a field were then averaged to obtain the field level vegetation indices. For each crop, binary classification has been carried out using a feed-forward neural network operating on the field-level vegetation indices. The classification accuracy for the individual crop was in the range of 74.5% to 97.5% and the overall classification accuracy was found to be 88.49%.
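
    The per-field feature construction and binary crop classification described above can be prototyped as follows; the band names, network size and synthetic data are illustrative assumptions, and scikit-learn's MLPClassifier stands in for the feed-forward network used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def field_indices(red, nir, field_mask):
          """Field-level NDVI, SR and DVI: per-pixel indices averaged over one field's pixels."""
          eps = 1e-9
          ndvi = (nir - red) / (nir + red + eps)
          sr = nir / (red + eps)
          dvi = nir - red
          return np.array([ndvi[field_mask].mean(), sr[field_mask].mean(), dvi[field_mask].mean()])

      # One row per field: 3 indices x 7 acquisition dates = 21 features; synthetic
      # values stand in for the real field-level measurements of the 104 farms.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(104, 21))
      y = rng.integers(0, 2, 104)          # 1 = crop of interest (e.g. grape), 0 = any other field

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X, y)
      training_accuracy = clf.score(X, y)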

  13. AUTOMATIC CLOUD DETECTION FROM MULTI-TEMPORAL SATELLITE IMAGES: TOWARDS THE USE OF PLÉIADES TIME SERIES

    Directory of Open Access Journals (Sweden)

    N. Champion

    2012-08-01

    Full Text Available Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French Mapping Agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and is based on a region-growing procedure. Seeds (corresponding to clouds) are firstly extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images, acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
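
    The two-stage scheme (temporal-difference seeds followed by region growing) can be prototyped compactly. In the sketch below the thresholds and the use of a simple brightness criterion to constrain the growth are simplifying assumptions, not the operational settings of the described approach.

      import numpy as np
      from scipy import ndimage

      def cloud_seeds(stack, diff_thresh=0.3):
          """Seed pixels: reflectance varies strongly between consecutive acquisitions."""
          return np.abs(np.diff(stack, axis=0)).max(axis=0) > diff_thresh

      def grow_clouds(seeds, image, bright_thresh=0.4, max_iter=100):
          """Grow the seed regions, but only into bright (cloud-like) pixels."""
          candidates = image > bright_thresh
          region = seeds & candidates
          for _ in range(max_iter):
              grown = ndimage.binary_dilation(region) & candidates
              if np.array_equal(grown, region):
                  break
              region = grown
          return region

      # stack: (n_dates, H, W) co-registered reflectance time series; image: the date to clean
      # clouds = grow_clouds(cloud_seeds(stack), stack[-1])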

  14. Determination of the Impact of Urbanization on Agricultural Lands using Multi-temporal Satellite Sensor Images

    Science.gov (United States)

    Kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2015-12-01

    Throughout history, agricultural activities have been performed close to urban areas. The main reason behind this phenomenon is the need for fast marketing of agricultural produce to urban residents and for financial provision. Thus, using areas near cities for agricultural activities brings the advantage of easy transportation of produce and fast marketing. For decades, heavy migration to cities has directly and negatively affected natural grasslands, forests and agricultural lands. This pressure has caused agricultural lands to be converted into urban areas. Dense urbanization causes an increase in impervious surfaces, heat islands and many other problems, in addition to the destruction of agricultural lands. Considering the negative impacts of urbanization on agricultural lands and natural resources, periodic monitoring of these changes becomes indisputably important. At this point, satellite images are known to be good data sources for land cover / use change monitoring, owing to their fast data acquisition, large area coverage and temporal resolution. Classification of the satellite images provides thematic land cover / use maps of the earth's surface, and changes can be determined through GIS-based analysis of multi-temporal maps. In this study, the effects of heavy urbanization on agricultural lands in Istanbul, the metropolitan city of Turkey, were investigated using multi-temporal Landsat TM satellite images acquired between 1984 and 2011. Images were geometrically registered to each other and classified using a supervised maximum likelihood classification algorithm. The resulting thematic maps were exported to a GIS environment, and agricultural lands destroyed by urbanization were determined using spatial analysis.

  15. EVALUATING THE POTENTIAL OF SATELLITE HYPERSPECTRAL RESURS-P DATA FOR FOREST SPECIES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    O. Brovkina

    2016-06-01

    Full Text Available Satellite-based hyperspectral sensors provide spectroscopic information in relatively narrow contiguous spectral bands over a large area which can be useful in forestry applications. This study evaluates the potential of satellite hyperspectral Resurs-P data for forest species mapping. Firstly, a comparative study between top of canopy reflectance obtained from the Resurs-P, from the airborne hyperspectral scanner CASI and from field measurement (FieldSpec ASD 4) on selected vegetation cover types is conducted. Secondly, Resurs-P data is tested in classification and verification of different forest species compartments. The results demonstrate that the satellite hyperspectral Resurs-P sensor can produce useful information and shows good performance for forest species classification, comparable both with the forestry map and with the classification from airborne CASI data, but also indicate that developments in pre-processing steps are still required to improve the mapping level.

  16. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  17. Combined Use of Multi-Temporal Optical and Radar Satellite Images for Grassland Monitoring

    Directory of Open Access Journals (Sweden)

    Pauline Dusseux

    2014-06-01

    Full Text Available The aim of this study was to assess the ability of optical images, SAR (Synthetic Aperture Radar) images and the combination of both types of data to discriminate between grasslands and crops in agricultural areas where cloud cover is very high most of the time, which restricts the use of visible and near-infrared satellite data. We compared the performances of variables extracted from four optical and five SAR satellite images with high/very high spatial resolutions acquired during the growing season. A vegetation index, namely the NDVI (Normalized Difference Vegetation Index), and two biophysical variables, the LAI (Leaf Area Index) and the fCOVER (fraction of Vegetation Cover), were computed using the optical time series, and the polarizations (HH, VV, HV, VH), the polarization ratio and polarimetric decompositions (Freeman–Durden and Cloude–Pottier) were calculated using the SAR time series. Then, variables derived from optical, SAR and both types of remotely-sensed data were successively classified using the Support Vector Machine (SVM) technique. The results show that the classification accuracy of SAR variables is higher than that obtained using optical data (0.98 compared to 0.81). They also highlight that the combination of optical and SAR time series data is of prime interest to discriminate grasslands from crops, allowing an improved classification accuracy.

  18. Extending a field-based Sonoran desert vegetation classification to a regional scale using optical and microwave satellite imagery

    Science.gov (United States)

    Shupe, Scott Marshall

    2000-10-01

    Vegetation mapping in arid regions facilitates ecological studies, land management, and provides a record to which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in Southwest Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships were of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers

  19. Texture-based classification for characterizing regions on remote sensing images

    Science.gov (United States)

    Borne, Frédéric; Viennois, Gaëlle

    2017-07-01

    Remote sensing classification methods mostly use only the physical properties of pixels or complex texture indexes but do not lead to recommendations for practical applications. Our objective was to design a texture-based method, called the Paysages A PRIori method (PAPRI), which works both at pixel and neighborhood level and which can handle different spatial scales of analysis. The aim was to stay close to the logic of a human expert and to deal with co-occurrences in a more efficient way than other methods. The PAPRI method is pixelwise and based on a comparison of statistical and spatial reference properties provided by the expert with local properties computed in varying size windows centered on the pixel. A specific distance is computed for different windows around the pixel and a local minimum leads to choosing the class in which the pixel is to be placed. The PAPRI method brings a significant improvement in classification quality for different kinds of images, including aerial, lidar, high-resolution satellite images as well as texture images from the Brodatz and Vistex databases. This work shows the importance of texture analysis in understanding remote sensing images and for future developments.

  20. Performance Evaluation of Machine Learning Algorithms for Urban Pattern Recognition from Multi-spectral Satellite Images

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2014-03-01

    Full Text Available In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, Quickbird, Ikonos) multi-spectral satellite images is presented. The study aims at exploring the potential of machine learning algorithms in the context of an object-based image analysis and to thoroughly test the algorithm’s performance under varying conditions to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assess the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector and the effect of image segmentation on the classification accuracy is evaluated.
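
    All four algorithms named above have drop-in equivalents in scikit-learn, which makes this kind of benchmarking easy to reproduce on per-object feature vectors; in the sketch below the synthetic data and hyper-parameters are placeholders, and a Random Forest stands in for the Random Trees implementation used in the study.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      # Synthetic stand-in for per-object feature vectors (spectral, shape and texture statistics)
      X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)

      classifiers = {
          "Normal Bayes": GaussianNB(),
          "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
          "Random Trees (Random Forest)": RandomForestClassifier(n_estimators=200, random_state=0),
          "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
      }
      for name, clf in classifiers.items():
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")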

  1. Application of Object Based Classification and High Resolution Satellite Imagery for Savanna Ecosystem Analysis

    Directory of Open Access Journals (Sweden)

    Jane Southworth

    2010-12-01

    Full Text Available Savanna ecosystems are an important component of dryland regions and yet are exceedingly difficult to study using satellite imagery. Savannas are composed of varying amounts of trees, shrubs and grasses, and typically traditional classification schemes or vegetation indices cannot differentiate across class type. This research utilizes object-based classification (OBC) for a region in Namibia, using IKONOS imagery, to help differentiate tree canopies, and therefore woodland savanna, from shrub or grasslands. The methodology involved the identification and isolation of tree canopies within the imagery, and the resulting tree polygon layers had an overall accuracy of 84%. In addition, the results were scaled up to a corresponding Landsat image of the same region, and the OBC results were compared to corresponding pixel values of NDVI. The results were not compelling, indicating once more the problems of these traditional image analysis techniques for savanna ecosystems. Overall, the use of the OBC holds great promise for this ecosystem and could be utilized more frequently in studies of vegetation structure.

  2. Classification of iconic images

    OpenAIRE

    Zrianina, Mariia; Kopf, Stephan

    2016-01-01

    Iconic images represent an abstract topic and use a presentation that is intuitively understood within a certain cultural context. For example, the abstract topic “global warming” may be represented by a polar bear standing alone on an ice floe. Such images are widely used in media and their automatic classification can help to identify high-level semantic concepts. This paper presents a system for the classification of iconic images. It uses a variation of the Bag of Visual Words approach wi...

  3. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image was divided into sub-images and their features were extracted to perform classification between clouds and ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have serious limitations. In view of this problem, a prior enhancement of the original sub-images before classification was put forward in this paper to widen the texture difference between clouds and surfaces. Afterwards, with the second moment and first difference of the images, the feature vectors were extended in multi-scale space, and the cloud proportion in the image was then estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experimental results indicate that this algorithm is capable of improving the accuracy of cloud detection significantly.

  4. An Object-Based Image Analysis Approach for Detecting Penguin Guano in very High Spatial Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Chandi Witharana

    2016-04-01

    Full Text Available The logistical challenges of Antarctic field work and the increasing availability of very high resolution commercial imagery have driven an interest in more efficient search and classification of remotely sensed imagery. This exploratory study employed geographic object-based analysis (GEOBIA) methods to classify guano stains, indicative of chinstrap and Adélie penguin breeding areas, from very high spatial resolution (VHSR) satellite imagery and closely examined the transferability of knowledge-based GEOBIA rules across different study sites focusing on the same semantic class. We systematically gauged the segmentation quality, classification accuracy, and the reproducibility of fuzzy rules. A master ruleset was developed based on one study site and it was re-tasked “without adaptation” and “with adaptation” on candidate image scenes comprising guano stains. Our results suggest that object-based methods incorporating the spectral, textural, spatial, and contextual characteristics of guano are capable of successfully detecting guano stains. Reapplication of the master ruleset on candidate scenes without modifications produced inferior classification results, while adapted rules produced comparable or superior results compared to the reference image. This work provides a road map to an operational “image-to-assessment pipeline” that will enable Antarctic wildlife researchers to seamlessly integrate VHSR imagery into on-demand penguin population census.

  5. Automatic Classification of High Resolution Satellite Imagery - a Case Study for Urban Areas in the Kingdom of Saudi Arabia

    Science.gov (United States)

    Maas, A.; Alrajhi, M.; Alobeid, A.; Heipke, C.

    2017-05-01

    Updating topographic geospatial databases is often performed based on current remotely sensed images. To automatically extract the object information (labels) from the images, supervised classifiers are being employed. Decisions to be taken in this process concern the definition of the classes which should be recognised, the features to describe each class and the training data necessary in the learning part of classification. With a view to large scale topographic databases for fast developing urban areas in the Kingdom of Saudi Arabia we conducted a case study, which investigated the following two questions: (a) which set of features is best suited for the classification?; (b) what is the added value of height information, e.g. derived from stereo imagery? Using stereoscopic GeoEye and Ikonos satellite data we investigate these two questions based on our research on label tolerant classification using logistic regression and partly incorrect training data. We show that between five and ten features can be recommended to obtain a stable solution, that height information consistently yields an improved overall classification accuracy of about 5%, and that label noise can be successfully modelled and thus only marginally influences the classification results.

  6. Classification of boreal forest by satellite and inventory data using neural network approach

    Science.gov (United States)

    Romanov, A. A.

    2012-12-01

    The main objective of this research was to develop a methodology for boreal (Siberian Taiga) land cover classification with a high level of accuracy. The study area covers several parts of Central Siberia along the Yenisei River (60-62 degrees North Latitude): the right bank includes mixed forest and dark taiga, the left, pine forests; these were taken as highly heterogeneous but statistically comparable surfaces in terms of spectral characteristics. Two main types of data were used: time series of medium spatial resolution satellite images (Landsat 5, 7 and SPOT4) and inventory datasets from field surveys (used for training sample preparation). The method of collecting field datasets included a short botanical description (type/species of vegetation, density, compactness of the crowns, individual height and max/min diameters representative of each type, surface altitude of the plot); at the same time, the geometric extent of each training sample unit corresponded to the spatial resolution of the satellite images and was geo-referenced (datasets were prepared both for preliminary processing and for verification). The network of test plots was planned as irregular and determined by a landscape-oriented approach. The main focus of the thematic data processing was the use of neural networks (including fuzzy logic); accordingly, the results of the field studies were converted into input parameters describing the type/species of vegetation cover of each unit and its degree of variability. The proposed approach involves processing the time series separately for each image, mainly for verification: acquisition parameters (time, albedo) are taken into consideration and are thus expected to help assess the quality of mapping. The input variables for the networks were sensor bands, surface altitude, solar angles and land surface temperature (for a few experiments); attention was also given to the formation of the class formula on the basis of statistical pre-processing of results of

  7. Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach

    Science.gov (United States)

    Taufik, Afirah; Sakinah Syed Ahmad, Sharifah

    2016-06-01

    The aim of this paper is to propose a method to classify the land covers of a satellite image based on a fuzzy rule-based system approach. The study uses bands of Landsat 8 and other indices, such as the Normalized Difference Water Index (NDWI), Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI), as input for the fuzzy inference system. The three selected indices represent our three main classes, called water, built-up land, and vegetation. The combination of the original multispectral bands and selected indices provides more information about the image. The parameter selection of the fuzzy memberships is performed using a supervised method known as ANFIS (Adaptive Neuro-Fuzzy Inference System) training. The fuzzy system is tested for classification on a land cover image that covers the Klang Valley area. The results showed that the fuzzy system approach is effective and can be explored and implemented for other areas of Landsat data.
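
    The index inputs to the fuzzy inference system are simple normalised band ratios of Landsat 8 OLI bands. The sketch below computes them and applies hand-written triangular memberships as a stand-in for the ANFIS-tuned membership functions; the breakpoints shown are purely illustrative assumptions.

      import numpy as np

      def l8_indices(green, red, nir, swir1):
          """NDWI, NDVI and NDBI from Landsat 8 bands 3 (green), 4 (red), 5 (NIR) and 6 (SWIR1)."""
          eps = 1e-9
          ndwi = (green - nir) / (green + nir + eps)
          ndvi = (nir - red) / (nir + red + eps)
          ndbi = (swir1 - nir) / (swir1 + nir + eps)
          return ndwi, ndvi, ndbi

      def trimf(x, a, b, c):
          """Triangular fuzzy membership function with breakpoints a <= b <= c."""
          eps = 1e-9
          return np.clip(np.minimum((x - a) / (b - a + eps), (c - x) / (c - b + eps)), 0.0, 1.0)

      # Illustrative memberships and decision rule (the paper tunes the memberships with ANFIS):
      # ndwi, ndvi, ndbi = l8_indices(b3, b4, b5, b6)
      # mu_water = trimf(ndwi, 0.0, 0.5, 1.0)
      # mu_built = trimf(ndbi, 0.0, 0.4, 1.0)
      # mu_veg   = trimf(ndvi, 0.2, 0.6, 1.0)
      # label = np.argmax(np.stack([mu_water, mu_built, mu_veg]), axis=0)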

  8. Egypt satellite images for land surface characterization

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay

    images used for mapping the vegetation cover types and other land cover types in Egypt. The mapping ranges from 1 km resolution to 30 m resolution. The aim is to provide satellite image mapping with land surface characteristics relevant for roughness mapping.......Satellite images provide information on the land surface properties. From optical remote sensing images in the blue, green, red and near-infrared part of the electromagnetic spectrum it is possible to identify a large number of surface features. The report briefly describes different satellite...

  9. Spatial Data Exploring by Satellite Image Distributed Processing

    Science.gov (United States)

    Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.

    2012-04-01

    Our society's needs and environmental predictions encourage the development of applications oriented towards supervising and analyzing different Earth Science related phenomena. Satellite images could be explored for discovering information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environment related data could be acquired by imagery classification consisting of data mining throughout the multispectral bands. The process takes into account a large set of variables such as satellite image types (e.g. MODIS, Landsat), particular geographic area, soil composition, vegetation cover, and generally the context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for the appropriate solutions, and high-power computation resources. The research concerns experiments on solutions for using flexible and visual descriptions of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid-based implementation of the GreenLand application. The GreenLand application development is based on simple, but powerful, notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS [1] library as well as many other image-related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms, without having any prior knowledge of any programming language or application commands

  10. Towards an Automatic Framework for Urban Settlement Mapping from Satellite Images: Applications of Geo-referenced Social Media and One Class Classification

    Science.gov (United States)

    Miao, Zelang

    2017-04-01

    Currently, urban dwellers comprise more than half of the world's population and this percentage is still dramatically increasing. The explosive urban growth over the next two decades poses a long-term, profound impact on people as well as the environment. Accurate and up-to-date delineation of urban settlements plays a fundamental role in defining planning strategies and in supporting sustainable development of urban settlements. In order to provide adequate data about urban extents and land covers, classifying satellite data has become a common practice, usually with accurate enough results. Indeed, a number of supervised learning methods have proven effective in urban area classification, but they usually depend on a large amount of training samples, whose collection is a time and labor expensive task. This issue becomes particularly serious when classifying large areas at the regional/global level. As an alternative to manual ground truth collection, in this work we use geo-referenced social media data. Cities and densely populated areas are extremely fertile ground for the production of individual geo-referenced data (such as GPS and social network data). Training samples derived from geo-referenced social media have several advantages: they are easy to collect; they are usually freely exploitable; and, finally, data from social media are spatially available in many locations, and without doubt in most urban areas around the world. Despite these advantages, the selection of training samples from social media meets two challenges: 1) there are many duplicated points; 2) a method is required to automatically label them as "urban/non-urban". The objective of this research is to validate automatic sample selection from geo-referenced social media and its applicability in one class classification for urban extent mapping from satellite images. The findings in this study shed new light on social media applications in the field of remote sensing.
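
    With urban-labelled samples harvested from geo-referenced social media, the mapping step becomes a textbook one-class problem. The sketch below uses scikit-learn's OneClassSVM on synthetic feature vectors as a stand-in for the real spectral/textural features and the social-media-derived training set; the parameters are illustrative only.

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(1)
      X_urban = rng.normal(loc=0.6, scale=0.1, size=(300, 4))   # features at (deduplicated) social-media points
      X_scene = rng.uniform(0.0, 1.0, size=(5000, 4))           # features of every pixel/object to be mapped

      oc = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_urban)
      urban_mask = oc.predict(X_scene) == 1                     # +1 = urban, -1 = non-urban
      urban_fraction = urban_mask.mean()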

  11. Detecting Weather Radar Clutter by Information Fusion With Satellite Images and Numerical Weather Prediction Model Output

    DEFF Research Database (Denmark)

    Bøvith, Thomas; Nielsen, Allan Aasbjerg; Hansen, Lars Kai

    2006-01-01

    A method for detecting clutter in weather radar images by information fusion is presented. Radar data, satellite images, and output from a numerical weather prediction model are combined and the radar echoes are classified using supervised classification. The presented method uses indirect...... information on precipitation in the atmosphere from Meteosat-8 multispectral images and near-surface temperature estimates from the DMI-HIRLAM-S05 numerical weather prediction model. Alternatively, an operational nowcasting product called 'Precipitating Clouds' based on Meteosat-8 input is used. A scale...

  12. Classification of Dust Days by Satellite Remotely Sensed Aerosol Products

    Science.gov (United States)

    Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.

    2013-01-01

    Considerable progress in satellite remote sensing (SRS) of dust particles has been seen in the last decade. From an environmental health perspective, such an event detection, after linking it to ground particulate matter (PM) concentrations, can proxy acute exposure to respirable particles of certain properties (i.e. size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter of size smaller than 10 μm) measurements has been suggested, which has been validated by compositional analysis. This scheme requires information regarding ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (Moderate Resolution Imaging Spectroradiometer (MODIS) and Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model that was applied to a database containing ground monitoring (the dependent variable) and SRS aerosol product (the independent variables) records revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression that uses these variables, coded as binary
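
    The CART step can be reproduced with scikit-learn's decision tree on the predictors named in the abstract (calendar month, relative humidity, MODIS AOD, OMI AAI). The synthetic data and the toy labelling rule below are assumptions used only to make the sketch runnable; they do not reflect the real dust-day statistics.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 400
      # Synthetic stand-ins for the predictors: month, relative humidity, MODIS AOD, OMI AAI
      X = np.column_stack([rng.integers(1, 13, n),          # calendar month
                           rng.uniform(10.0, 90.0, n),      # ground-level RH (%)
                           rng.uniform(0.0, 1.5, n),        # aerosol optical depth
                           rng.uniform(-1.0, 3.0, n)])      # aerosol absorbing index
      y = ((X[:, 2] > 0.6) & (X[:, 3] > 1.0)).astype(int)   # toy dust-day rule, for illustration only

      cart = DecisionTreeClassifier(max_depth=3, criterion="gini").fit(X, y)
      print(export_text(cart, feature_names=["month", "RH", "AOD", "AAI"]))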

  13. Satellite images to aircraft in flight. [GOES image transmission feasibility analysis]

    Science.gov (United States)

    Camp, D.; Luers, J. K.; Kadlec, P. W.

    1977-01-01

    A study has been initiated to evaluate the feasibility of transmitting selected GOES images to aircraft in flight. Pertinent observations that could be made from satellite images on board aircraft include jet stream activity, cloud/wind motion, cloud temperatures, tropical storm activity, and location of severe weather. The basic features of the Satellite Aircraft Flight Environment System (SAFES) are described. This system uses East GOES and West GOES satellite images, which are interpreted, enhanced, and then retransmitted to designated aircraft.

  14. Object-oriented classification using quasi-synchronous multispectral images (optical and radar) over agricultural surface

    Science.gov (United States)

    Marais Sicre, Claire; Baup, Frederic; Fieuzal, Remy

    2015-04-01

    In the context of climate change (with consequences on temperature and precipitation patterns), persons involved in agricultural management face the imperative of combining sufficient productivity (in response to the increasing demand for food) with durability of the resources (in order to restrain waste of water and fertilizer, and environmental damage). To this end, a detailed knowledge of land use will improve the management of food and water, while preserving the ecosystems. Among the wide range of available monitoring tools, numerous studies have demonstrated the interest of satellite images for agricultural mapping. Recently, the launch of several radar and optical sensors offers new perspectives for multi-wavelength crop monitoring (Terrasar-X, Radarsat-2, Sentinel-1, Landsat-8…), allowing surface surveys whatever the cloud conditions. Previous studies have demonstrated the interest of using multi-temporal approaches for crop classification, requiring several images for suitable classification results. Unfortunately, these approaches are limited (due to the satellite orbit cycle) and require waiting several days, weeks or months before offering an accurate land use map. The objective of this study is to compare the accuracy of object-oriented classification (random forest algorithm combined with vector layers coming from segmentation) to map winter crops (barley, rapeseed, grasslands and wheat) and soil states (bare soils with different surface roughness) using quasi-synchronous images. Satellite data are composed of multi-frequency and multi-polarization (HH, VV, HV and VH) images acquired near the 14th of April, 2010, over a study area (90 km²) located close to Toulouse in France. This is a region of alluvial plains and hills, mostly under mixed farming and governed by a temperate climate. Remote sensing images are provided by Formosat-2 (04/18), Radarsat-2 (C-band, 04/15), Terrasar-X (X-band, 04/14) and ALOS (L-band, 04/14). Ground data are collected

  15. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.

  16. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services it procures and necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of the urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they necessitate a large amount of training data. In this context, this study proposes to investigate the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified sampling is proposed. The efficiency of these strategies is evaluated on two medium-size French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work make it possible to reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
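
    A typical window-based active learning criterion of the kind compared here is smallest-margin sampling: at each iteration, query the unlabelled windows whose two highest class probabilities are closest. The sketch below shows only that selection step and assumes a trained probabilistic classifier and a feature matrix of unlabelled windows already exist.

      import numpy as np

      def smallest_margin_indices(proba, k):
          """Indices of the k unlabelled windows whose two best class probabilities
          are closest, i.e. where the current classifier is least certain."""
          ordered = np.sort(proba, axis=1)
          margin = ordered[:, -1] - ordered[:, -2]
          return np.argsort(margin)[:k]

      # proba = clf.predict_proba(X_unlabelled)       # clf: any probabilistic scikit-learn classifier
      # query = smallest_margin_indices(proba, k=10)  # windows to hand to the human annotator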

  17. Real time deforestation detection using ann and satellite images the Amazon rainforest study case

    CERN Document Server

    Nunes Kehl, Thiago; Roberto Veronez, Maurício; Cesar Cazella, Silvio

    2015-01-01

    The foremost aim of the present study was the development of a tool to detect daily deforestation in the Amazon rainforest, using satellite images from the MODIS/TERRA sensor and Artificial Neural Networks. The developed tool provides parameterization of the configuration for the neural network training to enable us to select the best neural architecture to address the problem. The tool makes use of confusion matrices to determine the degree of success of the network. A spectrum-temporal analysis of the study area was done on 57 images from May 20 to July 15, 2003 using the trained neural network. The analysis enabled verification of quality of the implemented neural network classification and also aided in understanding the dynamics of deforestation in the Amazon rainforest, thereby highlighting the vast potential of neural networks for image classification. However, the complex task of detection of predatory actions at the beginning, i.e., generation of consistent alarms, instead of false alarms has not bee...

  18. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  19. Classification of JERS-1 Image Mosaic of Central Africa Using A Supervised Multiscale Classifier of Texture Features

    Science.gov (United States)

    Saatchi, Sassan; DeGrandi, Franco; Simard, Marc; Podest, Erika

    1999-01-01

    In this paper, a multiscale approach is introduced to classify the Japanese Research Satellite-1 (JERS-1) mosaic image over the Central African rainforest. A series of texture maps are generated from the 100 m mosaic image at various scales. Using a quadtree model and relating classes at each scale by a Markovian relationship, the multiscale images are classified from coarse to finer scales. The results are verified at various scales and the evolution of classification is monitored by calculating the error at each stage.

  20. ROOF TYPE SELECTION BASED ON PATCH-BASED CLASSIFICATION USING DEEP LEARNING FOR HIGH RESOLUTION SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2017-05-01

    Full Text Available 3D building reconstruction from satellite remote sensing images is still an active research topic and very valuable for 3D city modelling. The roof model is the most important component for reconstructing a building at Level of Detail 2 (LoD2) in 3D modelling. While the general solution for roof modelling relies on detailed cues (such as lines, corners and planes) extracted from a Digital Surface Model (DSM), the correct detection of the roof type and its modelling can fail due to the low quality of DSMs generated by dense stereo matching. To reduce the dependency of roof modelling on DSMs, pansharpened satellite images are used in addition as a rich source of information. In this paper, two strategies are employed for roof type classification. In the first, building roof types are classified within a state-of-the-art supervised, pre-trained convolutional neural network (CNN) framework. In the second strategy, deep features are extracted from deep layers of different pre-trained CNN models and an SVM with an RBF kernel is employed to classify the building roof type. Based on the roof complexity of the scene, a roof library including seven roof types is defined. A new semi-automatic method is proposed to generate training and test patches for each roof type in the library. Using a pre-trained CNN model not only decreases the training time significantly but also increases the classification accuracy.
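
    A minimal sketch of the second strategy (deep features from a pre-trained CNN classified by an RBF-kernel SVM) is given below. It assumes a recent torchvision with the weights API and uses ResNet-18 as a stand-in for the pre-trained models discussed in the paper; the roof patches and labels are random placeholders.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained backbone with the final classification layer removed
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def deep_features(patches):
    """patches: float tensor (N, 3, 224, 224), ImageNet-normalised roof patches."""
    with torch.no_grad():
        return backbone(patches).numpy()

# Hypothetical roof patches and labels (seven roof types in the paper's library)
patches = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 7, (64,)).numpy()

X = deep_features(patches)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.predict(X[:5]))
```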

  1. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... on characterizing human faces and emphysema disease in lung CT images....

  2. NEAR REAL-TIME AUTOMATIC MARINE VESSEL DETECTION ON OPTICAL SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    G. Máttyus

    2013-05-01

    Full Text Available Vessel monitoring and surveillance is important for maritime safety and security, environmental protection and border control. Ship monitoring systems based on Synthetic Aperture Radar (SAR) satellite images are operational. On SAR images, ships made of metal with sharp edges appear as bright dots and edges, so they can be well distinguished from the water. Since radar is independent of sunlight and can acquire images in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution, thus enabling the detection of smaller vessels and enhancing vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For detection, the classifier is slid along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages, which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally, the position, size (i.e. length and width) and heading of the vessels are extracted from the contours of the vessel. The presented method is parallelized, so it runs fast (in minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.

  3. Near Real-Time Automatic Marine Vessel Detection on Optical Satellite Images

    Science.gov (United States)

    Máttyus, G.

    2013-05-01

    Vessel monitoring and surveillance is important for maritime safety and security, environmental protection and border control. Ship monitoring systems based on Synthetic Aperture Radar (SAR) satellite images are operational. On SAR images, ships made of metal with sharp edges appear as bright dots and edges, so they can be well distinguished from the water. Since radar is independent of sunlight and can acquire images in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution, thus enabling the detection of smaller vessels and enhancing vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For detection, the classifier is slid along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages, which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally, the position, size (i.e. length and width) and heading of the vessels are extracted from the contours of the vessel. The presented method is parallelized, so it runs fast (in minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.
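
    The sliding-window stage can be sketched as follows. The binary vessel/background classifier is assumed to be already trained, the window sizes and score threshold are arbitrary, and the cascade structure and grouping of overlapping detections described in the abstract are omitted.

```python
import numpy as np

def sliding_window_detect(image, classifier, base_win=32, step=16, scales=(1.0, 1.5, 2.0)):
    """Slide windows of several sizes over the image and keep high-scoring ones.

    `classifier(window) -> score in [0, 1]` is assumed to be already trained on
    vessel/background samples. Grouping of overlapping detections is omitted.
    """
    detections = []
    rows, cols = image.shape
    for s in scales:
        win = int(base_win * s)
        for r in range(0, rows - win, step):
            for c in range(0, cols - win, step):
                score = classifier(image[r:r + win, c:c + win])
                if score > 0.9:
                    detections.append((r, c, win, score))
    return detections

# Toy usage with a dummy scorer that fires on bright patches
image = np.random.rand(256, 256)
print(len(sliding_window_detect(image, lambda w: float(w.mean()))))
```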

  4. Analysis and Assessment of Land Use Change in Alexandria, Egypt Using Satellite Images, GIS, and Modelling Techniques

    International Nuclear Information System (INIS)

    Abdou Azaz, L.K.

    2008-01-01

    Alexandria is the second largest urban governorate in Egypt and has seen significant urban growth in its modern and contemporary history. This study investigates the urban growth phenomenon in Alexandria, Egypt, using the integration of remote sensing and GIS. The urban physical expansion and change were detected using Landsat satellite images. The satellite images of the years 1984 and 1993 were first georeferenced, achieving a very small RMSE that provided high accuracy data for satellite image analysis. Then, the images were classified using a tailored classification scheme with accuracies of 93.82% and 95.27% for the 1984 and 1993 images, respectively. This high accuracy enabled detecting land use/land cover changes with high confidence using a post-classification comparison method. One of the most important findings is the loss of cultivated land in favour of urban expansion. If the current loss rates continue, 75% of green lands would be lost by the year 2191. These hazardous rates call for an urban growth management policy that can preserve such valuable resources to achieve sustainable urban development. Modelling techniques can help in defining the scenarios of urban growth. In this study, the SLEUTH urban growth model was applied to predict future urban expansion in Alexandria until the year 2055. The application of this model in Alexandria, Egypt, with its different environmental characteristics, is the first application outside the USA and Europe. The results revealed that future urban growth would continue along the edges of the current urban extent. This means that the cultivated lands in the east and the southeast of the city will decrease. To deal with such a crisis, there is a serious need for a comprehensive urban growth management programme that can be based on the best practices in similar situations.
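
    Post-classification comparison boils down to cross-tabulating two classified rasters of the same grid into a from-to change matrix, as in the sketch below; the class codes and toy rasters are placeholders, not the study's data.

```python
import numpy as np

def change_matrix(class_t1, class_t2, n_classes):
    """Cross-tabulate two classified rasters (same grid) into a from-to change matrix."""
    assert class_t1.shape == class_t2.shape
    idx = class_t1.ravel() * n_classes + class_t2.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)   # rows: earlier date, cols: later date

# Toy example with 3 classes: 0 = urban, 1 = cultivated, 2 = water
rng = np.random.default_rng(2)
t1 = rng.integers(0, 3, size=(100, 100))
t2 = rng.integers(0, 3, size=(100, 100))
m = change_matrix(t1, t2, 3)
print("cultivated -> urban pixels:", m[1, 0])
```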

  5. Landuse change detection in a surface coal mine area using multi-temporal high resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, N.; Duzgun, S.; Kemal Emil, M. [Middle East Technical Univ., Ankara (Turkey). Dept. of Mining Engineering

    2010-07-01

    Changes in the landcover and landuse of a mine area can be caused by surface mining activities, exploitation of ore and stripping and dumping overburden. In order to identify the long-term impacts of mining on the environment and land cover, these changes must be continuously monitored. A facility to regularly observe the progress of surface mining and reclamation is important for effective enforcement of mining and environmental regulations. Remote sensing provides a powerful tool to obtain rigorous data and reduce the need for time-consuming and expensive field measurements. The purpose of this study was to conduct post classification change detection for identifying, quantifying, and analyzing the spatial response of landscape due to surface lignite coal mining activities in Goynuk, Bolu, Turkey, from 2004 to 2008. The paper presented the research algorithm which involved acquiring multi temporal high resolution satellite data; preprocessing the data; performing image classification using maximum likelihood classification algorithm and performing accuracy assessment on the classification results; performing post classification change detection algorithm; and analyzing the results. Specifically, the paper discussed the study area, data and methodology, and image preprocessing using radiometric correction. Image classification and change detection were also discussed. It was concluded that the mine and dump area decreased by 192.5 ha from 2004 to 2008 and was caused by the diminishing reserves in the area and decline in the required production. 5 refs., 2 tabs., 4 figs.

  6. Supervised Classification Performance of Multispectral Images

    OpenAIRE

    Perumal, K.; Bhaskaran, R.

    2010-01-01

    Nowadays government and private agencies use remote sensing imagery for a wide range of applications, from military applications to farm development. The images may be panchromatic, multispectral, hyperspectral or even ultraspectral, amounting to terabytes of data. Remote sensing image classification is one of the most significant applications of remote sensing. A number of image classification algorithms have demonstrated good precision in classifying remote sensing data. But, of late, due to the ...

  7. Accuracy assessment between different image classification ...

    African Journals Online (AJOL)

    What image classification does is to assign each pixel to the land cover and land use type with the most similar spectral signature. However, there are possibilities that different methods or algorithms of image classification applied to the same data set could produce appreciably variant results in the sizes, shapes and areas of ...

  8. Satellite image analysis and a hybrid ESSS/ANN model to forecast solar irradiance in the tropics

    International Nuclear Information System (INIS)

    Dong, Zibo; Yang, Dazhi; Reindl, Thomas; Walsh, Wilfred M.

    2014-01-01

    Highlights: • Satellite image analysis is performed and cloud cover index is classified using self-organizing maps (SOM). • The ESSS model is used to forecast cloud cover index. • Solar irradiance is estimated using multi-layer perceptron (MLP). • The proposed model shows better accuracy than other investigated models. - Abstract: We forecast hourly solar irradiance time series using satellite image analysis and a hybrid exponential smoothing state space (ESSS) model together with artificial neural networks (ANN). Since cloud cover is the major factor affecting solar irradiance, cloud detection and classification are crucial to forecast solar irradiance. Geostationary satellite images provide cloud information, allowing a cloud cover index to be derived and analysed using self-organizing maps (SOM). Owing to the stochastic nature of cloud generation in tropical regions, the ESSS model is used to forecast cloud cover index. Among different models applied in ANN, we favour the multi-layer perceptron (MLP) to derive solar irradiance based on the cloud cover index. This hybrid model has been used to forecast hourly solar irradiance in Singapore and the technique is found to outperform traditional forecasting models
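
    The final ANN stage can be sketched as a small regression network mapping a forecast cloud-cover index (plus, for example, the hour of day) to irradiance. The data, clear-sky curve and network size below are invented placeholders; the SOM and ESSS stages are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Hypothetical training data: forecast cloud-cover index and hour of day -> irradiance
n = 2000
cloud_index = rng.uniform(0, 1, n)
hour = rng.integers(7, 19, n)
clear_sky = 1000 * np.sin(np.pi * (hour - 6) / 13)        # crude clear-sky curve (W/m^2)
irradiance = clear_sky * (1 - 0.75 * cloud_index) + rng.normal(0, 30, n)

X = np.column_stack([cloud_index, hour])
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
mlp.fit(X, irradiance)

print(mlp.predict([[0.2, 12], [0.9, 12]]))  # light vs heavy cloud cover at noon
```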

  9. The Generalized Gamma-DBN for High-Resolution SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Zhiqiang Zhao

    2018-06-01

    Full Text Available With the increase of resolution, effective characterization of synthetic aperture radar (SAR) images becomes one of the most critical problems in many earth observation applications. Inspired by deep learning and probabilistic mixture models, a generalized Gamma deep belief network (gΓ-DBN) is proposed for SAR image statistical modeling and land-cover classification in this work. Specifically, a generalized Gamma-Bernoulli restricted Boltzmann machine (gΓB-RBM) is proposed to capture high-order statistical characteristics from SAR images after introducing the generalized Gamma distribution. After stacking the gΓB-RBM and several standard binary RBMs in a hierarchical manner, a gΓ-DBN is constructed to learn a high-level representation of different SAR land-covers. Finally, a discriminative neural network is constructed by adding an additional prediction layer for the different land-covers on top of the constructed deep structure. The performance of the proposed approach is evaluated via several experiments on high-resolution SAR image patch sets and two large-scale scenes captured by the ALOS PALSAR-2 and COSMO-SkyMed satellites, respectively.

  10. THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING

    Directory of Open Access Journals (Sweden)

    A. H. Ahrari

    2017-09-01

    Full Text Available The multimodal remote sensing approach is based on merging data from different portions of the electromagnetic spectrum, which improves the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information: visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information different from that of the visible range. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet algorithm (Haar) with different decomposition filters (mean, linear, ma, min and rand) was applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was carried out with quantitative and qualitative approaches. Quantitative parameters such as entropy, standard deviation, cross correlation, Q factor and mutual information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all relevant statistical factors, correlation gave the most meaningful result and the closest similarity to the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance and there is no difference between their qualitative and quantitative results.
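
    A minimal sketch of a Haar-wavelet fusion of co-registered panchromatic and thermal bands with PyWavelets is shown below. The fusion rule (averaging approximation coefficients, keeping the larger-magnitude detail coefficients) is an assumption standing in for the 'mean' filter compared in the paper, and the bands are random placeholders.

```python
import numpy as np
import pywt

def wavelet_fuse(pan, thermal, wavelet="haar"):
    """Fuse two co-registered, same-size bands in the Haar wavelet domain.

    Approximation coefficients are averaged; detail coefficients take the
    larger magnitude of the two inputs.
    """
    cA1, (cH1, cV1, cD1) = pywt.dwt2(pan.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(thermal.astype(float), wavelet)
    cA = (cA1 + cA2) / 2.0
    fuse = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    return pywt.idwt2((cA, (fuse(cH1, cH2), fuse(cV1, cV2), fuse(cD1, cD2))), wavelet)

# Toy bands standing in for the panchromatic and (resampled) thermal data
pan = np.random.rand(256, 256)
thermal = np.random.rand(256, 256)
print(wavelet_fuse(pan, thermal).shape)
```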

  11. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno J; Caicedo J; Gonzalez F

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
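
    The kernel-combination idea can be sketched by summing per-feature histogram kernels into one kernel matrix and training an SVM on it with a precomputed kernel. The histogram-intersection kernel, the feature dimensions and the labels below are assumptions, and the latent semantic analysis step is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(A, B):
    """Histogram intersection kernel between two sets of histograms."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

rng = np.random.default_rng(4)
n = 120
color_hist = rng.dirichlet(np.ones(32), n)      # placeholder colour histograms
texture_hist = rng.dirichlet(np.ones(16), n)    # placeholder texture histograms
edge_hist = rng.dirichlet(np.ones(8), n)        # placeholder edge histograms
y = rng.integers(0, 18, n)                      # 18 classes, as in the dataset described

# Combine the per-feature kernels into a single representation space
K = (intersection_kernel(color_hist, color_hist)
     + intersection_kernel(texture_hist, texture_hist)
     + intersection_kernel(edge_hist, edge_hist))

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print(clf.predict(K[:5]))   # kernel rows between test and training samples
```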

  12. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that, the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  13. The EO-1 hyperion and advanced land imager sensors for use in tundra classification studies within the Upper Kuparuk River Basin, Alaska

    Science.gov (United States)

    Hall-Brown, Mary

    The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price. High resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite launched in 2002 carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise in the resolution versus cost trade-off. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), the Landsat Thematic Mapper (TM) and the Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies. This research found that while the Hyperion sensor produced classification accuracies that were
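
    The accuracy assessment described here can be reproduced with a few lines: given an error (confusion) matrix, the overall, producer's and user's accuracies follow directly. The matrix values below are made up, and the row/column convention is stated in the comment.

```python
import numpy as np

def accuracy_report(confusion):
    """Overall, producer's (per-column) and user's (per-row) accuracies.

    Convention assumed here: rows = classified map, columns = reference data.
    """
    confusion = np.asarray(confusion, dtype=float)
    diag = np.diag(confusion)
    overall = diag.sum() / confusion.sum()
    producers = diag / confusion.sum(axis=0)   # omission side
    users = diag / confusion.sum(axis=1)       # commission side
    return overall, producers, users

# Hypothetical 3-class error matrix from a stratified random sample
cm = [[50,  4,  1],
      [ 6, 40,  5],
      [ 2,  3, 45]]
overall, producers, users = accuracy_report(cm)
print(f"overall = {overall:.2%}")
print("producer's:", np.round(producers, 2), "user's:", np.round(users, 2))
```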

  14. Vegetation mapping from high-resolution satellite images in the heterogeneous arid environments of Socotra Island (Yemen)

    Science.gov (United States)

    Malatesta, Luca; Attorre, Fabio; Altobelli, Alfredo; Adeeb, Ahmed; De Sanctis, Michele; Taleb, Nadim M.; Scholte, Paul T.; Vitale, Marcello

    2013-01-01

    Socotra Island (Yemen), a global biodiversity hotspot, is characterized by high geomorphological and biological diversity. In this study, we present a high-resolution vegetation map of the island based on combining vegetation analysis and classification with remote sensing. Two different image classification approaches were tested to assess the most accurate one in mapping the vegetation mosaic of Socotra. Spectral signatures of the vegetation classes were obtained through a Gaussian mixture distribution model, and a sequential maximum a posteriori (SMAP) classification was applied to account for the heterogeneity and the complex spatial pattern of the arid vegetation. This approach was compared to the traditional maximum likelihood (ML) classification. Satellite data were represented by a RapidEye image with 5 m pixel resolution and five spectral bands. Classified vegetation relevés were used to obtain the training and evaluation sets for the main plant communities. Postclassification sorting was performed to adjust the classification through various rule-based operations. Twenty-eight classes were mapped, and SMAP, with an accuracy of 87%, proved to be more effective than ML (accuracy: 66%). The resulting map will represent an important instrument for the elaboration of conservation strategies and the sustainable use of natural resources in the island.

  15. Analysis on the Utility of Satellite Imagery for Detection of Agricultural Facility

    Science.gov (United States)

    Kang, J.-M.; Baek, S.-H.; Jung, K.-Y.

    2012-07-01

    Agricultural facilities are increasing in Korea owing to technological development, the diversification of agriculture, and the growing share of garden crops and crops cultivated in facilities, so the number of vinyl greenhouses is trending upward. It is therefore as important to grasp the distribution of vinyl greenhouses as that of rice fields, dry fields and orchards, but it is difficult to collect such information over wide areas economically and accurately. Remote sensing using satellite imagery can acquire data over wide areas at the same time and can quickly and cost-effectively collect, monitor and analyze information on objects on the ground. In this study, in order to analyze the utility of satellite imagery for the detection of agricultural facilities, image classification of vinyl greenhouses was performed using Formosat-2 satellite imagery. A training set of sea, vegetation, building, bare ground and vinyl greenhouse classes was defined to monitor the agricultural facilities in the study area, and the training set for the vinyl greenhouses, the main monitoring target, was further divided into three types according to their spectral characteristics. Image classification with four supervised classification methods applied to the same training set was carried out to determine which classification method is most effective for monitoring agricultural facilities. In addition, to minimize the misclassification that appeared when using spectral information alone, texture information was added to raise the classification accuracy. The classification results were analyzed for accuracy against naked-eye detection. The results show that the Mahalanobis distance method was more efficient than the other methods and that the classification accuracy was higher when texture information was added. Hence the more effective
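
    A minimal sketch of Mahalanobis distance classification is given below: class statistics are estimated from training pixels and each pixel is assigned to the nearest class in Mahalanobis distance. The spectral-plus-texture feature vectors and class means are toy placeholders.

```python
import numpy as np

def mahalanobis_classify(pixels, train_sets):
    """Assign each pixel to the class with the smallest Mahalanobis distance.

    `pixels` is (N, bands); `train_sets` is a list of (n_i, bands) training arrays,
    one per class (e.g. sea, vegetation, building, bare ground, vinyl greenhouse).
    """
    dists = []
    for samples in train_sets:
        mu = samples.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
        d = pixels - mu
        dists.append(np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return np.argmin(np.stack(dists, axis=1), axis=1)

# Toy data: 4 spectral bands plus one texture band per pixel
rng = np.random.default_rng(5)
train = [rng.normal(loc=m, scale=1.0, size=(200, 5)) for m in (0.0, 2.0, 4.0)]
pixels = rng.normal(loc=2.0, scale=2.0, size=(1000, 5))
labels = mahalanobis_classify(pixels, train)
print(np.bincount(labels))
```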

  16. ANALYSIS ON THE UTILITY OF SATELLITE IMAGERY FOR DETECTION OF AGRICULTURAL FACILITY

    Directory of Open Access Journals (Sweden)

    J.-M. Kang

    2012-07-01

    Full Text Available Agricultural facilities are increasing in Korea owing to technological development, the diversification of agriculture, and the growing share of garden crops and crops cultivated in facilities, so the number of vinyl greenhouses is trending upward. It is therefore as important to grasp the distribution of vinyl greenhouses as that of rice fields, dry fields and orchards, but it is difficult to collect such information over wide areas economically and accurately. Remote sensing using satellite imagery can acquire data over wide areas at the same time and can quickly and cost-effectively collect, monitor and analyze information on objects on the ground. In this study, in order to analyze the utility of satellite imagery for the detection of agricultural facilities, image classification of vinyl greenhouses was performed using Formosat-2 satellite imagery. A training set of sea, vegetation, building, bare ground and vinyl greenhouse classes was defined to monitor the agricultural facilities in the study area, and the training set for the vinyl greenhouses, the main monitoring target, was further divided into three types according to their spectral characteristics. Image classification with four supervised classification methods applied to the same training set was carried out to determine which classification method is most effective for monitoring agricultural facilities. In addition, to minimize the misclassification that appeared when using spectral information alone, texture information was added to raise the classification accuracy. The classification results were analyzed for accuracy against naked-eye detection. The results show that the Mahalanobis distance method was more efficient than the other methods and that the classification accuracy was higher when texture information was added. Hence the more

  17. Spectrally Consistent Satellite Image Fusion with Improved Image Priors

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.

    2006-01-01

    Here an improvement to our previous framework for satellite image fusion is presented: a framework based purely on the sensor physics and on prior assumptions about the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed......, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing....

  18. Classification of line features from remote sensing data

    OpenAIRE

    Kolankiewiczová, Soňa

    2009-01-01

    This work deals with object-based classification of high resolution data. The aim of the thesis is to develop an acceptable classification process for linear features (roads and railways) from high-resolution satellite images. The first part presents different approaches to linear feature classification and compares the theoretical differences between object-oriented and pixel-based classification. The linear feature classification was carried out in the second part. The high-resolution...

  19. Classification of remotely sensed images

    CSIR Research Space (South Africa)

    Dudeni, N

    2008-10-01

    Full Text Available For this research, the researchers examine various existing image classification algorithms with the aim of demonstrating how these algorithms can be applied to remote sensing images. These algorithms are broadly divided into supervised...

  20. An Attempt to automate the lithological classification of rocks using geological, gamma-spectrometric and satellite image datasets

    International Nuclear Information System (INIS)

    Fouad, M. K.; Mielik, M. L.; Gharieb, A. N.

    2004-01-01

    The present study aims essentially at proving that the application of integrated airborne gamma-spectrometric and satellite image data is capable of refining the mapped surface geology and of identifying anomalous zones of radioelement content that could provide favorable exploration targets for radioactive mineralizations. The application of the appropriate statistical technique to correlate satellite image data with gamma-spectrometric data is of great significance in this respect. Experience shows that Landsat TM data in 7 spectral bands are used successfully in such studies rather than MSS. Multivariate statistical analysis techniques are applied to airborne spectrometric and different spectral Landsat TM data. Reduction of the data from n-dimensionality, both qualitatively, as a color composite image, and quantitatively, as principal component analysis, is performed using some statistical control parameters. This technique shows distinct efficiency in defining areas where different lithofacies occur. An area located in the north of the Eastern Desert of Egypt, north of Hurgada town, was chosen to test the proposed technique of integrated interpretation of data of different physical nature. The reduced data are represented and interpreted both qualitatively and quantitatively. The advantages and limitations of applying such a technique to the different airborne spectrometric and Landsat TM data are identified. (authors)

  1. Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    Science.gov (United States)

    Smith, Aaron; Evans, Michael; Downey, Joseph

    2017-01-01

    National Aeronautics and Space Administration (NASA)'s future communication architecture is evaluating cognitive technologies and increased system intelligence. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. Proposed is an automatic modulation classifier which utilizes higher order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK), and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper will provide simulation results of a candidate modulation classifier, and performance will be evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
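
    The cumulant features at the heart of such a classifier can be sketched as below: normalised fourth-order cumulants (C40, C42) computed from complex baseband symbols separate, for example, QPSK from 16-QAM. The symbol streams and noise level are toy placeholders, and the neural-network stage and SNR estimation are omitted.

```python
import numpy as np

def fourth_order_cumulants(x):
    """Normalised C40 and C42 of zero-mean complex baseband symbols."""
    x = x - x.mean()
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return np.abs(c40) / m21 ** 2, c42.real / m21 ** 2

rng = np.random.default_rng(6)

# Toy QPSK and 16-QAM symbol streams with additive noise
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 5000) / np.sqrt(2)
levels = np.array([-3, -1, 1, 3])
qam16 = (rng.choice(levels, 5000) + 1j * rng.choice(levels, 5000)) / np.sqrt(10)
noise = 0.05 * (rng.normal(size=5000) + 1j * rng.normal(size=5000))

print("QPSK  :", fourth_order_cumulants(qpsk + noise))
print("16-QAM:", fourth_order_cumulants(qam16 + noise))
```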

  2. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey.

    Science.gov (United States)

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of women's death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors' and physicians' time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN) method for breast image classification. Along with the CNN method we have also described the involvement of the conventional Neural Network (NN), Logic Based classifiers such as the Random Forest (RF) algorithm, Support Vector Machines (SVM), Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  3. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    Directory of Open Access Journals (Sweden)

    Abdullah-Al Nahid

    2017-01-01

    Full Text Available Breast cancer is one of the largest causes of women’s death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors’ and physicians’ time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN method for breast image classification. Along with the CNN method we have also described the involvement of the conventional Neural Network (NN, Logic Based classifiers such as the Random Forest (RF algorithm, Support Vector Machines (SVM, Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  4. Northern Everglades, Florida, satellite image map

    Science.gov (United States)

    Thomas, Jean-Claude; Jones, John W.

    2002-01-01

    These satellite image maps are one product of the USGS Land Characteristics from Remote Sensing project, funded through the USGS Place-Based Studies Program with support from the Everglades National Park. The objective of this project is to develop and apply innovative remote sensing and geographic information system techniques to map the distribution of vegetation, vegetation characteristics, and related hydrologic variables through space and over time. The mapping and description of vegetation characteristics and their variations are necessary to accurately simulate surface hydrology and other surface processes in South Florida and to monitor land surface changes. As part of this research, data from many airborne and satellite imaging systems have been georeferenced and processed to facilitate data fusion and analysis. These image maps were created using image fusion techniques developed as part of this project.

  5. Cloud Classification in Wide-Swath Passive Sensor Images Aided by Narrow-Swath Active Sensor Data

    Directory of Open Access Journals (Sweden)

    Hongxia Wang

    2018-05-01

    Full Text Available It is a challenge to distinguish between different cloud types because of the complexity and diversity of cloud coverage, which is a significant clutter source that impacts target detection and identification in the images of space-based infrared sensors. In this paper, a novel strategy for cloud classification in wide-swath passive sensor images is developed, which is aided by narrow-swath active sensor data. The strategy consists of three steps: orbit registration, most-matching donor pixel selection, and cloud type assignment for each recipient pixel. A new criterion for orbit registration is proposed so as to improve the matching accuracy. The most-matching donor pixel is selected via the Euclidean distance and the square sum of the radiance relative differences between the recipient and the potential donor pixels. Each recipient pixel is then assigned the cloud type that corresponds to its most-matching donor. The cloud classification of Moderate Resolution Imaging Spectroradiometer (MODIS) images is performed with the aid of data from the Cloud Profiling Radar (CPR). The results are compared with the CloudSat product 2B-CLDCLASS, as well as those obtained using the method of the International Satellite Cloud Climatology Project (ISCCP), which demonstrates the superior classification performance of the proposed strategy.

  6. Improving Spectral Image Classification through Band-Ratio Optimization and Pixel Clustering

    Science.gov (United States)

    O'Neill, M.; Burt, C.; McKenna, I.; Kimblin, C.

    2017-12-01

    The Underground Nuclear Explosion Signatures Experiment (UNESE) seeks to characterize non-prompt observables from underground nuclear explosions (UNE). As part of this effort, we evaluated the ability of DigitalGlobe's WorldView-3 (WV3) to detect and map UNE signatures. WV3 is the current state-of-the-art commercial multispectral imaging satellite; however, it has relatively limited spectral and spatial resolutions. These limitations impede image classifiers from detecting targets that are spatially small and lack distinct spectral features. In order to improve classification results, we developed custom algorithms to reduce false positive rates while increasing true positive rates via a band-ratio optimization and pixel clustering front-end. The clusters resulting from these algorithms were processed with standard spectral image classifiers such as the Mixture-Tuned Matched Filter (MTMF) and the Adaptive Coherence Estimator (ACE). WV3 and AVIRIS data of Cuprite, Nevada, were used as a validation data set. These data were processed with a standard classification approach using the MTMF and ACE algorithms, and also using the custom front-end prior to the standard approach. A comparison of the results shows that the custom front-end significantly increases the true positive rate and decreases the false positive rate. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-3283.
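
    The front-end idea can be sketched as computing a few band ratios and clustering pixels on them before any matched-filter classification. The specific ratio pairs, cluster count and toy data cube below are assumptions, not the algorithms developed for UNESE.

```python
import numpy as np
from sklearn.cluster import KMeans

def band_ratio_clusters(cube, ratio_pairs, n_clusters=20):
    """Cluster pixels on selected band ratios; return labels and cluster mean spectra.

    `cube` is (rows, cols, bands); `ratio_pairs` lists (numerator, denominator) bands.
    """
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(float)
    ratios = np.column_stack([flat[:, a] / (flat[:, b] + 1e-6) for a, b in ratio_pairs])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(ratios)
    means = np.array([flat[labels == k].mean(axis=0) for k in range(n_clusters)])
    return labels.reshape(rows, cols), means

# Toy 8-band cube standing in for multispectral VNIR data
cube = np.random.rand(60, 60, 8)
labels, cluster_spectra = band_ratio_clusters(cube, ratio_pairs=[(7, 4), (5, 2)])
print(labels.shape, cluster_spectra.shape)
```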

  7. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed detections, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e. Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a comparative experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
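
    The two evaluation criteria used here, overall precision and the Kappa coefficient, can be computed from a confusion matrix as in the sketch below; the matrix values are made up.

```python
import numpy as np

def overall_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[120,  5,  3],
      [  8, 95,  7],
      [  4,  6, 110]]
oa, kappa = overall_and_kappa(cm)
print(f"overall = {oa:.2%}, kappa = {kappa:.2f}")
```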

  8. ANALYSIS OF THE EFFECTS OF IMAGE QUALITY ON DIGITAL MAP GENERATION FROM SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    H. Kim

    2012-07-01

    Full Text Available High resolution satellite images have been widely used to produce and update digital maps since they became widely available. It is well known that the accuracy of a digital map produced from satellite images is determined largely by the accuracy of the geometric modelling. However, digital maps are made through a series of photogrammetric workflows, so their accuracy is also affected by the quality of the satellite images, such as image interpretability. For satellite images, parameters such as the Modulation Transfer Function (MTF), Signal to Noise Ratio (SNR) and Ground Sampling Distance (GSD) are used to represent image quality. Our previous research stressed that such quality parameters may not represent the quality of image products such as digital maps and that parameters for image interpretability such as the Ground Resolved Distance (GRD) and the National Imagery Interpretability Rating Scale (NIIRS) need to be considered. In this study, we analysed the effects of image quality on the accuracy of digital maps produced from satellite images. QuickBird, IKONOS and KOMPSAT-2 imagery were used for the analysis as they have similar GSDs. We measured the various image quality parameters mentioned above from these images. Then we produced digital maps from the images using a digital photogrammetric workstation. We analysed the accuracy of the digital maps in terms of their location accuracy and their level of detail, and compared the correlation between the various image quality parameters and the accuracy of the digital maps. The results of this study showed that GRD and NIIRS were more critical for map production than GSD, MTF or SNR.

  9. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles and the shadows of buildings or trees. In order to achieve a precise road extraction, the method implements three stages: classification of the images with the maximum likelihood algorithm to categorize them into the classes of interest; a modification process on the classified images using connected-component and morphological operators to extract the pixels of the desired objects while removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.
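
    The last two stages can be sketched with scikit-image: morphological cleanup of a binary road mask followed by RANSAC line fitting on the surviving pixels. The mask below is synthetic, and the maximum likelihood classification that would normally produce it is assumed to have already run.

```python
import numpy as np
from skimage.morphology import remove_small_objects, binary_closing, disk
from skimage.measure import LineModelND, ransac

# Hypothetical binary mask from a maximum-likelihood "road" class (True = road pixel)
rng = np.random.default_rng(7)
mask = np.zeros((200, 200), dtype=bool)
rr = np.arange(200)
mask[rr, (0.5 * rr + 30).astype(int)] = True                 # a diagonal road
mask |= rng.random((200, 200)) < 0.01                        # salt noise

# Morphological cleanup: close small gaps, drop isolated blobs
clean = remove_small_objects(binary_closing(mask, disk(2)), min_size=20, connectivity=2)

# RANSAC line fit on the surviving road pixels
coords = np.column_stack(np.nonzero(clean)).astype(float)
model, inliers = ransac(coords, LineModelND, min_samples=2,
                        residual_threshold=2.0, max_trials=500)
print("line origin:", model.params[0], "direction:", model.params[1])
```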

  10. A graph-based approach to detect spatiotemporal dynamics in satellite image time series

    Science.gov (United States)

    Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal

    2017-08-01

    Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with this kind of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviours. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series and generate a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be explored in depth at the evolution-graph scale or used to compare the graphs and supply a general picture at the study-site scale. We validated our framework on two study sites located in the south of France and involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.

  11. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  12. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  13. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  14. Cloud classification using whole-sky imager data

    Energy Technology Data Exchange (ETDEWEB)

    Buch, K.A. Jr.; Sun, C.H.; Thorne, L.R. [Sandia National Labs., Livermore, CA (United States)

    1996-04-01

    Clouds are one of the most important moderators of the earth radiation budget and one of the least understood. The effect that clouds have on the reflection and absorption of solar and terrestrial radiation is strongly influenced by their shape, size, and composition. Physically accurate parameterization of clouds is necessary for any general circulation model (GCM) to yield meaningful results. The work presented here is part of a larger project that is aimed at producing realistic three-dimensional (3D) volume renderings of cloud scenes based on measured data from real cloud scenes. These renderings will provide the important shape information for parameterizing GCMs. The specific goal of the current study is to develop an algorithm that automatically classifies (by cloud type) the clouds observed in the scene. This information will assist the volume rendering program in determining the shape of the cloud. Much work has been done on cloud classification using multispectral satellite images. Most of these references use some kind of texture measure to distinguish the different cloud types and some also use topological features (such as cloud/sky connectivity or total number of clouds). A wide variety of classification methods has been used, including neural networks, various types of clustering, and thresholding. The work presented here uses binary decision trees to distinguish the different cloud types based on cloud features vectors.
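
    The classification stage can be sketched as a small decision tree over per-scene texture and topological features, as below; the feature vectors and cloud-type labels are random placeholders rather than whole-sky imager data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)

# Hypothetical feature vectors per cloud scene: a few texture measures and
# simple topological features (cloud fraction, number of distinct clouds)
n = 300
X = np.column_stack([
    rng.uniform(0, 1, n),      # GLCM contrast (placeholder)
    rng.uniform(0, 1, n),      # GLCM entropy (placeholder)
    rng.uniform(0, 1, n),      # cloud fraction
    rng.integers(1, 30, n),    # number of clouds in scene
])
cloud_types = rng.integers(0, 4, n)   # e.g. cirrus, cumulus, stratus, clear

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, cloud_types)
print(tree.predict(X[:5]), "depth:", tree.get_depth())
```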

  15. Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study

    Directory of Open Access Journals (Sweden)

    André Mora

    2017-11-01

    Full Text Available This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. Further, an unsupervised approach based on k-means clustering has also been taken into consideration for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study including three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; neural networks suffer from overfitting; while k-means clustering constitutes a promising technique to identify land cover types from unknown areas.

  16. Medical image transmission via communication satellite: evaluation of ultrasonographic images.

    Science.gov (United States)

    Suzuki, H; Horikoshi, H; Shiba, H; Shimamoto, S

    1996-01-01

    As compared with terrestrial circuits, communication satellites possess superior characteristics such as wide area coverage, broadcasting functions, high capacity, and resistance to disasters. Utilizing the narrow band channel (64 kbps) of the stationary communication satellite JCSAT1 located at an altitude of 36,000 km above the equator, we investigated satellite-relayed dynamic medical images transmitted by video signals, using hepatic ultrasonography as a model. We conclude that the "variable playing speed transmission scheme" proposed by us is effective for the transmission of dynamic images in the narrow band channel. This promises to permit diverse utilization and applications for purposes such as the transmission of other types of ultrasonic images as well as remotely directed medical diagnosis and treatment.

  17. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. The problem becomes more intricate given that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyse the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified into document and non-document (pure region) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. Experimental results show the efficiency of the proposed algorithm for semantic document image classification. The proposed algorithm achieves an accuracy rate of 98.8% for the valuable versus invaluable document image classification problem.

  18. Velocity estimation of an airplane through a single satellite image

    Institute of Scientific and Technical Information of China (English)

    Zhuxin Zhao; Gongjian Wen; Bingwei Hui; Deren Li

    2012-01-01

    The motion information of a moving target can be recorded in a single image by a push-broom satellite. A push-broom satellite image is composed of many image lines sensed at different time instants. A method to estimate the velocity of a flying airplane from a single image based on the imagery model of the linear push-broom sensor is proposed. Some key points on the high-resolution image of the plane are chosen to determine the velocity (speed and direction). The performance of the method is tested and verified by experiments using a WorldView-1 image.
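
    As a rough illustration of the push-broom principle described in this record (not the authors' actual imaging model), the along-track speed of an airplane can be approximated by comparing its apparent length in the image with the known length of the aircraft type; the function name, parameters and numbers below are hypothetical.

    ```python
    # Very simplified sketch: image lines are acquired sequentially, so an airplane
    # moving along the scan direction appears stretched or compressed. Comparing the
    # apparent length with the known aircraft length gives a first-order along-track
    # speed estimate. Values are illustrative only.
    def along_track_speed(apparent_length_px, true_length_m, gsd_m, line_time_s):
        """apparent_length_px: measured airplane length along the scan direction (pixels).
        true_length_m: known length of the aircraft type (m).
        gsd_m: ground sample distance (m/pixel); line_time_s: acquisition time per line (s)."""
        apparent_length_m = apparent_length_px * gsd_m
        scan_duration_s = apparent_length_px * line_time_s   # time the scan needs to sweep the plane
        return (apparent_length_m - true_length_m) / scan_duration_s   # signed speed in m/s

    # Example with made-up numbers (0.5 m GSD, 0.1 ms per line, 38 m aircraft):
    print(along_track_speed(apparent_length_px=80, true_length_m=38.0,
                            gsd_m=0.5, line_time_s=1e-4), "m/s")
    ```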

  19. Using Fuzzy SOM Strategy for Satellite Image Retrieval and Information Mining

    Directory of Open Access Journals (Sweden)

    Yo-Ping Huang

    2008-02-01

    Full Text Available This paper proposes an efficient satellite image retrieval and knowledge discovery model. The strategy comprises two major parts. First, a computational algorithm is used for off-line satellite image feature extraction, image data representation and image retrieval. Low level features are automatically extracted from the segmented regions of satellite images. A self-organizing feature map is used to construct a two-layer satellite image concept hierarchy. The events are stored in one layer and the corresponding feature vectors are categorized in the other layer. Second, a user friendly interface is provided that retrieves images of interest and mines useful information based on the events in the concept hierarchy. The proposed system is evaluated with prominent features such as typhoons or high-pressure masses.

  20. CLOUD DETECTION OF OPTICAL SATELLITE IMAGES USING SUPPORT VECTOR MACHINE

    Directory of Open Access Journals (Sweden)

    K.-Y. Lee

    2016-06-01

    Full Text Available Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmosphere effects, calculations of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. A statistical model is adopted to detect clouds instead of a subjective thresholding-based method, which is the main idea of this study. The features used in a classifier are the key to a successful classification. Accordingly, the Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. In the same way, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction algorithm is based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experiment results demonstrate

  1. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    Science.gov (United States)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmosphere effects, calculations of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. A statistical model is adopted to detect clouds instead of a subjective thresholding-based method, which is the main idea of this study. The features used in a classifier are the key to a successful classification. Accordingly, the Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. In the same way, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction algorithm is based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experiment results demonstrate the detection
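
    A minimal sketch of the threshold-free idea, assuming per-pixel feature vectors (ACCA/Fmask-style band statistics, co-occurrence texture and temporal variance) have already been extracted; the feature count, labels and data below are placeholders rather than the authors' actual feature set.

    ```python
    # Threshold-free cloud classifier sketch: an RBF-kernel SVM learns the decision
    # boundary from labelled pixels instead of hand-tuned thresholds. Placeholder
    # data stands in for Landsat-7 ETM+ features; labels are 0: cloud, 1: non-cloud, 2: other.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3000, 6))          # 6 placeholder per-pixel features
    y = rng.integers(0, 3, size=3000)       # placeholder class labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```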

  2. South Florida Everglades: satellite image map

    Science.gov (United States)

    Jones, John W.; Thomas, Jean-Claude; Desmond, G.B.

    2001-01-01

    These satellite image maps are one product of the USGS Land Characteristics from Remote Sensing project, funded through the USGS Place-Based Studies Program (http://access.usgs.gov/) with support from the Everglades National Park (http://www.nps.gov/ever/). The objective of this project is to develop and apply innovative remote sensing and geographic information system techniques to map the distribution of vegetation, vegetation characteristics, and related hydrologic variables through space and over time. The mapping and description of vegetation characteristics and their variations are necessary to accurately simulate surface hydrology and other surface processes in South Florida and to monitor land surface changes. As part of this research, data from many airborne and satellite imaging systems have been georeferenced and processed to facilitate data fusion and analysis. These image maps were created using image fusion techniques developed as part of this project.

  3. The best printing methods to print satellite images

    Directory of Open Access Journals (Sweden)

    G.A. Yousif

    2011-12-01

    In this paper different printing systems were used to print an image from the SPOT-4 satellite, covering part of the Sharm El-Sheikh area, Sinai, Egypt, on the same type of paper as far as possible, especially for the photographic prints. This step was followed by measuring the experimental data and analyzing the colors to determine the best printing system for satellite image data. The laser system is the best printing system, as it produces a wider range of colors and the highest ink densities and captures more color detail. It is followed by the offset system, which recorded the best dot gain. Moreover, the study shows that the advantages of each method can be exploited according to the colors of the satellite image and the quantity to be produced.

  4. Shadow detection and removal in RGB VHR images for land use unsupervised classification

    Science.gov (United States)

    Movia, A.; Beinat, A.; Crosilla, F.

    2016-09-01

    Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows actually limit benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to the CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so called "anisotropic Procrustes" and the "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps, have been compared to each other and with a supervised clustering procedure.

  5. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Particularly, registering satellite images is challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
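
    The block-entropy selection can be pictured with a short sketch; the block size, the entropy threshold and the synthetic image below are illustrative assumptions, not values from the paper.

    ```python
    # Split an image into blocks, compute the Shannon entropy of each block's
    # intensity histogram, and keep only high-entropy (information-rich) blocks
    # as candidates for feature matching during registration.
    import numpy as np

    def block_entropy(block, bins=256):
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def select_blocks(image, block=64, thresh=5.0):
        """Return top-left corners of blocks whose entropy exceeds `thresh`."""
        h, w = image.shape
        keep = []
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                if block_entropy(image[r:r + block, c:c + block]) > thresh:
                    keep.append((r, c))
        return keep

    img = (np.random.rand(512, 512) * 255).astype(np.uint8)   # stand-in satellite tile
    print(len(select_blocks(img)), "blocks selected for registration")
    ```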

  6. The best printing methods to print satellite images

    OpenAIRE

    G.A. Yousif; R.Sh. Mohamed

    2011-01-01

    Printing systems operate, in general, with a color gamut that is limited compared with the color range of satellite images. A satellite image is built from very small cells named pixels, each representing a picture element and the unit of color when the image is displayed on the screen; in printing this unit becomes smaller in size and is called a screen point. This unit possesses a different size and shape from one printing method to another, depending on the output resolution, tools and material...

  7. A Novel Approach to Developing a Supervised Spatial Decision Support System for Image Classification: A Study of Paddy Rice Investigation

    Directory of Open Access Journals (Sweden)

    Shih-Hsun Chang

    2014-01-01

    Full Text Available Paddy rice area estimation via remote sensing techniques has been well established in recent years. Texture information and vegetation indicators are widely used to improve the classification accuracy of satellite images. Accordingly, this study employs texture information and vegetation indicators as ancillary information for classifying paddy rice through remote sensing images. In the first stage, the images are obtained using a remote sensing technique and ancillary information is employed to increase the accuracy of classification. In the second stage, we construct an efficient supervised classifier, which is used to evaluate the ancillary information. In the third stage, linear discriminant analysis (LDA) is introduced. LDA is a well-known method for classifying images into various categories. Also, the particle swarm optimization (PSO) algorithm is employed to optimize the LDA classification outcomes and increase classification performance. In the fourth stage, we discuss the strategy of selecting different window sizes and analyze particle numbers and iteration numbers with the corresponding accuracy. Accordingly, a rational strategy for the combination of ancillary information is introduced. Afterwards, the PSO algorithm improves the accuracy rate from 82.26% to 89.31%. The improved accuracy results in a much lower salt-and-pepper effect in the thematic map.
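
    The LDA stage can be sketched as follows; the spectral bands, texture measures and vegetation indicator are placeholder arrays, and the fixed weight vector only stands in for the PSO-tuned combination of ancillary information described in the record.

    ```python
    # Spectral bands, texture measures and a vegetation indicator are stacked into
    # one feature matrix and classified with linear discriminant analysis (LDA).
    # The PSO step of the paper is only imitated here by a fixed weight vector.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 2000
    spectral = rng.normal(size=(n, 4))      # placeholder spectral bands
    texture = rng.normal(size=(n, 3))       # placeholder texture measures
    ndvi = rng.normal(size=(n, 1))          # placeholder vegetation indicator
    y = rng.integers(0, 2, size=n)          # 1 = paddy rice, 0 = other (placeholder)

    weights = np.array([1, 1, 1, 1, 0.5, 0.5, 0.5, 2.0])  # stand-in for PSO-tuned weights
    X = np.hstack([spectral, texture, ndvi]) * weights

    lda = LinearDiscriminantAnalysis()
    print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    ```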

  8. Visualization and classification in biomedical terahertz pulsed imaging

    International Nuclear Information System (INIS)

    Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G

    2002-01-01

    'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated

  9. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Full Text Available Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with the validation land cover maps, with a Kappa value of 0.79 and statistical area biases in proportion of less than 6%. This study proposed a simple semi-automatic approach for land cover classification using prior maps with satisfactory accuracy, which integrates the accuracy of visual interpretation and the performance of automatic classification methods. The method can be conveniently used for land cover mapping in areas lacking ground reference information or for identifying rapid variations of land cover (such as rapid urbanization).
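
    The reported accuracy figures (a Kappa value of 0.79 and small area biases) come from comparing the classified map against reference labels; a minimal sketch of such an assessment, on synthetic labels, is shown below.

    ```python
    # Accuracy assessment sketch: compare classified pixels against reference land
    # cover labels, then compute Cohen's kappa and per-class area bias. Labels are synthetic.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    rng = np.random.default_rng(2)
    reference = rng.integers(0, 5, size=10000)              # 5 land cover classes (synthetic)
    predicted = np.where(rng.random(10000) < 0.85,          # synthetic map agreeing ~85% with reference
                         reference, rng.integers(0, 5, size=10000))

    print("kappa:", round(cohen_kappa_score(reference, predicted), 3))
    cm = confusion_matrix(reference, predicted)
    area_bias = (cm.sum(axis=0) - cm.sum(axis=1)) / cm.sum()   # predicted minus reference area proportion
    print("per-class area bias:", np.round(area_bias, 3))
    ```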

  10. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  11. Decision Fusion Based on Hyperspectral and Multispectral Satellite Imagery for Accurate Forest Species Mapping

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2014-07-01

    Full Text Available This study investigates the effectiveness of combining multispectral very high resolution (VHR) and hyperspectral satellite imagery through a decision fusion approach for accurate forest species mapping. Initially, two fuzzy classifications are conducted, one for each satellite image, using a fuzzy-output support vector machine (SVM). The classification result from the hyperspectral image is then resampled to the multispectral image's spatial resolution and the two sources are combined using a simple yet efficient fusion operator. Thus, the complementary information provided by the two sources is effectively exploited, without having to resort to computationally demanding and time-consuming typical data fusion or vector stacking approaches. The effectiveness of the proposed methodology is validated in a complex Mediterranean forest landscape comprising spectrally similar and spatially intermingled species. The decision fusion scheme resulted in an accuracy increase of 8% compared to the classification using only the multispectral imagery, whereas the increase was even higher compared to the classification using only the hyperspectral satellite image. Perhaps most importantly, its accuracy was significantly higher than that of alternative multisource fusion approaches, although the latter are characterized by much higher computation, storage, and time requirements.

  12. A Spectral-Texture Kernel-Based Classification Method for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-11-01

    Full Text Available Classification of hyperspectral images always suffers from high dimensionality and very limited labeled samples. Recently, spectral-spatial classification has attracted considerable attention and can achieve higher classification accuracy and smoother classification maps. In this paper, a novel spectral-spatial classification method for hyperspectral images using kernel methods is investigated. For a given hyperspectral image, the principal component analysis (PCA) transform is first performed. Then, the first principal component of the input image is segmented into non-overlapping homogeneous regions by using the entropy rate superpixel (ERS) algorithm. Next, the local spectral histogram model is applied to each homogeneous region to obtain the corresponding texture features. Because this step is performed within each homogeneous region, instead of within a fixed-size image window, the obtained local texture features are more accurate, which effectively benefits the improvement of classification accuracy. In the following step, a contextual spectral-texture kernel is constructed by combining the spectral information in the image and the extracted texture information using the linearity property of kernel methods. Finally, the classification map is obtained by the support vector machine (SVM) classifier using the proposed spectral-texture kernel. Experiments on two benchmark airborne hyperspectral datasets demonstrate that our method can effectively improve classification accuracies, even though only very limited training samples are available. Specifically, our method achieves from 8.26% to 15.1% higher overall accuracy than the traditional SVM classifier. The performance of our method was further compared to several state-of-the-art classification methods for hyperspectral images using objective quantitative measures and a visual qualitative evaluation.
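
    The kernel-combination step can be written as a weighted sum of a spectral kernel and a texture kernel fed to an SVM with a precomputed kernel; the superpixel segmentation and local spectral histogram extraction are abstracted into placeholder arrays here, and the weight mu is an assumed tuning parameter.

    ```python
    # Composite spectral-texture kernel sketch: two RBF kernels are combined linearly
    # (exploiting the linearity property of kernels) and passed to an SVM with
    # kernel="precomputed". Arrays are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(3)
    n_train, n_test = 300, 100
    spec_tr, spec_te = rng.normal(size=(n_train, 50)), rng.normal(size=(n_test, 50))
    tex_tr, tex_te = rng.normal(size=(n_train, 20)), rng.normal(size=(n_test, 20))
    y_tr = rng.integers(0, 4, size=n_train)

    mu = 0.6  # weight between spectral and texture information (a tunable assumption)
    K_train = mu * rbf_kernel(spec_tr) + (1 - mu) * rbf_kernel(tex_tr)
    K_test = mu * rbf_kernel(spec_te, spec_tr) + (1 - mu) * rbf_kernel(tex_te, tex_tr)

    svm = SVC(kernel="precomputed", C=100.0).fit(K_train, y_tr)
    print(svm.predict(K_test)[:10])
    ```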

  13. ASTER 2002-2003 Kansas Satellite Image Database (KSID)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Satellite Image Database (KSID):2002-2003 consists of image data gathered by three sensors. The first image data are terrain-corrected, precision...

  14. MODIS 2002-2003 Kansas Satellite Image Database (KSID)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Satellite Image Database (KSID):2002-2003 consists of image data gathered by three sensors. The first image data are terrain-corrected, precision...

  15. Multispectral Image classification using the theories of neural networks

    International Nuclear Information System (INIS)

    Ardisasmita, M.S.; Subki, M.I.R.

    1997-01-01

    Image classification is one of the important parts of digital image analysis. The objective of image classification is to identify and regroup the features occurring in an image into one or several classes in terms of the object. Basic to the understanding of multispectral classification is the concept of the spectral response of an object as a function of the electromagnetic radiation and the wavelength of the spectrum. New approaches to classification have been developed to improve the results of analysis; these state-of-the-art classifiers are based upon the theories of neural networks. Neural network classifiers are algorithms which mimic the computational abilities of the human brain. Artificial neurons are simple emulations of biological neurons; they take in information from sensors or other artificial neurons, perform very simple operations on this data, and pass the result on to other artificial neurons. The network is trained to recognize the spectral signature of each image pixel. Neural network image classification has been divided into supervised and unsupervised training procedures. In the supervised approach, examples of each cover type can be located and the computer can compute spectral signatures to categorize all pixels in a digital image into several land cover classes. In unsupervised classification, spectral signatures are generated by mathematical grouping, and analyst-specified training data are not required. Thus, in the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach the computer determines spectrally separable classes and then we define their information value.
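
    A minimal supervised neural-network pixel classifier in this spirit, using placeholder band vectors and class labels, might look as follows.

    ```python
    # Each pixel is a vector of band reflectances; a small multilayer perceptron
    # learns the spectral signature of each land cover class. Bands, classes and
    # data are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(5000, 7))            # 7 spectral bands per pixel (placeholder)
    y = rng.integers(0, 6, size=5000)         # 6 land cover classes (placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    print("test accuracy:", net.score(X_te, y_te))
    ```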

  16. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    OpenAIRE

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of women’s death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors’ and physicians’ time. Despite the various publications on breast image classification, very few review papers are available w...

  17. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  18. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  19. An Image Matching Algorithm Integrating Global SRTM and Image Segmentation for Multi-Source Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Xiao Ling

    2016-08-01

    Full Text Available This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM) data and image segmentation to achieve robust and numerous correspondences. The method first generates epipolar lines as a geometric constraint assisted by global SRTM data, after which seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed, whereby region segments are extracted by image segmentation and treated as a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC), which considers both geometric similarity and radiometric similarity, is introduced to find the optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3), Mapping Satellite-1, SPOT-5 and Google Earth demonstrate that the proposed method is able to produce reliable and accurate matching results.

  20. Electronic structure classifications using scanning tunneling microscopy conductance imaging

    International Nuclear Information System (INIS)

    Horn, K.M.; Swartzentruber, B.S.; Osbourn, G.C.; Bouchard, A.; Bartholomew, J.W.

    1998-01-01

    The electronic structure of atomic surfaces is imaged by applying multivariate image classification techniques to multibias conductance data measured using scanning tunneling microscopy. Image pixels are grouped into classes according to shared conductance characteristics. The image pixels, when color coded by class, produce an image that chemically distinguishes surface electronic features over the entire area of a multibias conductance image. Such "classed" images reveal surface features not always evident in a topograph. This article describes the experimental technique used to record multibias conductance images, how image pixels are grouped in a mathematical classification space, how a computed grouping algorithm can be employed to group pixels with similar conductance characteristics in any number of dimensions, and finally how the quality of the resulting classed images can be evaluated using a computed, combinatorial analysis of the full dimensional space in which the classification is performed. copyright 1998 American Institute of Physics

  1. MERGING AIRBORNE LIDAR DATA AND SATELLITE SAR DATA FOR BUILDING CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    T. Yamamoto

    2015-05-01

    Full Text Available Frequent map revision is required in GIS applications such as disaster prevention and urban planning. In general, airborne photogrammetry and LiDAR measurements are applied to geometrical data acquisition for automated map generation and revision. However, these approaches capture only geometrical attributes, and attribute data acquisition and classification still depend on manual editing work, including ground surveys. On the other hand, although geometrical data extraction is difficult, SAR data offer a possibility of automating attribute data acquisition and classification. SAR data represent microwave reflections on various surfaces of the ground and buildings. There is much research related to monitoring disasters, vegetation, and urban activities. Moreover, we now have the opportunity to acquire higher resolution data over urban areas with new sensors such as ALOS-2 PALSAR-2. Therefore, in this study, we focus on an integration of airborne LiDAR data and satellite SAR data for building extraction and classification.

  2. Artificial neural net system for interactive tissue classification with MR imaging and image segmentation

    International Nuclear Information System (INIS)

    Clarke, L.P.; Silbiger, M.; Naylor, C.; Brown, K.

    1990-01-01

    This paper reports on the development of interactive methods for MR tissue classification that permit mathematically rigorous methods for three-dimensional image segmentation and automatic organ/tumor contouring, as required for surgical and RTP planning. The authors investigate a number of image-intensity-based tissue-classification methods that make no implicit assumptions on the MR parameters and hence are not limited by the image data set. Similarly, we have trained artificial neural net (ANN) systems for both supervised and unsupervised tissue classification.

  3. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed

  4. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, a Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes land use classification available with the ease of commercial software but without an expensive license.
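
    A sketch of the kind of K-means unsupervised classification step such a workflow performs, assuming GDAL and scikit-learn are installed; the input file name "landsat_austin.tif" and the cluster count are placeholders, not part of the described software.

    ```python
    # Read a multispectral raster with GDAL, cluster its pixels with K-means, and
    # write the resulting class map back out with the same georeferencing.
    import numpy as np
    from osgeo import gdal
    from sklearn.cluster import KMeans

    ds = gdal.Open("landsat_austin.tif")                    # hypothetical input raster
    bands = [ds.GetRasterBand(i + 1).ReadAsArray() for i in range(ds.RasterCount)]
    stack = np.dstack(bands)                                # rows x cols x bands
    pixels = stack.reshape(-1, stack.shape[-1]).astype(np.float32)

    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)
    classified = labels.reshape(stack.shape[:2])            # land use class per pixel

    drv = gdal.GetDriverByName("GTiff")
    out = drv.Create("classified.tif", ds.RasterXSize, ds.RasterYSize, 1, gdal.GDT_Byte)
    out.SetGeoTransform(ds.GetGeoTransform())
    out.SetProjection(ds.GetProjection())
    out.GetRasterBand(1).WriteArray(classified.astype(np.uint8))
    out.FlushCache()
    ```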

  5. Smoothing of Fused Spectral Consistent Satellite Images with TV-based Edge Detection

    DEFF Research Database (Denmark)

    Sveinsson, Johannes; Aanæs, Henrik; Benediktsson, Jon Atli

    2007-01-01

    Several widely used methods have been proposed for fusing high resolution panchromatic data and lower resolution multi-channel data. However, many of these methods fail to maintain the spectral consistency of the fused high resolution image, which is of high importance to many of the applications based on satellite data. Additionally, most conventional methods are loosely connected to the image forming physics of the satellite image, giving these methods an ad hoc feel. Vesteinsson et al. [1] proposed a method of fusion of satellite images that is based on the properties of imaging physics in a statistically meaningful way and was called spectral consistent pansharpening (SCP). In this paper we improve this framework for satellite image fusion by introducing a better image prior, via data-dependent image smoothing. The dependency is obtained via a total variation edge detection method.

  6. Slope mass movements on SPOT satellite images: A case of the Železniki area (W Slovenia after flash floods in September 2007

    Directory of Open Access Journals (Sweden)

    Mateja Jemec

    2008-12-01

    Full Text Available The flash floods that struck Slovenia on September 18th 2007 claimed 6 lives; several thousand houses, over one thousand kilometres of roads and more than 50 bridges were damaged. The highest amount of rain fell in the western and north-western parts of Slovenia (the northern Primorska region and the southern Gorenjska region), from where heavy rain spread eastwards over central Slovenia and into the eastern part of the country. In this article we focus on the western and north-western part of Slovenia. The aim of the research was, in the first phase, to describe a methodology for determining landslide occurrences from satellite images acquired before and after the natural disaster in the Železniki region. The second phase was based on a comparison of the obtained results with existing models for the prediction of slope mass movements, and finally on determining how well individual landslide types can be identified on a satellite image. The results show that the areas obtained from supervised and unsupervised classification of the satellite images correspond most closely to the landslide susceptibility classes in which landslide occurrences are largest.

  7. BEE FORAGE MAPPING BASED ON MULTISPECTRAL IMAGES LANDSAT

    Directory of Open Access Journals (Sweden)

    A. Moskalenko

    2016-10-01

    Full Text Available Possibilities of bee forage identification and mapping based on multispectral images have been shown in the research. Spectral brightness of bee forage has been determined with the use of satellite images. The effectiveness of some methods of image classification for mapping of bee forage is shown. Keywords: bee forage, mapping, multispectral images, image classification.

  8. Satellite-generated radar images of the earth

    International Nuclear Information System (INIS)

    Schanda, E.

    1980-01-01

    The Synthetic Aperture Radar (SAR) on board SEASAT was the first non-military satellite-borne radar producing high-resolution images of the earth. Several examples of European scenes are discussed to demonstrate the properties of presently available optically processed images. (orig.)

  9. Retrieval and classification of food images.

    Science.gov (United States)

    Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano

    2016-10-01

    Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role that it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of the studies in the context of food image processing from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the aspect of the representation of the food images because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task since food presents high variability and an intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled considering 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, Fruit. We have performed tests employing different representations of the state-of-the-art to assess the related performances on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons which is able to encode spatial information between Textons, outperforming other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Application of SVM on satellite images to detect hotspots in Jharia coal field region of India

    Energy Technology Data Exchange (ETDEWEB)

    Gautam, R.S.; Singh, D.; Mittal, A.; Sajin, P. [Indian Institute for Technology, Roorkee (India)

    2008-07-01

    The present paper deals with the application of Support Vector Machine (SVM) and image analysis techniques to NOAA/AVHRR satellite images to detect hotspots in the Jharia coal field region of India. One of the major advantages of using these satellite data is that they are free and have very good temporal resolution, while one drawback is their low spatial resolution (approximately 1.1 km at nadir). Therefore, it is important to apply efficient optimization techniques along with image analysis techniques to mitigate these drawbacks and use satellite images for efficient hotspot detection and monitoring. For this purpose, SVM and multi-threshold techniques are explored for hotspot detection. The multi-threshold algorithm is developed to separate cloud coverage from land coverage; this algorithm also highlights the hotspots or fire spots in the suspected regions. SVM has the advantage over the multi-thresholding technique that it can learn patterns from examples and is therefore used to optimize the performance by removing the false points that are highlighted by the threshold technique. Both approaches can be used separately or in combination depending on the size of the image. An RBF (Radial Basis Function) kernel is used in training with three sets of inputs: the brightness temperature of channel 3, the Normalized Difference Vegetation Index (NDVI) and the Global Environment Monitoring Index (GEMI). This produces a classified image that highlights hotspot and non-hotspot pixels. The performance of the SVM is also compared with that obtained from neural networks, and the SVM appears to detect hotspots more accurately (greater than 91% classification accuracy) with a lower false alarm rate. The results obtained are found to be in good agreement with ground-based observations of the hotspots.
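
    The SVM stage can be sketched as follows, with synthetic values standing in for the channel-3 brightness temperature, NDVI and GEMI features; the toy labelling rule is only a placeholder for the actual ground truth.

    ```python
    # RBF-kernel SVM separating hotspot from non-hotspot pixels using three
    # per-pixel features. All values are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    bt_ch3 = rng.normal(300, 10, 2000)      # brightness temperature, channel 3 (K)
    ndvi = rng.uniform(-0.1, 0.8, 2000)     # Normalized Difference Vegetation Index
    gemi = rng.uniform(0.0, 0.9, 2000)      # Global Environment Monitoring Index
    X = np.column_stack([bt_ch3, ndvi, gemi])
    y = (bt_ch3 > 310).astype(int)          # toy rule standing in for ground truth labels

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))
    ```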

  11. Study on remote sensing method for drawing up and utilizing ecological and natural map II; concentrated on drawing up a plant ecological classification map

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong Woo; Chung, Hwui Chul [Korea Environment Institute, Seoul (Korea)

    1999-12-01

    Following the trend of environmental conservation, Korea has revised its law on natural environment conservation. This law calls for drawing up an ecological nature map for the efficient preservation and utilization of the country. Drawing up an ecological nature map requires several evaluating factors. Among them, plant ecological classification is a very important evaluating factor, since it can evaluate the habitat areas of natural organisms. This study investigated a method for drawing up a plant ecological classification map using satellite image data. Although the limits of satellite image data do not quite match the quality required of a plant ecological classification map, combining satellite image data with infrared color aerial photographs can be expected to yield a plant ecological classification map of excellent quality. 85 refs., 86 figs., 45 tabs.

  12. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    Science.gov (United States)

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  13. Evaluation of the Chinese Fine Spatial Resolution Hyperspectral Satellite TianGong-1 in Urban Land-Cover Classification

    Directory of Open Access Journals (Sweden)

    Xueke Li

    2016-05-01

    Full Text Available The successful launch of the Chinese high spatial resolution hyperspectral satellite TianGong-1 (TG-1) opens up new possibilities for applications of remotely-sensed satellite imagery. One of the main goals of the TG-1 mission is to provide observations of surface attributes at local and landscape spatial scales to map urban land cover accurately using the hyperspectral technique. This study attempted to evaluate the TG-1 datasets for urban feature analysis, using existing data over Beijing, China, by comparing TG-1 (with a spatial resolution of 10 m) to EO-1 Hyperion (with a spatial resolution of 30 m). The spectral features of TG-1 were first analyzed to find the optimal hyperspectral wavebands useful for the discrimination of urban areas. Based on this, the pixel-based maximum likelihood classifier (PMLC), pixel-based support vector machine (PSVM), hybrid maximum likelihood classifier (HMLC), and hybrid support vector machine (HSVM) were implemented and compared for mapping urban land cover types. The hybrid classifier approach, which integrates the pixel-based classifier and the object-based segmentation approach, was demonstrated to be an effective alternative to the conventional pixel-based classifiers for processing satellite hyperspectral data, especially fine spatial resolution data. For TG-1 imagery, the pixel-based urban classification was obtained with an average overall accuracy of 89.1%, whereas the hybrid urban classification was obtained with an average overall accuracy of 91.8%. For Hyperion imagery, the pixel-based urban classification was obtained with an average overall accuracy of 85.9%, whereas the hybrid urban classification was obtained with an average overall accuracy of 86.7%. Overall, it can be concluded that the fine spatial resolution satellite hyperspectral data TG-1 is promising in delineating complex urban scenes, especially when using an appropriate classifier, such as the

  14. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions of the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight of what implications the spectral consistency has for an image fusion method.

  15. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid fatal results. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique to detect abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images and normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results show promising results for the neural classifier in terms of the performance measures.

  16. Attribute Learning for SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-04-01

    Full Text Available This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR) images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean-to-variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are sorted out to construct an attribute dictionary. By resorting to the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classification task. The experiments conducted on TerraSAR-X images indicate that the learned attributes have strong visual semantics, characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results.

  17. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several subfields of deep learning, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results, our basic implementation of a convolutional restricted Boltzmann machine on the Modified National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  18. Fast Image Texture Classification Using Decision Trees

    Science.gov (United States)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
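
    The two ingredients named here, integral-image (summed-area-table) features and a decision-tree classifier, can be combined in a small sketch; the window sizes, sampling grid and toy labels are illustrative assumptions.

    ```python
    # Integral image for constant-time box sums, plus a decision tree trained on
    # simple box-filter responses as fast texture features.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def integral_image(img):
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] using the integral image (exclusive upper bounds)."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            total -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += ii[r0 - 1, c0 - 1]
        return total

    rng = np.random.default_rng(6)
    img = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
    ii = integral_image(img)

    # Per-pixel features: mean intensity in three nested windows around each sample point.
    feats, labels = [], []
    for r in range(8, 120, 4):
        for c in range(8, 120, 4):
            f = [box_sum(ii, r - w, c - w, r + w, c + w) / (2 * w) ** 2 for w in (2, 4, 8)]
            feats.append(f)
            labels.append(int(img[r, c] > 128))   # toy "texture class" label
    tree = DecisionTreeClassifier(max_depth=5).fit(feats, labels)
    print("training accuracy:", tree.score(feats, labels))
    ```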

  19. Interferometric Imaging of Geostationary Satellites: Signal-to-Noise Considerations

    Science.gov (United States)

    Jorgensen, A.; Schmitt, H.; Mozurkewich, D.; Armstrong, J.; Restaino, S.; Hindsley, R.

    2011-09-01

    Geostationary satellites are generally too small to image at high resolution with conventional single-dish telescopes. Obtaining many resolution elements across a typical geostationary satellite body requires a single-dish telescope with a diameter of tens of meters or more, with a good adaptive optics system. An alternative is to use an optical/infrared interferometer consisting of multiple smaller telescopes in an array configuration. In this paper and the companion papers [1, 2] we discuss the performance of a common-mount 30-element interferometer. The instrument design is presented by Mozurkewich et al. [1], and imaging performance is presented by Schmitt et al. [2]. In this paper we discuss the signal-to-noise ratio for both fringe-tracking and imaging. We conclude that the common-mount interferometer is sufficiently sensitive to track fringes on the majority of geostationary satellites. We also find that high-fidelity images can be obtained after a short integration time of a few minutes to a few tens of minutes.

  20. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge of a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization) can be excluded from our proposed approach; they jointly improve image representation and classification performance.

  1. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.

  2. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    Science.gov (United States)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
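
    The decision-fusion step can be sketched as follows, assuming per-pixel class-probability outputs from the two classifiers are already available; the confidence threshold and the simple high-confidence rule are illustrative stand-ins for the paper's fusion rules.

```python
# Sketch of confidence-gated decision fusion (assumed mechanics, not the paper's exact rules):
# trust the CNN where its softmax confidence is high, otherwise fall back to the MLP.
import numpy as np

def fuse_predictions(cnn_probs, mlp_probs, confidence_threshold=0.8):
    """cnn_probs, mlp_probs: (n_pixels, n_classes) class-probability arrays."""
    cnn_conf = cnn_probs.max(axis=1)
    cnn_labels = cnn_probs.argmax(axis=1)
    mlp_labels = mlp_probs.argmax(axis=1)
    return np.where(cnn_conf >= confidence_threshold, cnn_labels, mlp_labels)

# Toy usage with random "probabilities"
rng = np.random.default_rng(0)
cnn_p = rng.dirichlet(np.ones(5), size=100)
mlp_p = rng.dirichlet(np.ones(5), size=100)
fused = fuse_predictions(cnn_p, mlp_p)
```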

  3. Classifications of objects on hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    In the present work a classification method that combines the classic image classification approach and MIA is proposed. The basic idea is to group all pixels and calculate the spectral properties of each pixel group, to be used further as a vector of predictors for calibration and class prediction. The grouping can be done with mathematical morphology methods applied to a score image where objects are well separated. In the case of small overlaps, a watershed transformation can be applied to disjoin the objects. The method has been tested on several simulated and real cases and showed good results and significant improvements in comparison with a standard MIA approach. The results as well as method details will be reported.

  4. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy of 96.59% was achieved with NBCC, which is better than earlier methods.
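
    A minimal sketch of the NBCC idea (Naïve-Bayesian classification of continuous features) is shown below using scikit-learn; the OLAP-SQL feature extraction described in the record is replaced by a placeholder feature matrix.

```python
# Gaussian Naive Bayes on continuous echo features; data shapes are placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X = np.random.rand(207, 12)            # 207 images x 12 continuous features (placeholder)
y = np.random.randint(0, 2, 207)       # 0 = normal, 1 = abnormal (placeholder labels)

nbcc = GaussianNB()
scores = cross_val_score(nbcc, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```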

  5. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    Science.gov (United States)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which makes it possible to resolve the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
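
    The two-stage structure can be sketched as follows; an Otsu threshold stands in for the paper's C4-index thresholding, and connected components stand in for a proper segmentation, so the snippet only illustrates the pixel-mask plus object-level majority-vote idea.

```python
# Sketch: pixel-level shadow mask refined by an object-based majority vote.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def object_based_shadow_mask(intensity, segments=None):
    # Pixel-level mask: dark pixels below an automatic threshold (stand-in for the C4 index)
    pixel_mask = intensity < threshold_otsu(intensity)
    # Objects: either a user-supplied segmentation or connected components of the mask
    objects = label(pixel_mask) if segments is None else segments
    out = np.zeros_like(pixel_mask)
    for region in regionprops(objects):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        # Keep the object as shadow only if the majority of its pixels are shadow
        if pixel_mask[rows, cols].mean() > 0.5:
            out[rows, cols] = True
    return out

intensity = np.random.rand(128, 128)   # placeholder brightness image
shadow = object_based_shadow_mask(intensity)
```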

  6. A Subpixel Classification of Multispectral Satellite Imagery for Interpetation of Tundra-Taiga Ecotone Vegetation (Case Study on Tuliok River Valley, Khibiny, Russia)

    Science.gov (United States)

    Mikheeva, A. I.; Tutubalina, O. V.; Zimin, M. V.; Golubeva, E. I.

    2017-12-01

    The tundra-taiga ecotone plays a significant role in northern ecosystems. Due to global climatic changes, the vegetation of the ecotone is a key object of many remote-sensing studies. The interpretation of vegetation and non-vegetation objects of the tundra-taiga ecotone in satellite imagery of moderate resolution is complicated by the difficulty of extracting these objects from the spectral and spatial mixtures within a pixel. This article describes a method for the subpixel classification of a Terra ASTER satellite image for vegetation mapping of the tundra-taiga ecotone in the Tuliok River valley, Khibiny Mountains, Russia. It was demonstrated that this method allows the position of the boundaries of ecotone objects and their abundance to be determined on the basis of quantitative criteria, which provides a more accurate characterization of ecotone vegetation compared to the per-pixel approach to automatic imagery interpretation.
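
    A generic sketch of the subpixel (linear unmixing) idea follows: per-pixel endmember abundances estimated with non-negative least squares. The endmember spectra and band count are illustrative assumptions, not the authors' ASTER workflow.

```python
# Per-pixel linear spectral unmixing with a non-negativity constraint.
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)."""
    E = endmembers.T                        # (n_bands, n_endmembers)
    abundances = np.zeros((pixels.shape[0], endmembers.shape[0]))
    for i, spectrum in enumerate(pixels):
        a, _ = nnls(E, spectrum)            # non-negative abundances
        abundances[i] = a / a.sum() if a.sum() > 0 else a
    return abundances

# Toy example: 3 endmembers mixed into 9-band spectra
endmembers = np.abs(np.random.rand(3, 9))
pixels = (0.6 * endmembers[0] + 0.3 * endmembers[1] + 0.1 * endmembers[2]
          + 0.01 * np.random.rand(20, 9))
print(unmix(pixels, endmembers).round(2))
```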

  7. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving it. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  8. Object-Based Image Analysis of WORLDVIEW-2 Satellite Data for the Classification of Mangrove Areas in the City of SÃO LUÍS, MARANHÃO State, Brazil

    Science.gov (United States)

    Kux, H. J. H.; Souza, U. D. V.

    2012-07-01

    Taking into account the importance of mangrove environments for the biodiversity of coastal areas, the objective of this paper is to classify the different types of irregular human occupation in areas of mangrove vegetation in São Luis, capital of Maranhão State, Brazil, using the OBIA (Object-Based Image Analysis) approach with WorldView-2 satellite data and InterIMAGE, a free image analysis software package. A methodology for the study of the area covered by mangroves in the northern portion of the city was proposed to identify the main targets of this area, such as marsh areas (known locally as Apicum), mangrove forests, tidal channels, blockhouses (irregular constructions), embankments, paved streets and different condominiums. Initially, a databank including information on the main types of occupation and environments was established for the area under study. An image fusion (multispectral bands with the panchromatic band) was performed to improve the information content of the WorldView-2 data. Next, an ortho-rectification of the dataset was carried out in order to allow comparison with cartographic data from the municipality, using Ground Control Points (GCPs) collected during a field survey. Using the data mining software GEODMA, a series of attributes characterizing the targets of interest was established. Afterwards the classes were structured, a knowledge model was created and the classification was performed. The OBIA approach eased the mapping of such sensitive areas, showing the irregular occupations and embankments that reduce the mangrove forest area and damage marine biodiversity.

  9. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    Science.gov (United States)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic technique that enables both conventional endoscopic observation and ultramagnified observation at the cellular level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis requires considerable experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. At the first step, we classify an input image with a support vector machine. We forward the image to the second step if the confidence of the first classification is low. At the second step, we classify the forwarded image with a convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
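
    The cascade-with-rejection logic can be sketched as below; two scikit-learn classifiers stand in for the paper's SVM and CNN stages, and the confidence thresholds are illustrative assumptions.

```python
# Cascade sketch: fast first stage, second stage for low-confidence cases, rejection otherwise.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def cascade_predict(stage1, stage2, X, t1=0.9, t2=0.9):
    p1 = stage1.predict_proba(X)
    labels = p1.argmax(axis=1)
    unsure = p1.max(axis=1) < t1                     # forward to the second stage
    if unsure.any():
        p2 = stage2.predict_proba(X[unsure])
        second = p2.argmax(axis=1)
        second[p2.max(axis=1) < t2] = -1             # -1 = rejected (left to a human reader)
        labels[unsure] = second
    return labels

X_train = np.random.rand(200, 16)                    # placeholder image features
y_train = np.random.randint(0, 2, 200)               # 0 = non-neoplastic, 1 = neoplastic
stage1 = SVC(probability=True).fit(X_train, y_train)
stage2 = RandomForestClassifier().fit(X_train, y_train)
print(cascade_predict(stage1, stage2, np.random.rand(10, 16)))
```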

  10. Comparing three spaceborne optical sensors via fine scale pixel-based urban land cover classification products

    CSIR Research Space (South Africa)

    Breytenbach, Andre

    2013-08-01

    Full Text Available Accessibility to higher resolution earth observation satellites suggests an improvement in the potential for fine scale image classification. In this comparative study, imagery from three optical satellites (WorldView-2, Pléiades and RapidEye) were...

  11. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of photogrammetric theory. The combined design of key parameters ensures both the exact reconstruction of the line of sight of each detector element and the precise intersection of this direction with the Earth's ellipsoid at the moment the satellite camera is imaging. These parameters include orbit determination accuracy, attitude determination accuracy, camera exposure time, accurate synchronization of ephemeris reception with attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super low altitude remote sensing satellites is not markedly improved; the attitude determination error of a satellite still restricts its positioning accuracy.

  12. Fundamental Limitations for Imaging GEO Satellites

    Science.gov (United States)

    2015-10-18

    D. Mozurkewich (Seabrook Engineering, Seabrook, MD, USA); H. R. Schmitt, J. T. Armstrong (Naval…).

  13. Medical image transmission via communication satellite. Evaluation of bone scintigraphy

    International Nuclear Information System (INIS)

    Suzuki, Hideki; Inoue, Tomio; Endo, Keigo; Shimamoto, Shigeru.

    1995-01-01

    As compared with terrestrial circuits, the communication satellite possesses superior characteristics such as wide area coverage, broadcasting, high capacity, and robustness to disasters. Utilizing the narrow-band channel (64 kbps) of the geostationary satellite JCSAT 1, located at an altitude of 36,000 km above the equator, the authors investigated satellite-relayed medical images transmitted as video signals, with bone scintigraphy as a model. Each bone scintigram was captured with a handheld video camera, digitized and transmitted from a faculty of technology located 25 kilometers from our department. Clear bone scintigraphy was obtained via satellite, as seen on the view box. Eight nuclear physicians evaluated 20 cases of bone scintigraphy. ROC (Receiver Operating Characteristic) analysis was performed between the scintigrams read on the view box and via satellite, using the rating method. The area under the ROC curve was 91.6±2.6% via satellite and 93.2±2.4% on the view box, with no significant difference between them. These results suggest that satellite communication is a very useful and effective system for sending nuclear medicine images to distant institutes. (author)

  14. [Medical image transmission via communication satellite: evaluation of bone scintigraphy].

    Science.gov (United States)

    Suzuki, H; Inoue, T; Endo, K; Shimamoto, S

    1995-10-01

    As compared with terrestrial circuits, the communication satellite possesses superior characteristics such as wide area coverage, broadcasting, high capacity, and robustness to disasters. Utilizing the narrow-band channel (64 kbps) of the geostationary satellite JCSAT1, located at an altitude of 36,000 km above the equator, the authors investigated satellite-relayed medical images transmitted as video signals, with bone scintigraphy as a model. Each bone scintigram was captured with a handheld video camera, digitized and transmitted from a faculty of technology located 25 kilometers from our department. Clear bone scintigraphy was obtained via satellite, as seen on the view box. Eight nuclear physicians evaluated 20 cases of bone scintigraphy. ROC (Receiver Operating Characteristic) analysis was performed between the scintigrams read on the view box and via satellite, using the rating method. The area under the ROC curve was 91.6 +/- 2.6% via satellite and 93.2 +/- 2.4% on the view box, with no significant difference between them. These results suggest that satellite communication is a very useful and effective system for sending nuclear medicine images to distant institutes.

  15. Improved Wetland Classification Using Eight-Band High Resolution Satellite Imagery and a Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Charles R. Lane

    2014-12-01

    Full Text Available Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of wetland maps derived with moderate resolution imagery and traditional techniques have been limited and often unsatisfactory. We explored and evaluated the utility of a newly launched high-resolution, eight-band satellite system (WorldView-2; WV2) for identifying and classifying freshwater deltaic wetland vegetation and aquatic habitats in the Selenga River Delta of Lake Baikal, Russia, using a hybrid approach and a novel application of Indicator Species Analysis (ISA). We achieved an overall classification accuracy of 86.5% (Kappa coefficient: 0.85) for 22 classes of aquatic and wetland habitats and found that additional metrics, such as the Normalized Difference Vegetation Index and image texture, were valuable for improving the overall classification accuracy and particularly for discriminating among certain habitat classes. Our analysis demonstrated that including WV2's four spectral bands from parts of the spectrum less commonly used in remote sensing analyses, along with the more traditional bandwidths, contributed to an increase in the overall classification accuracy of ~4%, with considerable increases in our ability to discriminate certain communities. The coastal band improved the differentiation of open water and aquatic (i.e., vegetated) habitats, and the yellow, red-edge, and near-infrared 2 bands improved discrimination among different vegetated aquatic and terrestrial habitats. The use of ISA provided statistical rigor in developing associations between spectral classes and field-based data. Our analyses demonstrated the utility of a hybrid approach and the benefit of additional bands and metrics in providing the first spatially explicit mapping of a large and heterogeneous wetland system.

  16. Deep learning for tumor classification in imaging mass spectrometry.

    Science.gov (United States)

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data, as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis, revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. Availability and implementation: https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. Contact: jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  17. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    Science.gov (United States)

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.

  18. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial

  19. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
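
    A simplified sketch of the LBP texture step is given below (band selection with LPE and the PCANet stage are omitted); a uniform-LBP histogram per band is stacked with a spectral summary to form a descriptor.

```python
# Simplified LBP texture features stacked with spectral information.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(band_u8, P=8, R=1.0):
    codes = local_binary_pattern(band_u8, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

cube = np.random.rand(64, 64, 10)                    # toy hyperspectral cube (rows, cols, bands)
bands_u8 = (cube * 255).astype(np.uint8)
texture = np.concatenate([lbp_histogram(bands_u8[:, :, b]) for b in range(cube.shape[2])])
spectral = cube.reshape(-1, cube.shape[2]).mean(axis=0)      # e.g. a mean-spectrum summary
descriptor = np.concatenate([spectral, texture])     # stacked spectral + texture vector
```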

  20. Classification of quantitative light-induced fluorescence images using convolutional neural network

    NARCIS (Netherlands)

    Imangaliyev, S.; van der Veen, M.H.; Volgenant, C.M.C.; Loos, B.G.; Keijser, B.J.F.; Crielaard, W.; Levin, E.; Lintas, A.; Rovetta, S.; Verschure, P.F.M.J.; Villa, A.E.P.

    2017-01-01

    Images are an important data source for diagnosis of oral diseases. The manual classification of images may lead to suboptimal treatment procedures due to subjective errors. In this paper an image classification algorithm based on Deep Learning framework is applied to Quantitative Light-induced

  1. Kansas Satellite Image Database (KSID) 2004-2005

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Satellite Image Database (KSID) 2004-2005 consists of terrain-corrected, precision rectified spring, summer, and fall Landsat 5 Thematic Mapper (TM)...

  2. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as classification speed, precision and recall rate. Block-based classification approaches normally divide compound images into fixed-size, non-overlapping blocks. Then a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is applied to each block. The mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of the block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and complex-background images containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides an improvement in recall and precision rates of approximately 2.3% over DCT-based segmentation, with an increase in block classification time for both smooth and complex background images.
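
    The block-classification step can be sketched roughly as follows; only the spread of the AC DCT coefficients is thresholded here, and the threshold value is an assumption rather than a trained parameter.

```python
# Sketch: 8x8 DCT per block, label blocks as text/graphics vs. picture/background
# from the spread of the transform coefficients.
import numpy as np
from scipy.fft import dctn

def classify_blocks(gray, block=8, std_threshold=30.0):
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    labels = np.zeros((h // block, w // block), dtype=np.uint8)   # 0 = picture, 1 = text/graphics
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(gray[i:i + block, j:j + block], norm="ortho")
            ac = coeffs.copy()
            ac[0, 0] = 0.0                                        # drop the DC term
            if ac.std() > std_threshold:                          # strong edges -> text/graphics
                labels[i // block, j // block] = 1
    return labels

gray = np.random.randint(0, 256, (128, 128)).astype(float)        # toy screen image
print(classify_blocks(gray).mean())
```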

  3. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
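
    A rough sketch of a hybrid DWT-DCT codec in the spirit of this record is shown below; one Haar DWT level, a DCT of the approximation subband and simple coefficient zeroing stand in for the authors' actual thresholding, zero-padding and quantization design.

```python
# Hybrid DWT-DCT sketch (assumed structure, not the paper's exact pipeline).
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_compress(img, keep_fraction=0.1):
    LL, (LH, HL, HH) = pywt.dwt2(img, "haar")          # single DWT level
    C = dctn(LL, norm="ortho")                         # DCT of the approximation band
    thresh = np.quantile(np.abs(C), 1.0 - keep_fraction)
    C[np.abs(C) < thresh] = 0.0                        # discard small coefficients
    LL_rec = idctn(C, norm="ortho")
    # For brevity the detail bands are dropped entirely in this sketch
    return pywt.idwt2((LL_rec, (np.zeros_like(LH),) * 3), "haar")

img = np.random.rand(256, 256) * 255.0                 # toy image
rec = hybrid_compress(img)
print("RMSE:", np.sqrt(np.mean((rec - img) ** 2)))
```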

  4. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

  5. Mapping soil heterogeneity using RapidEye satellite images

    Science.gov (United States)

    Piccard, Isabelle; Eerens, Herman; Dong, Qinghan; Gobin, Anne; Goffart, Jean-Pierre; Curnel, Yannick; Planchon, Viviane

    2016-04-01

    Within the framework of BELCAM, a project funded by the Belgian Science Policy Office (BELSPO), researchers from UCL, ULg, CRA-W and VITO aim to set up a collaborative system to develop and deliver relevant information for agricultural monitoring in Belgium. The main objective is to develop remote sensing methods and processing chains able to ingest crowd-sourced data, provided by farmers or associated partners, and to deliver in return relevant and up-to-date information for crop monitoring at the field and district level based on Sentinel-1 and -2 satellite imagery. One of the developments within BELCAM concerns an automatic procedure to detect soil heterogeneity within a parcel using optical high resolution images. Such heterogeneity maps can be used to adjust farming practices according to the detected heterogeneity. This heterogeneity may for instance be caused by differences in the mineral composition of the soil, organic matter content, soil moisture or soil texture. Local differences in plant growth may be indicative of differences in soil characteristics. As such, remote-sensing-derived vegetation indices may be used to reveal soil heterogeneity. VITO started to delineate homogeneous zones within parcels by analyzing a series of RapidEye images acquired in 2015 (as a precursor for Sentinel-2). Both unsupervised classification (ISODATA, K-means) and segmentation techniques were tested. Heterogeneity maps were generated from images acquired at different moments during the season (13 May, 30 June, 17 July, 31 August, 11 September and 1 November 2015). Tests were performed using blue, green, red, red edge and NIR reflectances separately and using derived indices such as NDVI, fAPAR, CIrededge, NDRE2. The results for selected winter wheat, maize and potato fields were evaluated together with experts from the collaborating agricultural research centers. For a few fields UAV images and/or yield measurements were available for comparison.
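
    The zoning idea can be sketched with a per-parcel NDVI raster clustered by k-means; the band arrays, the cluster count and the use of NDVI alone are illustrative assumptions.

```python
# Sketch: NDVI per pixel for one parcel, clustered into "homogeneous" zones.
import numpy as np
from sklearn.cluster import KMeans

def ndvi(red, nir, eps=1e-6):
    return (nir - red) / (nir + red + eps)

red = np.random.rand(100, 100)                  # placeholder red band (parcel subset)
nir = np.random.rand(100, 100)                  # placeholder NIR band
v = ndvi(red, nir)

zones = (KMeans(n_clusters=3, n_init=10, random_state=0)
         .fit_predict(v.reshape(-1, 1))
         .reshape(v.shape))                     # per-pixel zone labels for the parcel
```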

  6. Statistical methods for segmentation and classification of images

    DEFF Research Database (Denmark)

    Rosholm, Anders

    1997-01-01

    The central matter of the present thesis is Bayesian statistical inference applied to the classification of images. An initial review of Markov Random Fields relates to the modeling aspect of the main subject. In that connection, emphasis is put on the relatively unknown sub-class of Pickard Random Fields … with a Pickard Random Field modeling of a considered (categorical) image phenomenon. An extension of the fast PRF-based classification technique is presented. The modification introduces auto-correlation into the model of an involved noise process, which had previously been assumed independent. The suitability of the extended model is documented by tests on controlled image data containing auto-correlated noise.

  7. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    Science.gov (United States)

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.

  8. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

    Full Text Available In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple view data is to concatenate these feature vectors as a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also ends up with “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.

  9. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  10. Land use classification from Sentinel-2 imagery

    OpenAIRE

    Borràs, J.; Delegido, J.; Pezzola, A.; Pereira, M.; Morassi, G.; Camps-Valls, G.

    2017-01-01

    [EN] Sentinel-2 (S2), a new ESA satellite for Earth observation, provides 13 bands of high-quality radiometric imagery with an excellent spatial resolution (10 and 20 m), ideal for classification purposes. In this paper, two objectives have been addressed: to determine the best classification method for S2, and to quantify its improvement with respect to the SPOT operational mission. To do so, four classifiers (LDA, RF, Decision Trees, K-NN) have been selected and applied to tw...

  11. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied to a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  12. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification.

    Science.gov (United States)

    Song, Yang; Li, Qing; Huang, Heng; Feng, Dagan; Chen, Mei; Cai, Weidong

    2017-08-01

    Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and the commonly used dimension reduction techniques.

  13. Radiomic features analysis in computed tomography images of lung nodule classification.

    Directory of Open Access Journals (Sweden)

    Chia-Hung Chen

    Full Text Available Radiomics, which extracts large numbers of quantitative image features from diagnostic medical images, has been widely used for prognostication, treatment response prediction and cancer detection. The treatment options for lung nodules depend on their diagnosis, benign or malignant. Conventionally, lung nodule diagnosis is based on invasive biopsy. Recently, radiomics features, a non-invasive method based on clinical images, have shown high potential in lesion classification and treatment outcome prediction. Lung nodule classification using radiomics based on Computed Tomography (CT) image data was investigated, and a 4-feature signature was introduced for lung nodule classification. Retrospectively, 72 patients with 75 pulmonary nodules were collected. Radiomics feature extraction was performed on non-enhanced CT images with contours delineated by an experienced radiation oncologist. Among the 750 image features in each case, 76 features were found to have significant differences between benign and malignant lesions. A radiomics signature was composed of the best 4 features, which included Laws_LSL_min, Laws_SLL_energy, Laws_SSL_skewness and Laws_EEL_uniformity. The accuracy of the signature in benign/malignant classification was 84%, with a sensitivity of 92.85% and a specificity of 72.73%. The classification signature based on radiomics features demonstrated very good accuracy and high potential for clinical application.

  14. Time Series of Images to Improve Tree Species Classification

    Science.gov (United States)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change with the season. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors carried by Unmanned Aerial Vehicles (UAV). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of a time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: one with the image spectra of 2015 only, one with the image spectra of 2016 only, and one with the layer stacking of the images from 2015 and 2016. Four tree species were classified using the Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and Random Forest (RF). The results showed that SAM and SID overfitted the data, whereas RF showed better results, and the use of layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
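
    Spectral angle mapper (SAM) classification, one of the classifiers used here, can be sketched compactly: each pixel is assigned to the class whose reference spectrum subtends the smallest angle. The reference spectra below are placeholders.

```python
# Minimal SAM classification sketch.
import numpy as np

def sam_classify(pixels, references):
    """pixels: (n_pixels, n_bands); references: (n_classes, n_bands)."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cosines = np.clip(p @ r.T, -1.0, 1.0)
    angles = np.arccos(cosines)                  # (n_pixels, n_classes)
    return angles.argmin(axis=1)                 # class of the smallest spectral angle

pixels = np.abs(np.random.rand(500, 25))         # toy spectra
references = np.abs(np.random.rand(4, 25))       # mean spectra of the four tree species
labels = sam_classify(pixels, references)
```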

  15. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and the personal computer, followed by the classification of personal computer image processing systems, digital image processing, the development of personal computers and image processing, image processing systems, basic image processing methods such as color image processing and video processing, software and interfaces, computer graphics, and video images and video processing, and closes with application cases of image processing such as satellite image processing, high-speed color transformation, and portrait work systems.

  16. MULTI-TEMPORAL CLASSIFICATION AND CHANGE DETECTION USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Makuti

    2018-05-01

    Full Text Available In this paper different methodologies for the classification and change detection of UAV image blocks are explored. UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area like a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method. This was analysed using the overall accuracy, whereby the post-classification approach achieved an accuracy of up to 62.6 % while the pre-classification change detection achieved 46.5 %. These results represent a first useful indication for future work and developments.
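
    The GLCM texture features mentioned above can be computed per image patch as sketched below (recent scikit-image naming is assumed; older releases call these functions greycomatrix/greycoprops).

```python
# Sketch of per-patch GLCM texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,), angles=(0, np.pi / 2)):
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")])

patch = np.random.randint(0, 256, (32, 32), dtype=np.uint8)   # toy grey-level UAV patch
print(glcm_features(patch))
```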

  17. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    Science.gov (United States)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and even of remote sensing as a whole. Although various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus, further investigation of some aspects, such as dimension reduction, data mining, and the rational use of spatial information, is needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering that Euclidean distance is not well suited to spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement classification instead of support vector machines (SVM), for simplicity, generalization and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, taking into account the spatial information of the hyperspectral image, we employ a spatial vector formed by the ratios of the different classes around the pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we carried out multiple experiments on standard hyperspectral images and compared the results with those of other methods. The results and different evaluation indexes illustrate the effectiveness of our method.

  18. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  19. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism under SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients instead of the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the non-negativity constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show how to incorporate spatial-spectral information efficiently in the regularization framework.

  20. Biomass estimation with high resolution satellite images: A case study of Quercus rotundifolia

    Science.gov (United States)

    Sousa, Adélia M. O.; Gonçalves, Ana Cristina; Mesquita, Paulo; Marques da Silva, José R.

    2015-03-01

    Forest biomass has had a growing importance in the world economy as a global strategic reserve, due to applications in bioenergy, bioproduct development and issues related to reducing greenhouse gas emissions. Current techniques used for forest inventory are usually time consuming and expensive. Thus, there is an urgent need to develop reliable, low cost methods that can be used for forest biomass estimation and monitoring. This study uses new techniques to process high spatial resolution satellite images (0.70 m) in order to assess and monitor forest biomass. A multi-resolution segmentation method and object-oriented classification are used to obtain the area of tree canopy horizontal projection for Quercus rotundifolia. Forest inventory allows for the calculation of tree and canopy horizontal projection and biomass, the latter with allometric functions. The two data sets are used to develop linear functions to assess above-ground biomass, with crown horizontal projection as an independent variable. The functions for the cumulative values, both for inventory and satellite data, for a prediction error equal to or smaller than that of the Portuguese national forest inventory (7%), correspond to stand areas of 0.5 ha, which include most of the Q. rotundifolia stands.

  1. Collaborative classification of hyperspectral and visible images with convolutional neural network

    Science.gov (United States)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.

  2. The SUMO Ship Detector Algorithm for Satellite Radar Images

    Directory of Open Access Journals (Sweden)

    Harm Greidanus

    2017-03-01

    Full Text Available Search for Unidentified Maritime Objects (SUMO is an algorithm for ship detection in satellite Synthetic Aperture Radar (SAR images. It has been developed over the course of more than 15 years, using a large amount of SAR images from almost all available SAR satellites operating in L-, C- and X-band. As validated by benchmark tests, it performs very well on a wide range of SAR image modes (from Spotlight to ScanSAR and resolutions (from 1–100 m and for all types and sizes of ships, within the physical limits imposed by the radar imaging. This paper describes, in detail, the algorithmic approach in all of the steps of the ship detection: land masking, clutter estimation, detection thresholding, target clustering, ship attribute estimation and false alarm suppression. SUMO is a pixel-based CFAR (Constant False Alarm Rate detector for multi-look radar images. It assumes a K distribution for the sea clutter, corrected however for deviations of the actual sea clutter from this distribution, implementing a fast and robust method for the clutter background estimation. The clustering of detected pixels into targets (ships uses several thresholds to deal with the typically irregular distribution of the radar backscatter over a ship. In a multi-polarization image, the different channels are fused. Azimuth ambiguities, a common source of false alarms in ship detection, are removed. A reliability indicator is computed for each target. In post-processing, using the results of a series of images, additional false alarms from recurrent (fixed targets including range ambiguities are also removed. SUMO can run in semi-automatic mode, where an operator can verify each detected target. It can also run in fully automatic mode, where batches of over 10,000 images have successfully been processed in less than two hours. The number of satellite SAR systems keeps increasing, as does their application to maritime surveillance. The open data policy of the EU
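
    A much-simplified cell-averaging CFAR sketch is given below to illustrate the detection-thresholding principle; SUMO itself models K-distributed clutter with corrections, whereas this stand-in assumes Gaussian clutter statistics estimated from a ring of background cells around each pixel.

```python
# Simplified cell-averaging CFAR for intensity SAR images (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=2, background=9, k=5.0):
    """Flag pixels exceeding local mean + k * local std of the surrounding clutter ring."""
    big = uniform_filter(intensity, size=2 * background + 1)
    big2 = uniform_filter(intensity ** 2, size=2 * background + 1)
    small = uniform_filter(intensity, size=2 * guard + 1)
    small2 = uniform_filter(intensity ** 2, size=2 * guard + 1)
    n_big, n_small = (2 * background + 1) ** 2, (2 * guard + 1) ** 2
    n_ring = n_big - n_small                          # background cells excluding the guard area
    ring_mean = (big * n_big - small * n_small) / n_ring
    ring_var = (big2 * n_big - small2 * n_small) / n_ring - ring_mean ** 2
    return intensity > ring_mean + k * np.sqrt(np.maximum(ring_var, 0.0))

sar = np.random.gamma(1.0, 1.0, (256, 256))           # toy speckled sea clutter
sar[100:104, 120:126] += 30.0                         # a bright "ship"
detections = ca_cfar(sar)
```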

  3. Contribution of non-negative matrix factorization to the classification of remote sensing images

    Science.gov (United States)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by realizing maps of land cover using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) using HRV SPOT images (from Oran area, Algeria). These images are then used as inputs of a supervised classifier integrating textural information. The results of classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of initial (i.e. non separated) images. These results show the contribution of NMF as an attractive pre-processing for classification of multispectral remote sensing imagery.
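
    As a rough illustration of this pre-processing idea, the sketch below factorizes a pixel-by-band matrix with scikit-learn's NMF so that the per-pixel abundances can be reshaped into "separated" images and fed to a classifier. It uses synthetic data and plain NMF (the sparse-coding component of the paper is omitted); shapes and parameters are illustrative only.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic multispectral cube: rows x cols pixels, each with n_bands reflectances.
rows, cols, n_bands, n_sources = 100, 100, 4, 3
rng = np.random.default_rng(42)
cube = rng.uniform(0.0, 1.0, size=(rows, cols, n_bands))

# Regard every pixel as a non-negative mixture of a few "pure" spectra:
# X (pixels x bands) ~ A (pixels x sources) @ S (sources x bands).
X = cube.reshape(-1, n_bands)
model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
A = model.fit_transform(X)        # per-pixel abundances (the "separated" images)
S = model.components_             # estimated source spectra

# Each abundance column can be reshaped back into an image and used as an
# extra input layer for a supervised classifier.
separated_images = A.reshape(rows, cols, n_sources)
print(separated_images.shape, S.shape)
```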

  4. Biomedical imaging modality classification using combined visual features and textual terms.

    Science.gov (United States)

    Han, Xian-Hua; Chen, Yen-Wei

    2011-01-01

    We describe an approach for the automatic modality classification in medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper is focused on the process of feature extraction from medical images and fuses the different extracted visual features and textual feature for modality classification. To extract visual features from the images, we used histogram descriptor of edge, gray, or color intensity and block-based variation as global features and SIFT histogram as local feature. For textual feature of image representation, the binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easy misclassified modality pairs such as CT and MR or PET and NM modalities, a local classifier is used for distinguishing samples in the pair modality to improve performance. The proposed strategy is evaluated with the provided modality dataset by ImageCLEF 2010.

  5. AN APPROACH FOR STITCHING SATELLITE IMAGES IN A BIGDATA MAPREDUCE FRAMEWORK

    Directory of Open Access Journals (Sweden)

    H. Sarı

    2017-11-01

    Full Text Available In this study we present a two-step map/reduce framework to stitch satellite mosaic images. The proposed system enables recognition and extraction of objects whose parts fall in separate satellite mosaic images. However, this is a time- and resource-consuming process. The major aim of the study is to improve the performance of the image stitching process by utilizing a big data framework. To realize this, we first convert the images into bitmaps (first mapper) and then into String format in the form of 255s and 0s (second mapper), and finally find the best possible matching position of the images by a reduce function.

  6. An Approach for Stitching Satellite Images in a Bigdata Mapreduce Framework

    Science.gov (United States)

    Sarı, H.; Eken, S.; Sayar, A.

    2017-11-01

    In this study we present a two-step map/reduce framework to stitch satellite mosaic images. The proposed system enables recognition and extraction of objects whose parts fall in separate satellite mosaic images. However, this is a time- and resource-consuming process. The major aim of the study is to improve the performance of the image stitching process by utilizing a big data framework. To realize this, we first convert the images into bitmaps (first mapper) and then into String format in the form of 255s and 0s (second mapper), and finally find the best possible matching position of the images by a reduce function.

  7. Detecting aircrafts from satellite images using saliency and conical ...

    Indian Academy of Sciences (India)

    Samik Banerjee

    automatically detect all kinds of interesting targets in satellite images. .... which is used for text and image categorization, has also been introduced for object ...... 3.4 GHz processor, 32 GB RAM and Windows 7 (64 bit) Operating System.

  8. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, a SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is firstly predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that the leverage of a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
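
    A toy sketch of the general idea (not the authors' SCT-SVM): linearly extrapolate the weights of linear classifiers trained on previous images to predict a rough classifier for the incoming image, then fine-tune it with a few hinge-loss subgradient steps on the small current training set. The random weights standing in for previous classifiers, the two-date history, the learning rate, and the data are purely illustrative.

```python
import numpy as np

def finetune_hinge(w, b, X, y, lr=0.01, epochs=50, lam=1e-3):
    """Fine-tune a linear classifier (labels in {-1, +1}) with a few epochs of
    subgradient descent on the regularised hinge loss."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w
    return w, b

# Weight vectors of classifiers from two previous images (random placeholders);
# predict the next classifier by linear extrapolation of the temporal trend,
# then refine it with a handful of current-date training samples.
rng = np.random.default_rng(1)
d = 5
w1, b1 = rng.normal(size=d), 0.0
w2, b2 = w1 + 0.1, 0.1
w_pred, b_pred = 2 * w2 - w1, 2 * b2 - b1          # rough predicted classifier

X_small = rng.normal(size=(20, d))                 # few labelled samples
y_small = np.sign(X_small @ (w2 + 0.2) + 0.2)      # synthetic labels
w_new, b_new = finetune_hinge(w_pred.copy(), b_pred, X_small, y_small)
print(np.round(w_new, 2), round(float(b_new), 2))
```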

  9. Design of an Image Motion Compensation (IMC) Algorithm for Image Registration of the Communication, Ocean, and Meteorological Satellite (COMS-1)

    Directory of Open Access Journals (Sweden)

    Taek Seo Jung

    2006-03-01

    Full Text Available This paper presents an Image Motion Compensation (IMC) algorithm for Korea's Communication, Ocean, and Meteorological Satellite (COMS-1). An IMC algorithm is a priority component of image registration in the Image Navigation and Registration (INR) system to locate and register radiometric image data. Due to various perturbations, a satellite has orbit and attitude errors with respect to a reference motion. These errors cause depointing of the imager aiming direction, and in consequence cause image distortions. To correct the depointing of the imager aiming direction, a compensation algorithm is designed by adapting different equations from those used for the GOES satellites. The capability of the algorithm is compared with that of the existing algorithm applied to the GOES's INR system. The algorithm developed in this paper improves pointing accuracy by 40%, and efficiently compensates the depointings of the imager aiming direction.

  10. The Application of the Technology of 3D Satellite Cloud Imaging in Virtual Reality Simulation

    Directory of Open Access Journals (Sweden)

    Xiao-fang Xie

    2007-05-01

    Full Text Available Using satellite cloud images to simulate clouds is one of the new visual simulation technologies in Virtual Reality (VR). Taking the original data of satellite cloud images as the source, this paper describes in detail the technology of 3D satellite cloud imaging through coordinate transformation and projection, the creation of a DEM (Digital Elevation Model) of the cloud image, and 3D simulation. A Mercator projection was introduced to create a cloud image DEM, while solutions for geodetic problems were introduced to calculate distances, and the outer-trajectory science of rockets was introduced to obtain the elevation of clouds. For demonstration, we report on a computer program to simulate the 3D satellite cloud images.

  11. Geometric calibration of ERS satellite SAR images

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    2001-01-01

    Geometric calibration of the European Remote Sensing (ERS) Satellite synthetic aperture radar (SAR) slant range images is important in relation to mapping areas without ground reference points and also in relation to automated processing. The relevant SAR system parameters are discussed...

  12. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expressions, and classifying phenotypes. To provide effective and efficient image classification and annotation for the ever-increasing microscopic images, it is desirable to have tools that can combine and compare various algorithms, and build customizable solution for different biological problems. However, current tools often offer a limited solution in generating user-friendly and extensible tools for annotating higher dimensional images that correspond to multiple complicated categories. We develop the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets as well as regions of interest (ROIs) in individual images for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of the about 20 built-in algorithms of feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form adaptive solution for various problems, and the plugin-based extensibility gives the tool an open architecture to incorporate future algorithms. We have applied BIOCAT to classification and annotation of images and ROIs of different properties with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based biological image classification of two- and three- dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological

  13. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented with multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR compared to state-of-the-art algorithms.

  14. Dealing with missing data in remote sensing images within land and crop classification

    Science.gov (United States)

    Skakun, Sergii; Kussul, Nataliia; Basarab, Ruslan

    of non-missing data to the subspace vectors in the map. Restoration of the missing values is performed in the following way. The multi-temporal pixel values (with gaps) are fed to the neural network. A neuron-winner (or a best matching unit, BMU) in the SOM is selected based on a distance metric (for example, Euclidean). It should be noted that missing values are omitted from the metric estimation when selecting the BMU. When the BMU is selected, missing values are substituted by the corresponding components of the BMU values. The efficiency of the proposed approach was tested on a time series of Landsat-8 images over the JECAM test site in Ukraine and Sich-2 images over Crimea (Sich-2 is a Ukrainian remote sensing satellite acquiring images at 8 m spatial resolution). Landsat-8 images were first converted to TOA reflectance and then atmospherically corrected so that each pixel value represents a surface reflectance in the range from 0 to 1. The error of reconstruction (error of quantization) on training data was: band-2: 0.015; band-3: 0.020; band-4: 0.026; band-5: 0.070; band-6: 0.060; band-7: 0.055. The reconstructed images were also used for crop classification using a multi-layer perceptron (MLP). Overall accuracy was 85.98% and Cohen's kappa was 0.83. References. 1. Skakun, S., Kussul, N., Shelestov, A. and Kussul, O. “Flood Hazard and Flood Risk Assessment Using a Time Series of Satellite Images: A Case Study in Namibia,” Risk Analysis, 2013, doi: 10.1111/risa.12156. 2. Gallego, F.J., Kussul, N., Skakun, S., Kravchenko, O., Shelestov, A., Kussul, O. “Efficiency assessment of using satellite data for crop area estimation in Ukraine,” International Journal of Applied Earth Observation and Geoinformation, vol. 29, pp. 22-30, 2014. 3. Roy D.P., Ju, J., Lewis, P., Schaaf, C., Gao, F., Hansen, M., and Lindquist, E., “Multi-temporal MODIS-Landsat data fusion for relative radiometric normalization, gap filling, and prediction of Landsat data,” Remote Sensing of
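
    The gap-filling step described above can be sketched as follows: select the best matching unit using a Euclidean distance computed only over the observed components, then substitute the missing components from the BMU codebook vector. The SOM weights here are random placeholders standing in for a trained map; this is only an illustration of the imputation rule, not the authors' full pipeline.

```python
import numpy as np

def impute_with_bmu(pixel, som_weights):
    """Fill NaNs in a multi-temporal pixel vector using its best matching unit.

    `som_weights` has shape (n_units, n_features); missing components are
    excluded from the distance, as described above."""
    observed = ~np.isnan(pixel)
    # Squared Euclidean distance computed only over the observed components.
    diffs = som_weights[:, observed] - pixel[observed]
    bmu = np.argmin(np.einsum("ij,ij->i", diffs, diffs))
    filled = pixel.copy()
    filled[~observed] = som_weights[bmu, ~observed]
    return filled

# Toy example: a 6-date reflectance series with two cloudy (missing) dates,
# and a "trained" SOM represented by random codebook vectors.
rng = np.random.default_rng(0)
som = rng.uniform(0, 1, size=(100, 6))
pixel = np.array([0.12, np.nan, 0.25, 0.31, np.nan, 0.18])
print(impute_with_bmu(pixel, som))
```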

  15. Ice Sheet Change Detection by Satellite Image Differencing

    Science.gov (United States)

    Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.

    2010-01-01

    Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM+ sensor on Landsat-7, ALI sensor on EO-1, and MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.

  16. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be useful to monitor geographical areas through time. How to efficiently analyze this kind of information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e. Convolutional Neural Networks - CNNs - on single images), while only very few studies involve temporal deep learning approaches (i.e. Recurrent Neural Networks - RNNs) to deal with remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that Recurrent Neural Networks are competitive compared to state-of-the-art classifiers, and may outperform classical approaches in the presence of low-represented and/or highly mixed classes. We also show that using the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.
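
    A minimal PyTorch sketch of this kind of model: an LSTM reads a per-pixel (or per-object) time series of spectral vectors and a linear head produces land-cover logits from the last hidden state. Layer sizes, band count, and data are illustrative, not the configuration used in the letter.

```python
import torch
import torch.nn as nn

class SITSLSTM(nn.Module):
    """Classify a satellite image time series per pixel or per object:
    input shape (batch, time_steps, n_bands) -> class logits."""
    def __init__(self, n_bands=4, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1])       # classify from the last hidden state

# Toy batch: 32 pixels, 12 acquisition dates, 4 spectral bands.
model = SITSLSTM()
x = torch.randn(32, 12, 4)
y = torch.randint(0, 8, (32,))
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, y)
loss.backward()
print(logits.shape, float(loss))
```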

  17. Automated otolith image classification with multiple views: an evaluation on Sciaenidae.

    Science.gov (United States)

    Wong, J Y; Chu, C; Chong, V C; Dhillon, S K; Loh, K H

    2016-08-01

    Combined multiple 2D views (proximal, anterior and ventral aspects) of the sagittal otolith are proposed here as a method to capture shape information for fish classification. Classification performance of single view compared with combined 2D views show improved classification accuracy of the latter, for nine species of Sciaenidae. The effects of shape description methods (shape indices, Procrustes analysis and elliptical Fourier analysis) on classification performance were evaluated. Procrustes analysis and elliptical Fourier analysis perform better than shape indices when single view is considered, but all perform equally well with combined views. A generic content-based image retrieval (CBIR) system that ranks dissimilarity (Procrustes distance) of otolith images was built to search query images without the need for detailed information of side (left or right), aspect (proximal or distal) and direction (positive or negative) of the otolith. Methods for the development of this automated classification system are discussed. © 2016 The Fisheries Society of the British Isles.

  18. COMBINATION OF GENETIC ALGORITHM AND DEMPSTER-SHAFER THEORY OF EVIDENCE FOR LAND COVER CLASSIFICATION USING INTEGRATION OF SAR AND OPTICAL SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    H. T. Chu

    2012-07-01

    Full Text Available The integration of different kinds of remotely sensed data, in particular Synthetic Aperture Radar (SAR) and optical satellite imagery, is considered a promising approach for land cover classification because of the complementary properties of each data source. However, the challenges are: how to fully exploit the capabilities of these multiple data sources, which combined datasets should be used, and which data processing and classification techniques are most appropriate in order to achieve the best results. In this paper an approach in which a feature selection (FS) method based on a Genetic Algorithm (GA) is used synergistically with a combination of multiple classifiers based on the Dempster-Shafer Theory of Evidence is proposed and evaluated for classifying land cover features in New South Wales, Australia. Multi-date SAR data, including ALOS/PALSAR and ENVISAT/ASAR, and optical (Landsat 5 TM) images were used for this study. Textural information was also derived and integrated with the original images. Various combined datasets were generated for classification. Three classifiers, namely Artificial Neural Network (ANN), Support Vector Machines (SVMs) and Self-Organizing Map (SOM), were employed. Firstly, feature selection using GA was applied for each classifier and dataset to determine the optimal input features and parameters. Then the results of the three classifiers on particular datasets were combined using the Dempster-Shafer Theory of Evidence. Results of this study demonstrate the advantages of the proposed method for land cover mapping using complex datasets. It is revealed that the use of GA in conjunction with the Dempster-Shafer Theory of Evidence can significantly improve the classification accuracy. Furthermore, integration of SAR and optical data often outperforms single-type datasets.
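
    The decision-fusion step can be illustrated with a small sketch of Dempster's rule of combination, here fusing the mass functions of two classifiers over a three-class frame of discernment. The mass values, class names, and the way classifier outputs are turned into masses are assumptions for illustration; the GA-driven feature selection is not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)
    with Dempster's rule: m(A) = sum_{B & C = A} m1(B) m2(C) / (1 - K)."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Frame of discernment: three land-cover classes.
W, F, U = frozenset({"water"}), frozenset({"forest"}), frozenset({"urban"})
theta = W | F | U
# Hypothetical soft outputs of two classifiers (e.g. ANN and SVM) expressed as
# masses; the residual mass is assigned to the whole frame (ignorance).
m_ann = {W: 0.6, F: 0.2, theta: 0.2}
m_svm = {W: 0.5, U: 0.3, theta: 0.2}
fused = dempster_combine(m_ann, m_svm)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```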

  19. A classification model of Hyperion image base on SAM combined decision tree

    Science.gov (United States)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases, the hypothesis space grows exponentially, which makes the classification performance highly unreliable. Traditional classification algorithms therefore struggle, and classification of hyperspectral images is challenging; new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically-based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how well this threshold is chosen. In order to resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose an appropriate SAM threshold and improve the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests. The area was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and eliminates the disturbance and influence of unwanted objects effectively, so as to improve the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model
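
    A minimal NumPy sketch of the Spectral Angle Mapper itself: each pixel is assigned to the reference spectrum with the smallest spectral angle, and left unclassified when that angle exceeds a threshold. Here the threshold is simply a fixed number supplied by hand (the paper's contribution is choosing it automatically via a decision tree); the cube and reference spectra are synthetic.

```python
import numpy as np

def spectral_angle_mapper(cube, references, max_angle=0.1):
    """cube: (rows, cols, bands); references: (n_classes, bands).
    Returns a class index per pixel, or -1 where the best angle exceeds
    `max_angle` (radians)."""
    pixels = cube.reshape(-1, cube.shape[-1])
    # cos(angle) = <x, r> / (||x|| ||r||) for every pixel/reference pair.
    num = pixels @ references.T
    denom = (np.linalg.norm(pixels, axis=1, keepdims=True)
             * np.linalg.norm(references, axis=1))
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1
    return best.reshape(cube.shape[:2])

rng = np.random.default_rng(3)
cube = rng.uniform(0.05, 0.6, size=(50, 50, 224))   # Hyperion-like: 224 bands
refs = rng.uniform(0.05, 0.6, size=(4, 224))        # four reference spectra
labels = spectral_angle_mapper(cube, refs, max_angle=0.15)
print(np.unique(labels, return_counts=True))
```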

  20. Classification of radiolarian images with hand-crafted and deep features

    Science.gov (United States)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology still remains a low-cost and one of the most convenient ways to obtain dating of deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies. Automated image classification will allow these analyses to be made promptly. In this study, a method for automatic radiolarian image classification is proposed on Scanning Electron Microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features like invariant moments, wavelet moments, Gabor features and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative compared to hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.

  1. Landsat TM and ETM+ 2002-2003 Kansas Satellite Image Database (KSID)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Satellite Image Database (KSID):2002-2003 consists of image data gathered by three sensors. The first image data are terrain-corrected, precision...

  2. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2012-09-01

    Full Text Available In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our
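
    A hedged sketch of what a percentile-based index normalization might look like: clip a small tail (0.1%) of extreme pixel values at each end of the spectral-index image and rescale to [0, 1]. The exact definition of the paper's "Method of 0.1% index scaling" may differ; the bands and index below are synthetic.

```python
import numpy as np

def percentile_scale(index_image, tail=0.1):
    """Normalize a spectral-index image by clipping `tail` percent of the
    extreme pixel values at each end and rescaling to [0, 1]."""
    lo, hi = np.nanpercentile(index_image, [tail, 100.0 - tail])
    return np.clip((index_image - lo) / (hi - lo), 0.0, 1.0)

# Example: an NDVI-like index from two synthetic bands; after scaling, the
# same CT thresholds should transfer better between sensors.
rng = np.random.default_rng(7)
nir, red = rng.uniform(0, 1, (2, 300, 300))
ndvi = (nir - red) / (nir + red + 1e-9)
scaled = percentile_scale(ndvi)
print(scaled.min(), scaled.max())
```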

  3. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    Science.gov (United States)

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images without the application of eye staining were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly with CNNs and CNNs-SVM, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.

  4. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks

    Science.gov (United States)

    Xu, Xin; Gui, Rong; Pu, Fangling

    2018-01-01

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499

  5. Classification of objects on hyperspectral images — further developments

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey V.; Williams, Paul

    Classification of objects (such as tablets, cereals, fruits, etc.) is one of the very important applications of hyperspectral imaging and image analysis. Quite often, a hyperspectral image is represented and analyzed just as a bunch of spectra without taking into account spatial information about...... the pixels, which makes classification of objects inefficient. Recently, several methods which combine spectral and spatial information have also been developed, and this approach is becoming more and more widespread. The methods use local rank, topology, spectral features calculated for separate objects and other...... spatial characteristics. In this work we would like to show several improvements to the classification method, which utilizes spectral features calculated for individual objects [1]. The features are based (in general) on descriptors of spatial patterns of individual object’s pixels in a common principal...

  6. RELATIVE ORIENTATION AND MODIFIED PIECEWISE EPIPOLAR RESAMPLING FOR HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    K. Gong

    2017-05-01

    Full Text Available High resolution optical satellite sensors have entered a new era in the last few years, as satellite stereo images at half-meter or even 30 cm resolution have become available. Nowadays, high resolution satellite image data are commonly used for Digital Surface Model (DSM) generation and 3D reconstruction. It is common that the Rational Polynomial Coefficients (RPCs) provided by the vendors have rough precision and there is no ground control information available to refine the RPCs. Therefore, we present two relative orientation methods that use corresponding image points only: the first method uses quasi ground control information, which is generated from the corresponding points and rough RPCs, for the bias-compensation model; the second method estimates the relative pointing errors on the matching image and removes this error by an affine model. Both methods do not need ground control information and are applied to the entire image. To get very dense point clouds, the Semi-Global Matching (SGM) method is an efficient tool. However, before the matching process can be accomplished, epipolar constraints are required. In most cases, satellite images have very large dimensions, whereas epipolar geometry generation and image resampling are usually carried out in small tiles. This paper also presents a modified piecewise epipolar resampling method for the entire image without tiling. The quality of the proposed relative orientation and epipolar resampling methods is evaluated, and finally sub-pixel accuracy has been achieved in our work.

  7. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    Science.gov (United States)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We proposed a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  8. Mapping Impervious Surface Expansion using Medium-resolution Satellite Image Time Series: A Case Study in the Yangtze River Delta, China

    Science.gov (United States)

    Gao, Feng; DeColstoun, Eric Brown; Ma, Ronghua; Weng, Qihao; Masek, Jeffrey G.; Chen, Jin; Pan, Yaozhong; Song, Conghe

    2012-01-01

    Cities have been expanding rapidly worldwide, especially over the past few decades. Mapping the dynamic expansion of impervious surface in both space and time is essential for an improved understanding of the urbanization process, land-cover and land-use change, and their impacts on the environment. Landsat and other medium-resolution satellites provide the necessary spatial details and temporal frequency for mapping impervious surface expansion over the past four decades. Since the US Geological Survey opened the historical record of the Landsat image archive for free access in 2008, the decades-old bottleneck of data limitation has gone. Remote-sensing scientists are now rich with data, and the challenge is how to make best use of this precious resource. In this article, we develop an efficient algorithm to map the continuous expansion of impervious surface using a time series of four decades of medium-resolution satellite images. The algorithm is based on a supervised classification of the time-series image stack using a decision tree. Each impervious class represents urbanization starting in a different image. The algorithm also allows us to remove inconsistent training samples because impervious expansion is not reversible during the study period. The objective is to extract a time series of complete and consistent impervious surface maps from a corresponding time series of images collected from multiple sensors, and with a minimal amount of image preprocessing effort. The approach was tested in the lower Yangtze River Delta region, one of the fastest urban growth areas in China. Results from nearly four decades of medium-resolution satellite data from the Landsat Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper plus (ETM+) and China-Brazil Earth Resources Satellite (CBERS) show a consistent urbanization process that is consistent with economic development plans and policies. The time-series impervious spatial extent maps derived

  9. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD) method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  10. A minimum spanning forest based classification method for dedicated breast CT images

    International Nuclear Information System (INIS)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT image with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging

  11. Shoreline change assessment using multi-temporal satellite images: a case study of Lake Sapanca, NW Turkey.

    Science.gov (United States)

    Duru, Umit

    2017-08-01

    The research summarized here determines historical shoreline changes along Lake Sapanca by using Remote Sensing (RS) and Geographical Information Systems (GIS). Six multi-temporal satellite images of the Landsat Multispectral Scanner (L1-5 MSS), Enhanced Thematic Mapper Plus (L7 ETM+), and Operational Land Imager Sensors (L8 OLI), covering the period between 17 June 1975 and 15 July 2016, were used to monitor shoreline positions and estimate change rates along the coastal zone. After pre-processing routines, the Normalized Difference Water Index (NDWI), Modified Normalized Difference Water Index (MNDWI), and supervised classification techniques were utilized to extract six different shorelines. Digital Shoreline Analysis System (DSAS), a toolbox that enables transect-based computations of shoreline displacement, was used to compute historical shoreline change rates. The average rate of shoreline change for the entire coast was 2.7 m/year of progradation with an uncertainty of 0.2 m/year. While the greater part of the lake shoreline remained stable, the study concluded that the easterly and westerly coasts and deltaic coasts have been more vulnerable to shoreline displacements over the last four decades. The study also reveals that anthropogenic activities, more specifically over-extraction of freshwater from the lake, cyclic variation in rainfall, and deposition of sediment transported by the surrounding creeks dominantly control spatiotemporal shoreline changes in the region. Monitoring shoreline changes using multi-temporal satellite images is a significant component of coastal decision-making and management.
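
    The water-index step can be sketched in a few lines: compute NDWI or MNDWI from the green, NIR, and SWIR bands and threshold the index to obtain a water mask whose boundary approximates the shoreline. The zero threshold, the Landsat-8-style band layout, and the synthetic reflectances are assumptions for illustration.

```python
import numpy as np

def water_mask(green, nir, swir1, use_mndwi=True, threshold=0.0):
    """NDWI = (green - nir) / (green + nir); MNDWI = (green - swir1) / (green + swir1).
    Pixels whose index exceeds `threshold` are labelled water."""
    eps = 1e-9
    index = ((green - swir1) / (green + swir1 + eps) if use_mndwi
             else (green - nir) / (green + nir + eps))
    return index > threshold

# Synthetic surface-reflectance bands standing in for a Landsat-8 OLI scene.
rng = np.random.default_rng(5)
green, nir, swir1 = rng.uniform(0, 0.5, (3, 200, 200))
mask = water_mask(green, nir, swir1)
print("water fraction:", mask.mean())
```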

  12. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

    Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  13. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal

    2016-01-01

    Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  14. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm used is based on the classification of vectors of topological features of the original image. We use an image processing scheme which includes image preprocessing, calculation of vector descriptors defined for every point of the source image and the subsequent classification of the descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. In this work, we give the detailed algorithm for the construction of the topological descriptors and the classifier creation procedure, which combines the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  15. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    International Nuclear Information System (INIS)

    Azmi, S M; Ahmad, Baharin; Ahmad, Anuar

    2014-01-01

    The Unmanned Aerial Vehicle, or UAV, is extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. The UAV is one of the alternative ways to ease the process of acquiring data, with low manufacturing and operational costs, and it is easy to operate. Furthermore, UAV images are integrated with QuickBird images that are used as base maps. The objective of this study is to assess and compare the accuracy of topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is to replace cloud-covered areas, which commonly exist in aerial photographs and satellite images, and to update topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of topographic maps using these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis are carried out in this study: quantitative and qualitative assessments. The quantitative assessment is carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ± 0.460 m. In conclusion, UAV images have the potential to be used for updating of topographic maps

  16. Images of war: using satellite images for human rights monitoring in Turkish Kurdistan.

    Science.gov (United States)

    de Vos, Hugo; Jongerden, Joost; van Etten, Jacob

    2008-09-01

    In areas of war and armed conflict it is difficult to get trustworthy and coherent information. Civil society and human rights groups often face problems of dealing with fragmented witness reports, disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was used as a case study of armed conflict to evaluate the potential use of satellite images for verification of witness reports collected by human rights groups. The Turkish army was reported to be burning forests, fields and villages as a strategy in the conflict against guerrilla uprising. This paper concludes that satellite images are useful to validate witness reports of forest fires. Even though the use of this technology for human rights groups will depend on some feasibility factors such as prices, access and expertise, the images proved to be key for analysis of spatial aspects of conflict and valuable for reconstructing a more trustworthy picture.

  17. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  18. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    Full Text Available This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (less than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of 2 fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This basically presents a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with less than ten training samples per class.
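
    A sketch of the first stage only (tiling plus pretrained-CNN feature extraction), using torchvision's ResNet-18 purely for illustration; the paper's backbone, tile size, semisupervised classifier, and graph-Laplacian regularization are not reproduced, and downloading the ImageNet weights requires network access.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with the classification head removed -> 512-D descriptors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

def tile_features(image, tile=224):
    """image: float tensor (3, H, W), already normalised.
    Returns one descriptor per non-overlapping tile, shape (n_tiles, 512)."""
    _, H, W = image.shape
    tiles = [image[:, r:r + tile, c:c + tile]
             for r in range(0, H - tile + 1, tile)
             for c in range(0, W - tile + 1, tile)]
    batch = torch.stack(tiles)
    with torch.no_grad():
        feats = feature_extractor(batch)          # (n_tiles, 512, 1, 1)
    return feats.flatten(1)

# Toy "large" image; a real VHR scene would be normalised with ImageNet stats.
features = tile_features(torch.rand(3, 896, 896))
print(features.shape)   # -> torch.Size([16, 512])
```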

  19. Remote classification from an airborne camera using image super-resolution.

    Science.gov (United States)

    Woods, Matthew; Katsaggelos, Aggelos

    2017-02-01

    The image processing technique known as super-resolution (SR), which attempts to increase the effective pixel sampling density of a digital imager, has gained rapid popularity over the last decade. The majority of literature focuses on its ability to provide results that are visually pleasing to a human observer. In this paper, we instead examine the ability of SR to improve the resolution-critical capability of an imaging system to perform a classification task from a remote location, specifically from an airborne camera. In order to focus the scope of the study, we address and quantify results for the narrow case of text classification. However, we expect the results generalize to a large set of related, remote classification tasks. We generate theoretical results through simulation, which are corroborated by experiments with a camera mounted on a DJI Phantom 3 quadcopter.

  20. Integrating image processing and classification technology into automated polarizing film defect inspection

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

    In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by the improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, the contrast, and homogeneity of gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifier, 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy by using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of one single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
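
    A small sketch of part of the feature-extraction step described above: maximum gray level plus the contrast and homogeneity of the gray level co-occurrence matrix, computed with scikit-image (recent versions spell the functions graycomatrix/graycoprops; older releases use greycomatrix/greycoprops). Eccentricity would additionally need the segmented defect region and is omitted; the patch below is synthetic.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def defect_features(region):
    """region: 2-D uint8 array containing the segmented defect patch.
    Returns [max gray level, GLCM contrast, GLCM homogeneity]."""
    glcm = graycomatrix(region, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([
        region.max(),
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
    ])

# Synthetic 64x64 "defect" patch; the resulting vector would be fed to the
# RBFNN / BPNN classifier together with the other descriptors.
rng = np.random.default_rng(11)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(defect_features(patch))
```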

  1. Improving Eastern Bluebird nest box performance using computer analysis of satellite images

    Directory of Open Access Journals (Sweden)

    Sarah Svatora

    2012-06-01

    Full Text Available Bird conservationists have been introducing man-made boxes in an effort to increase the bluebird population. In this study we use computer analysis of satellite images to show that the performance of the boxes used by Eastern Bluebirds (Sialia sialis) in Michigan can be improved by about 48%. The analysis is based on a strong correlation found between the edge directionality measured in the satellite image of the area around the box, and the preferences of the birds when selecting their nesting site. The method is based on satellite images taken from Google Earth, and can be used by conservationists to select a box placement strategy that will optimize the efficacy of the boxes deployed in a given area.

  2. Investigating the Capability of IRS-P6-LISS IV Satellite Image for Pistachio Forests Density Mapping (case Study: Northeast of Iran)

    Science.gov (United States)

    Hoseini, F.; Darvishsefat, A. A.; Zargham, N.

    2012-07-01

    In order to investigate the capability of satellite images for Pistachio forests density mapping, IRS-P6-LISS IV data were analyzed in an area of 500 ha in Iran. After geometric correction, suitable training areas were determined based on fieldwork. Suitable spectral transformations like NDVI, PVI and PCA were performed. A ground truth map consisting of 34 plots (each plot 1 ha) was prepared. Hard and soft supervised classifications were performed with 5 density classes (0-5%, 5-10%, 10-15%, 15-20% and > 20%). Because of low separability of classes, some classes were merged and classifications were repeated with 3 classes. Finally, the highest overall accuracy and kappa coefficient of 70% and 0.44, respectively, were obtained with three classes (0-5%, 5-20%, and > 20%) by the fuzzy classifier. Considering the low kappa value obtained, it could be concluded that the result of the classification was not desirable. Therefore, this approach is not appropriate for operational mapping of these valuable Pistachio forests.

  3. Applying Support Vector Machine in classifying satellite images for the assessment of urban sprawl

    Science.gov (United States)

    murgante, Beniamino; Nolè, Gabriele; Lasaponara, Rosa; Lanorte, Antonio; Calamita, Giuseppe

    2013-04-01

    In recent decades the spread of new buildings and road infrastructure and a scattered proliferation of houses outside urban areas have produced an unregulated countryside urbanization, consuming soil and impoverishing the landscape. Such a phenomenon has generated a huge environmental impact, diseconomies and a decrease in quality of life. This study analyzes processes concerning land use change, paying particular attention to the urban sprawl phenomenon. The application is based on the integration of Geographic Information Systems and Remote Sensing using open source technologies. The objective is to understand the size distribution and dynamic expansion of urban areas in order to define a methodology useful to both identify and monitor the phenomenon. In order to classify "urban" pixels, monitor the spread of settlements over time and understand trends in artificial territories, classifications of satellite images at different dates have been produced. To obtain these classifications, supervised classification algorithms have been adopted; more particularly, the Support Vector Machine (SVM) learning algorithm has been applied to multispectral remote sensing data. SVM has several interesting features, such as the capacity to obtain good results even with few training pixels, a high degree of configurability and the ability to discriminate pixels with similar spectral responses. Multi-temporal ASTER satellite data at medium resolution have been adopted because they are very suitable for evaluating such phenomena. The tools adopted for managing and processing the data are GRASS GIS, Quantum GIS and the R statistical project. The area of interest is located south of Bari
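
    A minimal sketch of the SVM step, assuming a multispectral array and a small set of labelled training pixels; class labels, band count, and SVM parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def classify_urban(image, train_pixels, train_labels):
        """image: (rows, cols, bands); train_pixels: (n, bands); train_labels: (n,)."""
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(train_pixels, train_labels)               # works even with few training pixels
        rows, cols, bands = image.shape
        return clf.predict(image.reshape(-1, bands)).reshape(rows, cols)
    ```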

  4. Auto Mission Planning System Design for Imaging Satellites and Its Applications in Environmental Field

    Directory of Open Access Journals (Sweden)

    He Yongming

    2016-10-01

    Full Text Available Satellite hardware has reached a level of development that enables imaging satellites to realize applications in the areas of meteorology and environmental monitoring. As the requirements in terms of feasibility and the actual profit achieved by satellite applications increase, we need to comprehensively consider the actual status, constraints, unpredictable information, and complicated requirements. The management of this complex information and the allocation of satellite resources to realize image acquisition have become essential for enhancing the efficiency of satellite instrumentation. In view of this, we designed a satellite auto mission planning system, which includes two sub-systems, the imaging satellite itself and the ground base, that collaborate to process complicated missions: the satellite mainly focuses on mission planning and functions according to actual parameters, whereas the ground base provides auxiliary information, management, and control. Based on the requirements analysis, we devised the application scenarios, main modules, and key techniques. Comparison of the simulation results of the system confirmed the feasibility and optimization efficiency of the system framework, and also stimulates new thinking on environmental monitoring methods and the design of mission planning systems.

  5. Citizen science land cover classification based on ground and satellite imagery: Case study Day River in Vietnam

    Science.gov (United States)

    Nguyen, Son Tung; Minkman, Ellen; Rutten, Martine

    2016-04-01

    Citizen science is being used increasingly in the context of environmental research, so there is a need to evaluate the cognitive ability of humans in classifying environmental features. With the focus on land cover, this study explores the extent to which citizen science can be applied in sensing and measuring the environment, contributing to the creation and validation of land cover data. The Day Basin in Vietnam was selected as the study area. Different methods to examine humans' ability to classify land cover were implemented using different information sources: ground-based photos, satellite images, and field observation and investigation. Most of the participants were local people and/or volunteers. Results show that across methods and sources of information, there are similar patterns of agreement and disagreement on land cover classes among participants. Understanding these patterns is critical to create a solid basis for implementing human sensors in earth observation. Keywords: Land cover, classification, citizen science, Landsat 8

  6. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    Science.gov (United States)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs. 83.59%; pixel-based MD: 69.37% vs. 67.2%; pixel-based MLC: 64.03% vs. 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
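
    A sketch of the band-contribution test described: a random forest is trained on object features with and without the four new WorldView-2 bands and the accuracies are compared. The feature arrays and the column indices of the new bands are assumptions.

    ```python
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    def compare_band_sets(X_train, y_train, X_test, y_test, new_band_cols):
        """Return overall accuracy with all bands and without the new WV2 bands."""
        subsets = {"all_8_bands": list(range(X_train.shape[1])),
                   "without_new_bands": [c for c in range(X_train.shape[1])
                                         if c not in new_band_cols]}
        scores = {}
        for name, cols in subsets.items():
            rf = RandomForestClassifier(n_estimators=500, random_state=0)
            rf.fit(X_train[:, cols], y_train)
            scores[name] = accuracy_score(y_test, rf.predict(X_test[:, cols]))
        return scores
    ```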

  7. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery

    Science.gov (United States)

    Helmer, E.H.; Kennaway, T.A.; Pedreros, D.H.; Clark, M.L.; Marcano-Vega, H.; Tieszen, L.L.; Ruzycki, T.R.; Schill, S.R.; Carrington, C.M.S.

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius, testing a more detailed classification than earlier work in the latter three islands. Secondly, we estimate the extents of land cover and protected forest by formation for five islands and ask how land cover has changed over the second half of the 20th century. The image interpretation approach combines image mosaics and ancillary geographic data, classifying the resulting set of raster data with decision tree software. Cloud-free image mosaics for one or two seasons were created by applying regression tree normalization to scene dates that could fill cloudy areas in a base scene. Such mosaics are also known as cloud-filled, cloud-minimized or cloud-cleared imagery, mosaics, or composites. The approach accurately distinguished several classes that more standard methods would confuse; the seamless mosaics aided reference data collection; and the multiseason imagery allowed us to separate drought deciduous forests and woodlands from semi-deciduous ones. Cultivated land areas declined 60 to 100 percent from about 1945 to 2000 on several islands. Meanwhile, forest cover has increased 50 to 950%. This trend will likely continue where sugar cane cultivation has dominated. Like the island of Puerto Rico, most higher-elevation forest formations are protected in formal or informal reserves. Also similarly, lowland forests, which are drier forest types on these islands, are not well represented in reserves. Former cultivated lands in lowland areas could provide lands for new reserves of drier forest types. The land-use history of these islands may provide insight for planners in countries currently considering

  8. Convolutional deep belief network with feature encoding for classification of neuroblastoma histological images

    Directory of Open Access Journals (Sweden)

    Soheila Gheisari

    2018-01-01

    Full Text Available Background: Neuroblastoma is the most common extracranial solid tumor in children younger than 5 years old. Optimal management of neuroblastic tumors depends on many factors, including histopathological classification. The gold standard for classification of neuroblastoma histological images is visual microscopic assessment. In this study, we propose and evaluate a deep learning approach to classify high-resolution digital images of neuroblastoma histology into five different classes determined by the Shimada classification. Subjects and Methods: We apply a combination of a convolutional deep belief network (CDBN) with a feature encoding algorithm that automatically classifies digital images of neuroblastoma histology into five different classes. We design a three-layer CDBN to extract high-level features from neuroblastoma histological images and combine it with a feature encoding model to extract features that are highly discriminative in the classification task. The extracted features are classified into five different classes using a support vector machine classifier. Data: We constructed a dataset of 1043 neuroblastoma histological images, acquired with an Aperio scanner from 125 patients, representing different classes of neuroblastoma tumors. Results: A weighted average F-measure of 86.01% was obtained from the selected high-level features, outperforming state-of-the-art methods. Conclusion: The proposed computer-aided classification system, which uses the combination of a deep architecture and feature encoding to learn high-level features, is highly effective in the classification of neuroblastoma histological images.

  9. Classification of ADHD children through multimodal Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Dai eDai

    2012-09-01

    Full Text Available Attention deficit/hyperactivity disorder (ADHD) is one of the most common diseases in school-age children. To date, the diagnosis of ADHD is mainly subjective, and studies of objective diagnostic methods are of great importance. Although many efforts have been made recently to investigate the use of structural and functional brain images for diagnostic purposes, few of them are related to ADHD. In this paper, we introduce an automatic classification framework based on brain imaging features of ADHD patients, and present in detail the feature extraction, feature selection and classifier training methods. The effects of using different features are compared against each other. In addition, we integrate multimodal image features using multi-kernel learning (MKL). The performance of our framework has been validated in the ADHD-200 Global Competition, a world-wide classification contest on the ADHD-200 datasets. In this competition, our classification framework using features of resting-state functional connectivity ranked 6th out of 21 participants under the competition scoring policy, and performed best in terms of sensitivity and J-statistic.
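
    A simplified stand-in for the MKL step: kernel matrices computed from different imaging modalities are combined with fixed weights and fed to an SVM with a precomputed kernel. Real MKL also learns the weights; the modality arrays and weights below are assumptions.

    ```python
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def combined_kernel(modalities, weights):
        """modalities: list of (n_samples, n_features) arrays, one per image modality."""
        return sum(w * rbf_kernel(X) for w, X in zip(weights, modalities))

    # K = combined_kernel([fmri_features, smri_features], weights=[0.6, 0.4])
    # clf = SVC(kernel="precomputed").fit(K, labels)
    ```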

  10. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
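
    A rough sketch of the bag-encoding idea described above: each instance is sparsely coded against a dictionary, and the bag is summarized by pooling the instance codes. The paper's final classifier is an RVM; a plain SVM is used here instead, and the dictionary and pooling choices are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import SparseCoder
    from sklearn.svm import SVC

    def bag_feature(instances, dictionary, alpha=0.1):
        """instances: (n_inst, n_feat); dictionary: (n_atoms, n_feat) with unit-norm atoms."""
        coder = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                            transform_alpha=alpha)
        codes = coder.transform(instances)                # (n_inst, n_atoms) sparse codes
        return np.abs(codes).max(axis=0)                  # max-pooling over instances

    # X = np.vstack([bag_feature(b, D) for b in bags])
    # clf = SVC().fit(X, bag_labels)
    ```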

  11. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  12. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high-resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. First, CNN features are extracted from different convolutional layers. Then, a concatenation step is applied after a normalization step, resulting in a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
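
    A minimal sketch of the described pipeline: convolutional feature maps are extracted from a pretrained CNN for two co-registered images, and a pixel-wise Euclidean distance over the feature channels gives the change map. The ResNet-18 backbone, the layer cut-off, and the torchvision >= 0.13 weights API are assumptions.

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor = torch.nn.Sequential(*list(backbone.children())[:6]).eval()  # up to layer2

    @torch.no_grad()
    def change_map(img_t1, img_t2):
        """img_t1, img_t2: (1, 3, H, W) tensors, normalized as the backbone expects."""
        f1, f2 = extractor(img_t1), extractor(img_t2)            # (1, C, h, w) feature maps
        f1, f2 = F.normalize(f1, dim=1), F.normalize(f2, dim=1)  # per-pixel feature norm
        dist = torch.linalg.norm(f1 - f2, dim=1, keepdim=True)   # pixel-wise Euclidean distance
        return F.interpolate(dist, size=img_t1.shape[-2:], mode="bilinear")[0, 0]
    ```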

  13. Supervised Self-Organizing Classification of Superresolution ISAR Images: An Anechoic Chamber Experiment

    Directory of Open Access Journals (Sweden)

    Radoi Emanuel

    2006-01-01

    Full Text Available The problem of the automatic classification of superresolution ISAR images is addressed in this paper. We describe an anechoic chamber experiment involving ten scale-reduced aircraft models. The radar images of these targets are reconstructed using the MUSIC-2D (multiple signal classification) method coupled with two additional processing steps: phase unwrapping and symmetry enhancement. A feature vector is then proposed, including Fourier descriptors and moment invariants, which are calculated from the target shape and the scattering center distribution extracted from each reconstructed image. The classification is finally performed by a new self-organizing neural network called SART (supervised ART), which is compared to two standard classifiers, MLP (multilayer perceptron) and fuzzy KNN (k-nearest neighbors). While the classification accuracy is similar, SART is shown to outperform the two other classifiers in terms of training speed and classification speed, especially for large databases. It is also easier to use since it does not require any input parameter related to its structure.

  14. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    Science.gov (United States)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have been carried out over the decades to further improve remote sensing, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a more flexible model which describes homogeneous, heterogeneous and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with fewer parameters. To prove its feasibility, a classification process was tested on full-polarimetric Synthetic Aperture Radar (SAR) images with this method. First, multilook polarimetric SAR data processing and a speckle filter are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.

  15. Automatic Segmentation of Dermoscopic Images by Iterative Classification

    Directory of Open Access Journals (Sweden)

    Maciel Zortea

    2011-01-01

    Full Text Available Accurate detection of the borders of skin lesions is a vital first step for computer-aided diagnostic systems. This paper presents a novel automatic approach to the segmentation of skin lesions that is particularly suitable for the analysis of dermoscopic images. Assumptions about the image acquisition, in particular the approximate location and color, are used to derive an automatic rule to select small seed regions likely to correspond to samples of skin and of the lesion of interest. The seed regions are used as initial training samples, and the lesion segmentation problem is treated as a binary classification problem. An iterative hybrid classification strategy, based on a weighted combination of the estimated posteriors of a linear and a quadratic classifier, is used to update both the automatically selected training samples and the segmentation, increasing reliability and final accuracy, especially for those challenging images where the contrast between the background skin and the lesion is low.
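
    A hedged sketch of the hybrid strategy: posterior probabilities of a linear and a quadratic discriminant classifier are combined with a weight, and the training set is regrown from confident predictions over a few iterations. The weight, confidence threshold, and iteration count are assumptions.

    ```python
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)

    def iterative_segmentation(pixels, seed_X, seed_y, w=0.5, thr=0.9, n_iter=3):
        """pixels: (n, features) color features; seed_X, seed_y: initial seed samples."""
        X, y = seed_X, seed_y
        for _ in range(n_iter):
            lda = LinearDiscriminantAnalysis().fit(X, y)
            qda = QuadraticDiscriminantAnalysis().fit(X, y)
            post = w * lda.predict_proba(pixels) + (1 - w) * qda.predict_proba(pixels)
            labels = post.argmax(axis=1)
            confident = post.max(axis=1) > thr            # grow the training set
            X, y = pixels[confident], labels[confident]
        return labels                                     # binary: skin vs. lesion
    ```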

  16. MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH

    Data.gov (United States)

    National Aeronautics and Space Administration — MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Multispectral remote sensing images have...

  17. Learning structural knowledge for the automatic classification of satellite images in an Amazonian environment

    OpenAIRE

    Bayoudh , Meriam

    2013-01-01

    Classical methods for satellite image analysis appear inadequate for the current bulky data flow. Thus, making the interpretation of such images automatic becomes crucial for the analysis and management of phenomena changing in time and space, observable by satellite. Consequently, this work aims to contribute to the dynamic land cover cartography from satellite images, by expressive and easily interpretable mechanisms, and by explicitly taking into account structural aspects of geographic info...

  18. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolutions, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and thereby slightly improves the classification accuracy. In addition, recent deep learning techniques such as ReLU are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
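
    A minimal sketch of the multi-scale convolution layer described: three parallel convolutions with different kernel sizes are concatenated and followed by ReLU and dropout. The channel counts and the 3/5/7 kernel sizes are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class MultiScaleConv(nn.Module):
        def __init__(self, in_ch, out_ch_per_branch=32, p_drop=0.5):
            super().__init__()
            # three parallel convolutions with different kernel sizes (receptive fields)
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch_per_branch, k, padding=k // 2) for k in (3, 5, 7))
            self.act = nn.ReLU(inplace=True)
            self.drop = nn.Dropout(p_drop)                # randomly deactivates neurons

        def forward(self, x):
            x = torch.cat([b(x) for b in self.branches], dim=1)
            return self.drop(self.act(x))

    # out = MultiScaleConv(in_ch=103)(torch.randn(8, 103, 9, 9))   # e.g., Pavia patches
    ```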

  19. Gaze Embeddings for Zero-Shot Image Classification

    NARCIS (Netherlands)

    Karessli, N.; Akata, Z.; Schiele, B.; Bulling, A.

    2017-01-01

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have

  20. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems were limited to only one specific imaging device and a relatively narrow taxonomic scope. A truly practical system for automatic plankton classification does not yet exist, and this study partly fills this gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). For one thing, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, especially by adding features like the Inner-Distance Shape Context for morphological representation. For another, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by optimally combining the different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we also applied a feature selection method to choose optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system

  1. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia JING; Yan BAO

    2015-01-01

    Each fusion algorithm has its own advantages and limitations, so it is difficult to simply judge the strengths and weaknesses of a fusion algorithm; the choice of algorithm also depends on the sensor types and the specific research purpose. Firstly, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Then visual judgment and quantitative statistical parameters are used to assess the five algorithms. Finally, in order to determine which one is the most suitable fusion method for land cover classification of IKONOS imagery, maximum likelihood classification (MLC) was applied to the five fused images. The results showed that the fusion effect of the SFIM and Gram-Schmidt transforms was better than that of the other three image fusion methods in terms of spatial detail improvement and spectral information fidelity, and the Gram-Schmidt technique was superior to the SFIM transform in expressing image details. The classification accuracy of the images fused using the Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods, with an overall accuracy greater than 98%. The IHS-fused image classification accuracy was the lowest; the overall accuracy and kappa coefficient were 83.14% and 0.76, respectively. Thus, the IKONOS fusion images obtained by Gram-Schmidt and SFIM were better for improving land cover classification accuracy.

  2. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...
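
    A minimal sketch of a supervised Gaussian-mixture classifier of the kind described: one mixture is fit per land-cover class on its training pixels, and each pixel is assigned to the class with the highest likelihood. The number of components and the array names are assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_classify(train_X, train_y, pixels, n_components=3):
        """train_X: (n, bands) training pixels; train_y: (n,) labels; pixels: (m, bands)."""
        classes = np.unique(train_y)
        models = [GaussianMixture(n_components, covariance_type="full", random_state=0)
                  .fit(train_X[train_y == c]) for c in classes]
        scores = np.stack([m.score_samples(pixels) for m in models], axis=1)  # log-likelihoods
        return classes[scores.argmax(axis=1)]
    ```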

  3. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  4. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery

    Directory of Open Access Journals (Sweden)

    Ming-Der Yang

    2017-06-01

    Full Text Available Rice lodging identification relies on manual in situ assessment and often leads to a compensation dispute in agricultural disaster assessment. Therefore, this study proposes a comprehensive and efficient classification technique for agricultural lands that entails using unmanned aerial vehicle (UAV) imagery. In addition to spectral information, digital surface model (DSM) and texture information of the images was obtained through image-based modeling and texture analysis. Moreover, single feature probability (SFP) values were computed to evaluate the contribution of spectral and spatial hybrid image information to classification accuracy. The SFP results revealed that texture information was beneficial for the classification of rice and water, DSM information was valuable for lodging and tree classification, and the combination of texture and DSM information was helpful in distinguishing between artificial surface and bare land. Furthermore, a decision tree classification model incorporating SFP values yielded optimal results, with an accuracy of 96.17% and a Kappa value of 0.941, compared with that of a maximum likelihood classification model (90.76%). The rice lodging ratio in paddies at the study site was successfully identified, with three paddies being eligible for disaster relief. The study demonstrated that the proposed spatial and spectral hybrid image classification technology is a promising tool for rice lodging assessment.

  5. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    Science.gov (United States)

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. ?? 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
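
    A hedged sketch of the FFT-enhanced IHS idea: the multispectral intensity is low-pass filtered in the frequency domain, the panchromatic band is high-pass filtered, and their sum replaces the intensity via an additive substitution. The cutoff radius, the simple mean-based intensity, and the additive substitution are assumptions; the paper's exact filter design is not given in the abstract.

    ```python
    import numpy as np

    def lowpass_mask(shape, radius):
        r, c = np.ogrid[:shape[0], :shape[1]]
        d = np.hypot(r - shape[0] / 2, c - shape[1] / 2)
        return (d <= radius).astype(float)                # 1 inside the cutoff circle

    def fft_filter(img, mask):
        F = np.fft.fftshift(np.fft.fft2(img))
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    def fft_ihs_fuse(ms, pan, radius=30):
        """ms: (rows, cols, bands) multispectral resampled to pan size; pan: (rows, cols)."""
        intensity = ms.mean(axis=2)                       # simple IHS intensity
        mask = lowpass_mask(pan.shape, radius)
        new_int = fft_filter(intensity, mask) + fft_filter(pan, 1.0 - mask)
        return ms + (new_int - intensity)[..., None]      # additive intensity substitution
    ```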

  6. Performance Evaluation of Three Different High Resolution Satellite Images in Semi-Automatic Urban Illegal Building Detection

    Science.gov (United States)

    Khalilimoghadama, N.; Delavar, M. R.; Hanachi, P.

    2017-09-01

    The problem of overcrowding in mega cities has become prominent in recent years. To meet the housing needs of this growing population, which is of great importance in mega cities, a huge number of buildings are constructed annually. With this ever-increasing trend of building construction comes a growing trend of building infractions and illegal buildings (IBs). Acquiring multi-temporal satellite images and using change detection techniques is one suitable method of IB monitoring. Choosing among satellite images with different spatial and spectral resolutions has always been an issue for efficient detection of building changes. In this research, three sets of bi-temporal high-resolution satellite images from the IRS-P5, GeoEye-1 and QuickBird sensors, acquired over the west of the metropolitan area of Tehran, capital of Iran, together with city maps and a municipal property database, were used to detect buildings under construction with improved performance and accuracy. Furthermore, determining which of the employed bi-temporal satellite images provides better performance and accuracy for IB detection is the other purpose of this research. Kappa coefficients of 70%, 64%, and 68% were obtained for producing change image maps using the GeoEye-1, IRS-P5, and QuickBird satellite images, respectively. In addition, overall accuracies of 100%, 6%, and 83% were achieved for IB detection using these satellite images, respectively. These accuracies substantiate the fact that the GeoEye-1 satellite images had the best performance among the employed images in producing the change image map and detecting the IBs.

  7. APPLICATION OF FUSION WITH SAR AND OPTICAL IMAGES IN LAND USE CLASSIFICATION BASED ON SVM

    Directory of Open Access Journals (Sweden)

    C. Bao

    2012-07-01

    Full Text Available With the increase of remote sensing data with multiple spatial resolutions, spectral resolutions and sources, data fusion technologies have been widely used in geological fields. Synthetic Aperture Radar (SAR) and optical cameras are currently the two most common sensors. Multi-spectral optical images express the spectral features of ground objects, while SAR images express backscatter information. The accuracy of image classification can be effectively improved by fusing the two kinds of images. In this paper, TerraSAR-X images and ALOS multi-spectral images were fused for land use classification. After preprocessing such as geometric rectification, radiometric rectification and noise suppression, the two kinds of images were fused, and then an SVM model identification method was used for land use classification. Two different fusion methods were used: one joins the SAR image into the multi-spectral images as an additional band, and the other directly fuses the two kinds of images. The former can raise the resolution and preserve the texture information, and the latter can preserve spectral feature information and improve the capability of identifying different features. The experimental results showed that the accuracy of classification using fused images is better than that using only multi-spectral images. The classification accuracy for roads, habitation and water bodies was significantly improved. Compared with traditional classification methods, the proposed method for fused images with an SVM classifier can achieve better results in identifying complicated land use classes, especially for small ground features.
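
    A minimal sketch of the first fusion strategy mentioned (joining the SAR image to the multispectral bands as one extra band) followed by SVM classification of the stacked pixels. Array names, scaling, and SVM settings are assumptions.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def fuse_and_classify(optical, sar, train_idx, train_labels):
        """optical: (rows, cols, bands); sar: (rows, cols), co-registered backscatter."""
        stack = np.dstack([optical, sar])                 # SAR joined as one extra band
        rows, cols, bands = stack.shape
        X = StandardScaler().fit_transform(stack.reshape(-1, bands))
        clf = SVC(kernel="rbf").fit(X[train_idx], train_labels)
        return clf.predict(X).reshape(rows, cols)
    ```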

  8. Ship Detection and Classification on Optical Remote Sensing Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Liu Ying

    2017-01-01

    Full Text Available Ship detection and classification is critical for national maritime security and national defense. Although some SAR (Synthetic Aperture Radar) image-based ship detection approaches have been proposed and used, they are not able to satisfy the requirements of real-world applications, as the number of SAR sensors is limited, the resolution is low, and the revisit cycle is long. As massive optical remote sensing images of high resolution are available, ship detection and classification on these images is becoming a promising technique, and has attracted great attention for applications including maritime security and traffic control. Some digital image processing methods have been proposed to detect ships in optical remote sensing images, but most of them face difficulties in terms of accuracy, performance and complexity. Recently, an autoencoder-based deep neural network with an extreme learning machine was proposed, but it cannot meet the requirements of real-world applications as it only works with simple and small-scale data sets. Therefore, in this paper, we propose a novel ship detection and classification approach which utilizes a deep convolutional neural network (CNN) as the ship classifier. The performance of our proposed ship detection and classification approach was evaluated on a set of images downloaded from Google Earth at a resolution of 0.5 m; 99% detection accuracy and 95% classification accuracy were achieved. In model training, a 75× speedup was achieved on one Nvidia Titan X GPU.

  9. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  10. Improving settlement type classification of aerial images

    CSIR Research Space (South Africa)

    Mdakane, L

    2014-10-01

    Full Text Available , an automated method can be used to help identify human settlements in a fixed, repeatable and timely manner. The main contribution of this work is to improve generalisation on settlement type classification of aerial imagery. Images acquired at different dates...

  11. Mapping species of submerged aquatic vegetation with multi-seasonal satellite images and considering life history information

    Science.gov (United States)

    Luo, Juhua; Duan, Hongtao; Ma, Ronghua; Jin, Xiuliang; Li, Fei; Hu, Weiping; Shi, Kun; Huang, Wenjiang

    2017-05-01

    Spatial information of the dominant species of submerged aquatic vegetation (SAV) is essential for restoration projects in eutrophic lakes, especially eutrophic Taihu Lake, China. Mapping the distribution of SAV species is very challenging and difficult using only multispectral satellite remote sensing. In this study, we proposed an approach to map the distribution of seven dominant species of SAV in Taihu Lake. Our approach involved information on the life histories of the seven SAV species and eight distribution maps of SAV from February to October. The life history information of the dominant SAV species was summarized from the literature and field surveys. Eight distribution maps of the SAV were extracted from eight 30 m HJ-CCD images from February to October in 2013 based on the classification tree models, and the overall classification accuracies for the SAV were greater than 80%. Finally, the spatial distribution of the SAV species in Taihu in 2013 was mapped using multilayer erasing approach. Based on validation, the overall classification accuracy for the seven species was 68.4%, and kappa was 0.6306, which suggests that larger differences in life histories between species can produce higher identification accuracies. The classification results show that Potamogeton malaianus was the most widely distributed species in Taihu Lake, followed by Myriophyllum spicatum, Potamogeton maackianus, Potamogeton crispus, Elodea nuttallii, Ceratophyllum demersum and Vallisneria spiralis. The information is useful for planning shallow-water habitat restoration projects.

  12. Classification of MR brain images by combination of multi-CNNs for AD diagnosis

    Science.gov (United States)

    Cheng, Danni; Liu, Manhua; Fu, Jianliang; Wang, Yaping

    2017-07-01

    Alzheimer's disease (AD) is an irreversible neurodegenerative disorder with progressive impairment of memory and cognitive functions. Its early diagnosis is crucial for the development of future treatment. Magnetic resonance images (MRI) play an important role in helping to understand the brain anatomical changes related to AD. Conventional methods extract hand-crafted features such as gray matter volumes and cortical thickness and train a classifier to distinguish AD from other groups. Different from these methods, this paper proposes to construct multiple deep 3D convolutional neural networks (3D-CNNs) to learn various features from local brain images, which are combined to make the final classification for AD diagnosis. First, a number of local image patches are extracted from the whole brain image and a 3D-CNN is built upon each local patch to transform the local image into more compact high-level features. Then, the upper convolutional and fully connected layers are fine-tuned to combine the multiple 3D-CNNs for image classification. The proposed method can automatically learn generic features from imaging data for classification. Our method is evaluated using T1-weighted structural MR brain images of 428 subjects, including 199 AD patients and 229 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 87.15% and an AUC (area under the ROC curve) of 92.26% for AD classification, demonstrating promising classification performance.

  13. Very high resolution satellite data: New challenges in image analysis

    Digital Repository Service at National Institute of Oceanography (India)

    Sathe, P.V.; Muraleedharan, P.M.

    with the exception that a ground-based view covers the entire optical range from 400 to 700 nm while satellite images will be wavelength-specific. Although the images will not surpass details observed by a human eye, they will, in principle, be comparable with aerial...

  14. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    Full Text Available To solve the poor generalization and flexibility problems that single kernel SVM classifiers have while classifying combined spectral and spatial features, this paper proposed a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out, where the single radial basis function (SRBF)-based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usages of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.

  15. Landsat TM and ETM+ Kansas Satellite Image Database (KSID)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Satellite Image Database (KSID):2000-2001 consists of terrain-corrected, precision rectified spring, summer, and fall Landsat 5 Thematic Mapper (TM) and...

  16. Three-dimensional imaging of acetabular dysplasia: diagnostic value and impact on surgical type classification

    Energy Technology Data Exchange (ETDEWEB)

    Smet, Maria-Helena E-mail: marleen.smet@uz.kuleuven.ac.be; Marchal, Guy J.; Baert, Albert L.; Hoe, Lieven van; Cleynenbreugel, Johan van; Daniels, Hans; Molenaers, Guy; Moens, Pierre; Fabry, Guy

    2000-04-01

    Objective: To investigate the diagnostic value and the impact on surgical type classification of three-dimensional (3D) images for pre-surgical evaluation of dysplastic hips. Materials and methods: Three children with a different surgical type of hip dysplasia were investigated with helical computed tomography. For each patient, two-dimensional (2D) images, 3D, and a stereolithographic model of the dysplastic hip were generated. In two separate sessions, 40 medical observers independently analyzed the 2D images (session 1), the 2D and 3D images (session 2), and tried to identify the corresponding stereolithographic hip model. The influence of both image presentation (2D versus 3D images) and observer (degree of experience, radiologist versus orthopedic surgeon) were statistically analyzed. The SL model choice reflected the impact on surgical type classification. Results: Image presentation was a significant factor whereas the individual observer was not. Three-dimensional images scored significantly better than 2D images (P=0.0003). Three-dimensional imaging increased the correct surgical type classification by 35%. Conclusion: Three-dimensional images significantly improve the pre-surgical diagnostic assessment and surgical type classification of dysplastic hips.

  17. Three-dimensional imaging of acetabular dysplasia: diagnostic value and impact on surgical type classification

    International Nuclear Information System (INIS)

    Smet, Maria-Helena; Marchal, Guy J.; Baert, Albert L.; Hoe, Lieven van; Cleynenbreugel, Johan van; Daniels, Hans; Molenaers, Guy; Moens, Pierre; Fabry, Guy

    2000-01-01

    Objective: To investigate the diagnostic value and the impact on surgical type classification of three-dimensional (3D) images for pre-surgical evaluation of dysplastic hips. Materials and methods: Three children with a different surgical type of hip dysplasia were investigated with helical computed tomography. For each patient, two-dimensional (2D) images, 3D, and a stereolithographic model of the dysplastic hip were generated. In two separate sessions, 40 medical observers independently analyzed the 2D images (session 1), the 2D and 3D images (session 2), and tried to identify the corresponding stereolithographic hip model. The influence of both image presentation (2D versus 3D images) and observer (degree of experience, radiologist versus orthopedic surgeon) were statistically analyzed. The SL model choice reflected the impact on surgical type classification. Results: Image presentation was a significant factor whereas the individual observer was not. Three-dimensional images scored significantly better than 2D images (P=0.0003). Three-dimensional imaging increased the correct surgical type classification by 35%. Conclusion: Three-dimensional images significantly improve the pre-surgical diagnostic assessment and surgical type classification of dysplastic hips

  18. Ship-Iceberg Discrimination in Sentinel-2 Multispectral Imagery by Supervised Classification

    Directory of Open Access Journals (Sweden)

    Peder Heiselberg

    2017-11-01

    Full Text Available The European Space Agency Sentinel-2 satellites provide multispectral images with pixel sizes down to 10 m. This high resolution allows for fast and frequent detection, classification and discrimination of various objects in the sea, which is relevant in general and specifically for the vast Arctic environment. We analyze several sets of multispectral image data from Denmark and Greenland acquired in fall and winter, and describe a supervised search and classification algorithm based on physical parameters that successfully finds and classifies all objects in the sea with reflectance above a threshold. It discriminates between objects like ships, islands, wakes, icebergs, ice floes, and clouds with an accuracy better than 90%. Pan-sharpening the infrared bands leads to classification and discrimination of ice floes and clouds better than 95%. For complex images with abundant ice floes or clouds, however, the false alarm rate dominates for small non-sailing boats.
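
    A rough sketch of the detection step: a reflectance band is thresholded to label bright objects in the sea, simple physical features are computed per object, and a supervised model classifies them. The threshold, the feature set, and the random forest (standing in for the paper's unspecified classifier) are assumptions.

    ```python
    import numpy as np
    from skimage.measure import label, regionprops
    from sklearn.ensemble import RandomForestClassifier

    def detect_objects(reflectance, threshold=0.08):
        """reflectance: 2-D top-of-atmosphere reflectance of one Sentinel-2 band."""
        objects = label(reflectance > threshold)          # connected bright regions
        feats = [[r.area, r.eccentricity, r.mean_intensity, r.max_intensity]
                 for r in regionprops(objects, intensity_image=reflectance)]
        return objects, np.array(feats)

    # objs, X = detect_objects(band_reflectance)
    # labels = RandomForestClassifier().fit(X_train, y_train).predict(X)  # ship / iceberg / ...
    ```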

  19. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have followed developments in technology, especially in sensors and platforms. Many images now have high spatial and radiometric resolution and therefore carry much more information. Vegetation analysis, such as mapping floristic composition, benefits greatly from this development. Floristic composition can be interpreted using many methods, such as pixel-based classification and object-based classification. The problem with pixel-based methods on high spatial resolution images is the salt-and-pepper effect that appears in the classification results. The purpose of this research is to compare the effectiveness of pixel-based and object-based classification for vegetation composition mapping on high-resolution Worldview-2 imagery. The results show that pixel-based classification with a 5×5 majority kernel window gives the highest overall accuracy among the tested classifications, 73.32%, on Worldview-2 imagery radiometrically corrected to surface reflectance; however, for per-class accuracy, the object-based method performs best among the tested methods. In terms of effectiveness, the pixel-based approach is more effective than the object-based one for vegetation composition mapping in the Tidar forest.

  20. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign versus malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves on the classical BoVW method for all tested applications. For chest x-rays, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using the classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
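
    A simplified sketch of the idea: a visual-word dictionary is built with k-means, each image is encoded as a word histogram, the words with the highest mutual information with the labels are kept, and a classifier is trained on the reduced histograms. This fixed selection is only a stand-in for the paper's task-driven dictionary learning; the dictionary size, number of kept words, and the linear SVM are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.svm import SVC

    def bovw_mi_pipeline(descr_per_image, labels, n_words=200, n_keep=50):
        """descr_per_image: list of (n_patches_i, d) local-descriptor arrays, one per image."""
        kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
        kmeans.fit(np.vstack(descr_per_image))            # visual-word dictionary
        hists = np.array([np.bincount(kmeans.predict(d), minlength=n_words)
                          for d in descr_per_image], dtype=float)
        hists /= hists.sum(axis=1, keepdims=True)         # word histograms per image
        mi = mutual_info_classif(hists, labels, random_state=0)
        keep = np.argsort(mi)[-n_keep:]                   # most task-relevant visual words
        clf = SVC(kernel="linear").fit(hists[:, keep], labels)
        return clf, keep
    ```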

  1. Classification and overview of research in real-time imaging

    Science.gov (United States)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  2. REMOTE SENSING IMAGE CLASSIFICATION APPLIED TO THE FIRST NATIONAL GEOGRAPHICAL INFORMATION CENSUS OF CHINA

    Directory of Open Access Journals (Sweden)

    X. Yu

    2016-06-01

    Full Text Available Image classification still has a long way to go, although it has been studied for almost half a century. Researchers have achieved a great deal in the image classification domain, but there is still a long distance between theory and practice. However, some new methods from the artificial intelligence domain are being absorbed into the image classification domain, each drawing on the strengths of the other to offset its weaknesses, which opens up new prospects. Networks usually play the role of a high-level language, as seen in artificial intelligence and statistics, because networks are used to build complex models from simple components. In recent years, Bayesian networks, a class of probabilistic networks, have become a powerful data mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian networks (TAN) to the texture classification of high-resolution remote sensing images and propose a new method to construct the network topology structure in terms of training accuracy based on the training samples. Since 2013, the Chinese government has been carrying out the first national geographical information census project, which mainly interprets geographical information based on high-resolution remote sensing images. Therefore, this paper applies Bayesian networks to remote sensing image classification, in order to improve image interpretation in the first national geographical information census project. In the experiment, we chose remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification method (MLC) in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it will be an attractive and effective method for assisting the office operation of image interpretation.

  3. TREE SPECIES CLASSIFICATION OF BROADLEAVED FORESTS IN NAGANO, CENTRAL JAPAN, USING AIRBORNE LASER DATA AND MULTISPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    S. Deng

    2017-10-01

    Full Text Available This study attempted to classify three coniferous and ten broadleaved tree species by combining airborne laser scanning (ALS) data and multispectral images. The study area, located in Nagano, central Japan, is within the broadleaved forests of the Afan Woodland area. A total of 235 trees were surveyed in 2016, and we recorded the species, DBH, and tree height. The geographical position of each tree was collected using a Global Navigation Satellite System (GNSS) device. Tree crowns were manually detected using GNSS position data, field photographs, true-color orthoimages with three bands (red-green-blue, RGB), 3D point clouds, and a canopy height model derived from the ALS data. Then a total of 69 features, including 27 image-based and 42 point-based features, were extracted from the RGB images and the ALS data to classify tree species. Finally, the detected tree crowns were classified into two classes for the first level (coniferous and broadleaved trees), four classes for the second level (Pinus densiflora, Larix kaempferi, Cryptomeria japonica, and broadleaved trees), and 13 classes for the third level (three coniferous and ten broadleaved species), using the 27 image-based features, the 42 point-based features, all 69 features, and the best combination of features identified using a neighborhood component analysis algorithm, respectively. The overall classification accuracies reached 90% at the first and second levels but less than 60% at the third level. The classifications using the best combinations of features had higher accuracies than those using the image-based features, the point-based features, and the combination of all 69 features.

  4. Hyperspectral Image Classification Using Kernel Fukunaga-Koontz Transform

    Directory of Open Access Journals (Sweden)

    Semih Dinç

    2013-01-01

    images. In the experiment section, the improved performance of the HSI classification technique, K-FKT, has been tested in comparison with other methods such as the classical FKT and three types of support vector machines (SVMs).

  5. Detection of jet contrails from satellite images

    Science.gov (United States)

    Meinert, Dieter

    1994-02-01

    In order to investigate the influence of modern technology on the world climate it is important to have automatic detection methods for man-induced parameters. In this case the influence of jet contrails on the greenhouse effect shall be investigated by means of images from polar orbiting satellites. Current methods of line recognition and amplification cannot distinguish between contrails and rather sharp edges of natural cirrus or noise. They still rely on human control. Through the combination of different methods from cloud physics, image comparison, pattern recognition, and artificial intelligence we try to overcome this handicap. Here we will present the basic methods applied to each image frame, and list preliminary results derived this way.

  6. METEOROLOGICAL SATELLITE IMAGES IN GEOGRAPHY CLASSES: a didactic possibility

    Directory of Open Access Journals (Sweden)

    Diego Correia Maia

    2016-01-01

    Full Text Available ABSTRACT: Satellite images are still largely unexplored as a didactic resource in geography classes, particularly those related to meteorology. This article aims to contribute to the development of new methodologies of interpretation and understanding, as well as to the construction of pedagogical practices involving meteorological satellite images and concepts and themes related to climate. Its objective is to present possibilities for the use of meteorological satellite images in the teaching of geography, promoting the understanding of air masses, fronts, and climatic factors.

  7. VHR satellite imagery for humanitarian crisis management: a case study

    Science.gov (United States)

    Bitelli, Gabriele; Eleias, Magdalena; Franci, Francesca; Mandanici, Emanuele

    2017-09-01

    In recent years, remote sensing data, together with GIS, have been widely employed to support emergency management activities. In this context, the use of satellite images and derived map products has also become more common in the different phases of humanitarian crisis response. In this work, very high resolution satellite imagery was processed to assess the evolution of the Za'atari Refugee Camp, built in Jordan in 2012 by the UN Refugee Agency to host Syrian refugees. Multispectral satellite scenes of the Za'atari area were processed by means of object-based classifications. The main aim of the present work is the development of a semi-automated procedure for multi-temporal camp monitoring, with particular reference to dwelling detection. Whilst automation of feature extraction is widely investigated in the emergency mapping domain, in the field of humanitarian missions the information is often extracted by photointerpretation of the satellite data. This approach requires time for the interpretation; moreover, it is not reliable enough in complex situations, where features of interest are often small, heterogeneous and inconsistent. Therefore, the present paper discusses a methodology for obtaining information to assist humanitarian crisis management, using a semi-automatic classification approach applied to satellite imagery.
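
    To give the flavour of an object-based classification step of this kind (a generic sketch, not the procedure used in the paper), the code below segments a scene into superpixels, describes each segment by its mean band values, and classifies the segments; the image, class labels and segmentation parameters are placeholders.

      # Sketch of object-based classification: superpixel segmentation + per-segment features + classifier.
      # The 4-band scene and the training labels are synthetic placeholders.
      import numpy as np
      from skimage.segmentation import slic            # channel_axis requires skimage >= 0.19
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      image = rng.random((200, 200, 4))                    # placeholder 4-band scene
      segments = slic(image, n_segments=300, compactness=10, channel_axis=-1)

      seg_ids = np.unique(segments)
      features = np.array([image[segments == s].mean(axis=0) for s in seg_ids])

      train_labels = rng.integers(0, 3, size=len(seg_ids))  # placeholder classes (e.g. dwelling / road / soil)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, train_labels)
      predicted = clf.predict(features)                     # per-segment class map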

  8. Development of an image processing system at the Technology Applications Center, UNM: Landsat image processing in mineral exploration and related activities. Final report

    International Nuclear Information System (INIS)

    Budge, T.K.

    1980-09-01

    This project was a demonstration of the capabilities of Landsat satellite image processing applied to the monitoring of mining activity in New Mexico. Study areas included the Navajo coal surface mine, the Jackpile uranium surface mine, and the potash mining district near Carlsbad, New Mexico. Computer classifications of a number of land use categories in these mines were presented and discussed. A literature review of a number of case studies concerning the use of Landsat image processing in mineral exploration and related activities was prepared. Included in this review is a discussion of the Landsat satellite system and the basics of computer image processing. Topics such as destriping, contrast stretches, atmospheric corrections, ratioing, and classification techniques are addressed. Summaries of the STANSORT II and ELAS software packages and the Technology Application Center's Digital Image Processing System (TDIPS) are presented.
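
    Two of the routine operations listed above, band ratioing and a linear contrast stretch, can be expressed compactly; the sketch below uses a synthetic two-band array rather than actual Landsat data, and the percentile cutoffs are illustrative.

      # Sketch: band ratioing and a 2%-98% linear contrast stretch with NumPy.
      # `band_a` and `band_b` are synthetic stand-ins for two Landsat bands.
      import numpy as np

      rng = np.random.default_rng(3)
      band_a = rng.integers(10, 255, size=(512, 512)).astype(float)
      band_b = rng.integers(10, 255, size=(512, 512)).astype(float)

      ratio = band_a / np.maximum(band_b, 1e-6)        # band ratio, guarding against division by zero

      lo, hi = np.percentile(ratio, (2, 98))           # clip the tails before stretching
      stretched = np.clip((ratio - lo) / (hi - lo), 0, 1) * 255
      stretched = stretched.astype(np.uint8)           # 8-bit display product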

  9. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency-driven multiscale nonlinear diffusion filtering of images. The resulting scale space in general preserves, or even enhances, semantically important structures such as edges, lines, or flow-like structures in the foreground, while inhibiting and smoothing clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a mid scale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as context for the foreground or as noise, can be handled globally by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for image classification are conducted on three publicly available datasets, with high classification rates: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset.
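
    A basic building block of this approach is nonlinear diffusion; the sketch below implements a plain Perona-Malik iteration in NumPy on a synthetic image, without the saliency weighting that the paper adds, and the conductance parameter and step size are illustrative.

      # Sketch: Perona-Malik nonlinear diffusion (no saliency term).
      # `noisy` is a synthetic grayscale image; kappa and the step size are illustrative choices.
      import numpy as np

      def perona_malik(img, n_iter=20, kappa=30.0, step=0.2):
          u = img.astype(float).copy()
          for _ in range(n_iter):
              # finite differences towards the four neighbours
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              # edge-stopping conductance: small across strong gradients, so edges are preserved
              cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
              ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
              u += step * (cn * dn + cs * ds + ce * de + cw * dw)
          return u

      rng = np.random.default_rng(4)
      noisy = rng.normal(128, 25, size=(128, 128))
      smoothed = perona_malik(noisy)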

  10. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region, and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using the spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for the classification and retrieval of tissue images.

  11. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. The method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins, and tested the proposed method on a previously unseen test data set of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
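
    The PLS step can be sketched with scikit-learn's PLSRegression used as a two-class discriminant (fit on 0/1 labels and threshold the prediction); the feature matrix and labels below are placeholders, not the fundus-image features used in the paper.

      # Sketch: PLS used as a two-class (artery vs. vein) discriminant.
      # `X` holds per-segment colour/morphology features, `y` the labels; both are synthetic.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      X = rng.normal(size=(600, 20))
      y = rng.integers(0, 2, size=600)          # 0 = vein, 1 = artery (placeholder labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
      scores = pls.predict(X_te).ravel()        # continuous scores; threshold at 0.5 for hard labels
      print("AUC:", roc_auc_score(y_te, scores))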

  12. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    This work performs feature extraction from medical images and fuses the different extracted visual features with a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of the image representation, a binary histogram of predefined vocabulary words from the image captions is used. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.
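
    A minimal sketch of the kernel-combination idea follows: two feature types are turned into kernels, each kernel is normalized, and their sum is fed to an SVM with a precomputed kernel. The visual and textual feature arrays are placeholders, and the specific kernels chosen here are assumptions rather than those used in the paper.

      # Sketch: combining two feature types by summing normalized kernels for SVM classification.
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      visual = rng.normal(size=(300, 64))                            # placeholder visual features
      textual = rng.integers(0, 2, size=(300, 200)).astype(float)    # placeholder binary word histograms
      y = rng.integers(0, 5, size=300)                               # placeholder modality labels

      def normalize(K):
          # cosine-style kernel normalization so both kernels are on the same scale
          d = np.sqrt(np.outer(np.diag(K), np.diag(K)))
          return K / np.maximum(d, 1e-12)

      K = normalize(rbf_kernel(visual)) + normalize(linear_kernel(textual))
      clf = SVC(kernel="precomputed").fit(K, y)
      pred = clf.predict(K)                                          # training-set prediction, for illustration only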

  13. The Royal College of Radiologists Breast Group breast imaging classification

    International Nuclear Information System (INIS)

    Maxwell, A.J.; Ridley, N.T.; Rubin, G.; Wallis, M.G.; Gilbert, F.J.; Michell, M.J.

    2009-01-01

    Standardisation of the classification of breast imaging reports will improve communication between the referrer and the radiologist and avoid ambiguity, which may otherwise lead to the mismanagement of patients. Following wide consultation, the Royal College of Radiologists Breast Group has produced a scoring system for the classification of breast imaging. This will facilitate audit and the development of nationally agreed standards for the investigation of women with breast disease. The five-point system is as follows: 1, normal; 2, benign findings; 3, indeterminate/probably benign findings; 4, findings suspicious of malignancy; 5, findings highly suspicious of malignancy. It is recommended that this system be used in the reporting of all breast imaging examinations in the UK.

  14. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
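
    The core denoising step, a separable 2D DWT with coefficient thresholding followed by reconstruction, can be sketched with PyWavelets on the CPU; the component, wavelet choice and threshold value below are illustrative placeholders, and the GPU implementation and recursion of the paper are not reproduced.

      # Sketch: 2D wavelet soft-threshold denoising of one image component with PyWavelets.
      import numpy as np
      import pywt

      rng = np.random.default_rng(7)
      component = rng.normal(0.0, 1.0, size=(256, 256))          # placeholder EMP component

      coeffs = pywt.wavedec2(component, wavelet="db2", level=2)  # separable 2D DWT
      approx, details = coeffs[0], coeffs[1:]
      thr = 0.5                                                  # illustrative threshold value
      details = [tuple(pywt.threshold(d, thr, mode="soft") for d in level) for level in details]
      denoised = pywt.waverec2([approx] + details, wavelet="db2")  # inverse 2D DWT reconstruction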

  15. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies.

  16. Utility of multispectral imaging for nuclear classification of routine clinical histopathology imagery

    Directory of Open Access Journals (Sweden)

    Harvey Neal R

    2007-07-01

    Full Text Available Abstract Background We present an analysis of the utility of multispectral versus standard RGB imagery for routine H&E stained histopathology images, in particular for pixel-level classification of nuclei. Our multispectral imagery has 29 spectral bands, spaced 10 nm within the visual range of 420–700 nm. It has been hypothesized that the additional spectral bands contain further information useful for classification compared with the 3 standard bands of RGB imagery. We present analyses of our data designed to test this hypothesis. Results For classification using all available image bands, we find the best performance (equal tradeoff between detection rate and false alarm rate) is obtained from either the multispectral or our "ccd" RGB imagery, with an overall increase in performance of 0.79% compared to the next best performing image type. For classification using single image bands, the single best multispectral band (in the red portion of the spectrum) gave a performance increase of 0.57% compared to the performance of the single best RGB band (red). Additionally, red bands had the highest coefficients/preference in our classifiers. Principal components analysis of the multispectral imagery indicates only two significant image bands, which is not surprising given the presence of two stains. Conclusion Our results indicate that multispectral imagery for routine H&E stained histopathology provides minimal additional spectral information for a pixel-level nuclear classification task compared with standard RGB imagery.

  17. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Guizhou Wang

    2013-01-01

    Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy.
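
    The spatial-mapping step, assigning each segment the dominant pixel-level class unless its area proportion falls below a threshold, can be written directly in NumPy; the label map, segment map and threshold below are synthetic placeholders.

      # Sketch: map pixel-level class labels onto segments by the area-dominant principle.
      # `pixel_labels` and `segments` are synthetic placeholders; -1 marks "unclassified".
      import numpy as np

      rng = np.random.default_rng(8)
      pixel_labels = rng.integers(0, 5, size=(100, 100))   # pixel-based SVM result (placeholder)
      segments = rng.integers(0, 40, size=(100, 100))      # watershed segment ids (placeholder)
      threshold = 0.5                                      # minimum area proportion of the dominant class

      region_labels = {}
      for seg_id in np.unique(segments):
          labels_in_seg = pixel_labels[segments == seg_id]
          counts = np.bincount(labels_in_seg)
          dominant = counts.argmax()
          proportion = counts[dominant] / labels_in_seg.size
          region_labels[seg_id] = dominant if proportion >= threshold else -1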

  18. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    Science.gov (United States)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. The classification was derived from a Landsat-5 TM image acquired on 20-Aug-1988 using a standard supervised maximum likelihood approach. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
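
    A supervised maximum likelihood classification of multispectral pixels amounts to a per-class Gaussian model with quadratic decision boundaries; the sketch below uses scikit-learn's QuadraticDiscriminantAnalysis on placeholder band values as an illustration of the technique, not the BOREAS processing chain.

      # Sketch: supervised maximum likelihood (Gaussian) classification of multispectral pixels.
      # Band values and training labels are synthetic placeholders.
      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      rng = np.random.default_rng(15)
      train_pixels = rng.normal(size=(1000, 6))        # 6 TM-like bands (placeholder)
      train_labels = rng.integers(0, 5, size=1000)     # land cover classes (placeholder)

      mlc = QuadraticDiscriminantAnalysis(store_covariance=True)
      mlc.fit(train_pixels, train_labels)              # estimates per-class mean vector and covariance

      scene = rng.normal(size=(50, 50, 6))             # placeholder image to classify
      classified = mlc.predict(scene.reshape(-1, 6)).reshape(50, 50)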

  19. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use and land cover investigation. A large number of approaches exploiting different kinds of low-level features have been proposed in the literature. Engineers are often frustrated by their conclusions, and a systematic assessment of various low-level features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features, including HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images, for the classification of three publicly available datasets. Secondly, we propose to transfer ground-level scene attributes to remote sensing images. Thirdly, we combine both low-level features and mid-level visual attributes to further improve the classification performance. Experimental results demonstrate that i) dense SIFT and HOG features are more robust than other features for VHR scene image description; ii) visual attributes compete with a combination of low-level features; and iii) multiple feature combination achieves the best performance under different settings.

  20. Geostationary Satellite (GOES) Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Visible and Infrared satellite imagery taken from radiometer instruments on SMS (ATS) and GOES satellites in geostationary orbit. These satellites produced...

  1. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    Science.gov (United States)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets respectively.
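
    Graph-based semi-supervised learning of this general kind is available off the shelf; the sketch below uses scikit-learn's LabelSpreading on placeholder spectral features as a generic stand-in, not the spectral-spatial joint-graph construction proposed in the paper.

      # Sketch: graph-based semi-supervised classification with LabelSpreading.
      # Features and labels are placeholders; -1 marks unlabeled pixels.
      import numpy as np
      from sklearn.semi_supervised import LabelSpreading

      rng = np.random.default_rng(9)
      X = rng.normal(size=(1000, 30))                      # placeholder spectral features
      y_true = rng.integers(0, 6, size=1000)               # placeholder ground truth
      y = np.full(1000, -1)
      labeled = rng.choice(1000, size=30, replace=False)   # only a handful of labeled samples
      y[labeled] = y_true[labeled]

      model = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
      accuracy = (model.transduction_ == y_true).mean()    # labels propagated to the unlabeled pixels
      print("transductive accuracy:", accuracy)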

  2. GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    N. Jamshidpour

    2017-09-01

    Full Text Available Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets respectively.

  3. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic Aperture Radar (SAR) plays a vital role in acquiring extremely high resolution radar images and is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including the study of global climate systems and ship navigation. Classification of ice-infested areas yields important features which are useful for various monitoring processes around the ice regions. The main objective of this paper is to classify SAR ice images in a way that helps identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. The first is preprocessing, in which the speckled SAR ice images are denoised using various speckle removal filters; these filters are compared to find the best one for speckle removal. The second stage is segmentation, in which different regions are segmented using the K-means and watershed segmentation algorithms; the two algorithms are compared to find the better one for segmenting SAR ice images. The last stage is pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include Back Propagation Neural networks (BPN), a Fuzzy Classifier, an Adaptive Neuro Fuzzy Inference (ANFIS) classifier and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; all these classifiers are compared to establish which is best suited for classifying SAR ice images. Various evaluation metrics are computed separately at each of these three stages.
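
    The segmentation stage can be illustrated with a K-means clustering of (despeckled) intensity values; the SAR image below is a synthetic placeholder and the number of clusters is an illustrative choice, not the setting used in the paper.

      # Sketch: K-means segmentation of a (despeckled) SAR intensity image into regions.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(10)
      sar = rng.gamma(shape=2.0, scale=50.0, size=(128, 128))   # placeholder intensity image
      pixels = sar.reshape(-1, 1)

      kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
      segmented = kmeans.labels_.reshape(sar.shape)             # region map (e.g. ice / water / mixed, placeholder)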

  4. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  5. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    Science.gov (United States)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  6. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of a sparse prior for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that the atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and may even belong to the same material (the same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary, and that incorporating the majority voting scheme generally improves prediction performance.
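
    As a generic illustration of the sparse-coding-then-classify pipeline (not the Fisher-discriminative dictionary or the voting scheme proposed here), the sketch below learns a dictionary over placeholder spectra, encodes them sparsely, and classifies the resulting codes.

      # Sketch: sparse coding of (placeholder) hyperspectral pixels followed by a linear classifier.
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(11)
      X = rng.normal(size=(2000, 100))            # placeholder pixel spectra (100 bands)
      y = rng.integers(0, 9, size=2000)           # placeholder class labels

      dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=5,
                                         random_state=0).fit(X)
      codes = dico.transform(X)                   # sparse codes over the learned over-complete dictionary
      clf = LinearSVC().fit(codes, y)             # classification phase operating on the sparse codes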

  7. Learning scale-variant and scale-invariant features for deep image classification

    NARCIS (Netherlands)

    van Noord, Nanne; Postma, Eric

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial

  8. Korišćenje satelitskih snimaka za vođenje radne karte / Use of satellite images in situation map design

    Directory of Open Access Journals (Sweden)

    Miodrag D. Regodić

    2010-01-01

    represent the Earth in two dimensions, i.e. to represent a part of its surface with satisfactory precision. The starting point is the selection of a geodetic datum as a mathematical approximation that defines the Earth's shape; apart from that, it is used to represent a shape on the Earth's surface and its relationship with the corresponding area of the Earth in space. Transformation is based on the postulate that a satellite image represents a projection plane onto which a part of the Earth's surface is projected. To bring the image into the state coordinate system it is necessary to mathematically define the relation between the image projection plane and the particular part of the Earth's ellipsoid and to carry out the necessary changes. Supervised classification of Ikonos2 images was carried out in the Pci Geomatika program package. For the image analysis and interpretation within the experiment, the supervised classification method was successfully used. Supervised classification means carrying out the proposed instructions according to an established "key" for multi-spectral image analysis. The objects significant for the research were identified visually by locating their positions in the image. Along with the image classification process, both the content interpretation and the presentation of the detected and identified elements on the working map were realized. The identification and technical analysis of the military technical items detected in the image were greatly facilitated by the classification procedure. The locations of the identified tanks were mapped on TM 50 and on the already formed working map, thus becoming one of its content elements. Owing to the major importance of the surveyed territory, the mapped objects and the detected elements of combat disposition, the working map was, in this case, created at a large scale (TM 50). The result of this part of the experiment shows that the determined goal was achieved, i.e. the working

  9. Image Fusion Applied to Satellite Imagery for the Improved Mapping and Monitoring of Coral Reefs: a Proposal

    Science.gov (United States)

    Gholoum, M.; Bruce, D.; Hazeam, S. Al

    2012-07-01

    A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits, such as fisheries, ecological goods and services, and recreational activities, to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats, there is a strong management need for the mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems and proposes improved methods based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low to medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the
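
    As a simple illustration of pixel-level image fusion (not the hyperspectral-multispectral method proposed here), the sketch below applies a Brovey-style transform that sharpens low-resolution multispectral bands with a co-registered higher-resolution band; all arrays are synthetic placeholders assumed to be resampled onto the same grid.

      # Sketch: Brovey-style fusion of multispectral bands with a panchromatic band.
      # `ms` (3 bands) and `pan` are synthetic, co-registered placeholders on the same grid.
      import numpy as np

      rng = np.random.default_rng(12)
      ms = rng.random((3, 256, 256))                 # low spatial / higher spectral resolution bands
      pan = rng.random((256, 256))                   # high spatial / single-band image

      intensity = ms.mean(axis=0)                    # simple intensity component of the multispectral bands
      ratio = pan / np.maximum(intensity, 1e-6)      # spatial detail injected via the pan/intensity ratio
      fused = ms * ratio                             # each band rescaled by the ratio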

  10. IMAGE FUSION APPLIED TO SATELLITE IMAGERY FOR THE IMPROVED MAPPING AND MONITORING OF CORAL REEFS: A PROPOSAL

    Directory of Open Access Journals (Sweden)

    M. Gholoum

    2012-07-01

    Full Text Available A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits, such as fisheries, ecological goods and services, and recreational activities, to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats, there is a strong management need for the mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems and proposes improved methods based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low to medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine

  11. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral data sets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.

  12. Geomorphology of coastal environments from satellite images

    International Nuclear Information System (INIS)

    Da Rocha Ribeiro, R.; Velho, L.; Schossler, V.

    2010-01-01

    This study aims at recognizing coastal environments supported by data from the Landsat Thematic Mapper (TM) satellite. The digital processing of images, Geographic Information System (GIS) techniques and field observation in one section of the "Província Costeira do Rio Grande do Sul", between the Rio Grande and the São Gonçalo channels, resulted in a geomorphologic profile and mapping.

  13. Multi sensor satellite imagers for commercial remote sensing

    Science.gov (United States)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as food supply monitoring, crop yield estimation and disaster monitoring in mind. The aim of these imagers is to achieve medium to high resolution (2.5 m to 15 m) spatial sampling, wide swaths (up to 45 km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, addressing the choice of detectors needed to achieve these performances. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25 m GSD) and a catadioptric imager with panchromatic (2.7 m GSD), multi-spectral (6 bands, 4.6 m GSD) and hyperspectral (400 nm to 2.35 μm, 200 bands, 15 m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the Front-End Electronics and the Control Electronics, with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing and a high-speed LVDS data link to a mass memory unit.

  14. Color-Image Classification Using MRFs for an Outdoor Mobile Robot

    Directory of Open Access Journals (Sweden)

    Moises Alencastre-Miranda

    2005-02-01

    Full Text Available In this paper, we suggest using color-image classification (in several phases) with Markov Random Fields (MRFs) in order to understand natural images of outdoor environment scenes for a mobile robot. We skip the preprocessing phase, obtaining the same results with better performance. In the segmentation phase, we implement a color segmentation method based on the average of the I3 color space measure in small image cells obtained from a single split step. In the classification phase, an MRF is used to identify regions as one of three selected classes; here, we consider simultaneously the intrinsic color features of the image and the neighborhood system between image cells. Finally, we use region growing and contextual information to correct misclassification errors. We have implemented and tested these phases on several images taken in our campus gardens. We include some results in off-line processing mode and in on-line execution mode on an outdoor mobile robot. The vision system has been used for reactive exploration in an outdoor environment.

  15. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    Science.gov (United States)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones provide further detail. Unlike a common RGB image, a hyperspectral image has many bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.

  16. Minimisation de fonctions de perte calibrée pour la classification des images (Minimization of calibrated loss functions for image classification)

    OpenAIRE

    Bel Haj Ali , Wafa

    2013-01-01

    Image classification has become a major challenge, since it concerns, on the one hand, the millions or billions of images available on the web and, on the other hand, images used for critical real-time applications. This classification generally involves learning methods and classifiers that must provide both precision and speed. These learning problems concern a large number of application areas: namely, web applications (profiling, targeting, social networks, search engines),...

  17. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2018-02-01

    Full Text Available In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.
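
    A minimal CNN of the kind described, for three fitness classes on single-channel reflection images, might be sketched as follows with TensorFlow/Keras; the input size, layer widths and class count are illustrative assumptions, not the architecture used in the paper.

      # Sketch: small CNN for banknote fitness classification (illustrative architecture only).
      import tensorflow as tf

      num_classes = 3                                   # e.g. fit / normal / unfit (placeholder)
      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(64, 256, 1)),    # grayscale line-sensor image (placeholder size)
          tf.keras.layers.Conv2D(16, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(num_classes, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      # model.fit(train_images, train_labels, epochs=10)  # train with labelled reflection images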

  18. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-02-06

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes by visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  19. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  20. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  1. An Object-Based Image Analysis Approach for Detecting Penguin Guano in very High Spatial Resolution Satellite Images

    OpenAIRE

    Chandi Witharana; Heather J. Lynch

    2016-01-01

    The logistical challenges of Antarctic field work and the increasing availability of very high resolution commercial imagery have driven an interest in more efficient search and classification of remotely sensed imagery. This exploratory study employed geographic object-based image analysis (GEOBIA) methods to classify guano stains, indicative of chinstrap and Adélie penguin breeding areas, from very high spatial resolution (VHSR) satellite imagery and closely examined the transferability of knowle...

  2. Remote diagnosis via a telecommunication satellite--ultrasonic tomographic image transmission experiments.

    Science.gov (United States)

    Nakajima, I; Inokuchi, S; Tajima, T; Takahashi, T

    1985-04-01

    An experiment to transmit the ultrasonic tomographic section images required for remote medical diagnosis and care was conducted using the mobile telecommunication satellite OSCAR-10. The images received showed the intestinal condition of a patient incapable of verbal communication; however, the image screen had a fairly coarse particle structure. On the basis of these experiments, the transmission of ultrasonic tomographic images was considered extremely effective for remote diagnosis.

  3. An Improved Image Encryption Algorithm Based on Cyclic Rotations and Multiple Chaotic Sequences: Application to Satellite Images

    Directory of Open Access Journals (Sweden)

    MADANI Mohammed

    2017-10-01

    Full Text Available In this paper, a new satellite image encryption algorithm based on the combination of multiple chaotic systems and a random cyclic rotation technique is proposed. Our contribution consists in implementing three different chaotic maps (logistic, sine, and standard) combined to improve the security of satellite images. Besides enhancing the encryption, the proposed algorithm also aims at greater efficiency for the ciphered images. Compared with classical encryption schemes based on multiple chaotic maps and the Rubik's cube rotation, our approach not only has the merits of chaotic systems, such as high sensitivity to initial values, unpredictability, and pseudo-randomness, but also other advantages, such as a higher number of permutations and better performance in Peak Signal to Noise Ratio (PSNR) and Maximum Deviation (MD).
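
    To give the flavour of chaos-based image encryption (a toy single-logistic-map scheme, not the combined three-map and cyclic-rotation algorithm proposed here), the sketch below permutes pixels with a logistic-map-driven ordering and masks them with a chaotic keystream; the initial value and control parameter stand in for the secret key.

      # Sketch: toy chaos-based image encryption using a logistic map (not the proposed algorithm).
      import numpy as np

      def logistic_sequence(x0, r, n):
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)       # logistic map iteration
              seq[i] = x
          return seq

      rng = np.random.default_rng(13)
      image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder satellite tile
      flat = image.ravel()

      chaos = logistic_sequence(x0=0.3456, r=3.99, n=flat.size)     # key-dependent chaotic sequence
      perm = np.argsort(chaos)                                      # chaotic permutation of pixel positions
      keystream = np.floor(chaos * 256).astype(np.uint8)            # chaotic diffusion mask

      cipher = flat[perm] ^ keystream                               # confusion (permutation) + diffusion (XOR)

      # decryption: undo the XOR, then invert the permutation
      recovered = np.empty_like(flat)
      recovered[perm] = cipher ^ keystream
      assert np.array_equal(recovered, flat)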

  4. Classification and Recognition of Tomb Information in Hyperspectral Image

    Science.gov (United States)

    Gu, M.; Lyu, S.; Hou, M.; Ma, S.; Gao, Z.; Bai, S.; Zhou, P.

    2018-04-01

    There are a large number of materials containing important historical information in ancient tombs. However, in many cases these substances have become obscure and indistinguishable to the naked eye or to a true-colour camera. In order to classify and identify the materials in an ancient tomb effectively, this paper applies hyperspectral imaging technology to the archaeological study of an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the tomb are selected by a Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification is performed using a Support Vector Machine (SVM) based on the feature bands. Finally, the material at the bottom of the tomb is identified by spectral analysis and spectral matching. The results show that SVM based on feature bands can not only ensure the classification accuracy, but also shorten the data processing time and improve the classification efficiency. In the material identification, it is found that material which appears the same in visible light actually consists of two different substances. This result provides a new reference and research idea for archaeological work.
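
    The band-reduction-then-classify step can be sketched with scikit-learn as PCA followed by an SVM; the hyperspectral cube, label mask and parameter choices below are placeholders rather than the data or settings of this study.

      # Sketch: PCA feature-band reduction followed by SVM classification of hyperspectral pixels.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(14)
      cube = rng.random((80, 80, 120))               # placeholder hyperspectral cube (120 bands)
      labels = rng.integers(0, 4, size=(80, 80))     # placeholder material labels

      X = cube.reshape(-1, cube.shape[-1])           # pixels as rows, bands as columns
      y = labels.ravel()
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

      model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
      model.fit(X_tr, y_tr)
      print("test accuracy:", model.score(X_te, y_te))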

  5. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  6. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  7. Instantaneous Shoreline Extraction Utilizing Integrated Spectrum and Shadow Analysis From LiDAR Data and High-resolution Satellite Imagery

    Science.gov (United States)

    Lee, I.-Chieh

    Shoreline delineation and shoreline change detection are expensive processes, both in data source acquisition and in manual shoreline delineation. These costs limit the frequency and interval of shoreline mapping. In this dissertation, a new shoreline delineation approach was developed aiming to lower the data source cost and reduce human labor. To lower the cost of data sources, we used public domain LiDAR data sets and satellite images to delineate shorelines without requiring the data sets to be acquired simultaneously, which is a new concept in this field. To reduce labor costs, we made improvements in classifying LiDAR points and satellite images. Analyzing shadow relations with topography to improve satellite image classification performance is also a brand-new concept. The extracted shoreline of the proposed approach achieved an accuracy of 1.495 m RMSE, or 4.452 m at the 95% confidence level. Consequently, the proposed approach can successfully lower the cost and shorten the processing time, in other words, increase the shoreline mapping frequency with reasonable accuracy. However, the extracted shoreline may not compete, in terms of accuracy, with a shoreline extracted by aerial photogrammetric procedures; hence, this is a trade-off between cost and accuracy. The approach consists of three phases: first, a shoreline extraction procedure based mainly on LiDAR point cloud data with multispectral information from satellite images; second, an object-oriented shoreline extraction procedure to delineate the shoreline solely from satellite images, in this case WorldView-2 images; and third, a shoreline integration procedure combining these two shorelines based on actual shoreline changes and physical terrain properties. The only data source cost would be the acquisition of satellite images. On the other hand, only two processes needed human attention. First, the shoreline within harbor areas needed to be

  8. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the image analysis of chromosomes of human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. The methodology is intended as a tool to help cytogeneticists in the identification, characterization and classification steps of chromosomal metaphase analysis. Its development included the creation of a software application based on artificial intelligence techniques, using fuzzy logic combined with image processing techniques. The developed application, named CHRIMAN, is composed of modules that implement the methodological steps required for an automated analysis. The first step is the standardization of the two-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase analysis microscope. The second step concerns image treatment, achieved through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for further use in pattern recognition algorithms. The third step consists of characterizing, counting and classifying the stored digital images and the extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. This classification is based on the classical standards of Buckton [1973], and enables support for geneticists in the chromosomal analysis procedure, decreasing analysis time and creating conditions to include this method in a broader evaluation system for human cell damage due to ionizing radiation exposure. (author)

  9. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of vector-valued functions that construct matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.

  10. TESTING OF LAND COVER CLASSIFICATION FROM MULTISPECTRAL AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    K. Bakuła

    2016-06-01

    Full Text Available Multispectral Airborne Laser Scanning provides a new opportunity for airborne data collection. It provides high-density topographic surveying and is also a useful tool for land cover mapping. The use of a minimum of three intensity images from a multiwavelength laser scanner and the 3D information included in the digital surface model has potential for land cover/use classification, and a discussion about the application of this type of data in land cover/use mapping has recently begun. In the test study, three laser reflectance intensity images (orthogonalized point clouds acquired in green, near-infrared and short-wave infrared bands), together with a digital surface model, were used in land cover/use classification where six classes were distinguished: water, sand and gravel, concrete and asphalt, low vegetation, trees and buildings. In the tested methods, different approaches to classification were applied: spectral (based only on laser reflectance intensity images), spectral with elevation data as additional input, and spectro-textural, using morphological granulometry as a method of texture analysis of both types of data: the spectral images and the digital surface model. The method of generating the intensity raster was also tested in the experiment. Reference data were created based on visual interpretation of ALS data and traditional optical aerial and satellite images. The results have shown that multispectral ALS data are unlike typical multispectral optical images, and they have major potential for land cover/use classification. An overall classification accuracy of over 90% was achieved. The fusion of multi-wavelength laser intensity images and elevation data, with the additional use of textural information derived from granulometric analysis of the images, helped to improve the accuracy of classification significantly. The method of interpolation for the intensity raster was not very helpful, and using intensity rasters with both first and

  11. Segmentation and Classification of Burn Color Images

    Science.gov (United States)

    2001-10-25

    Begoña Acha, Carmen Serrano, Laura Roa (Área de Teoría de la Señal y Comunicaciones).

  12. Segmentation and Classification of Burn Color Images

    National Research Council Canada - National Science Library

    Acha, Begonya

    2001-01-01

    .... In the classification part, we take advantage of color information by clustering, with a vector quantization algorithm, the color centroids of small squares, taken from the burnt segmented part of the image, in the (V1, V2) plane into two possible groups, where V1 and V2 are the two chrominance components of the CIE Lab representation.

  13. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  14. An Investigation on Water Quality of Darlik Dam Drinking Water using Satellite Images

    Directory of Open Access Journals (Sweden)

    Erhan Alparslan

    2010-01-01

    Full Text Available Darlik Dam supplies 15% of the water demand of Istanbul Metropolitan City of Turkey. Water quality (WQ in the Darlik Dam was investigated from Landsat 5 TM satellite images of the years 2004, 2005, and 2006 in order to determine land use/land cover changes in the watershed of the dam that may deteriorate its WQ. The images were geometrically and atmospherically corrected for WQ analysis. Next, an investigation was made by multiple regression analysis between the unitless planetary reflectance values of the first four bands of the June 2005 Landsat TM image of the dam and WQ parameters, such as chlorophyll-a, total dissolved matter, turbidity, total phosphorous, and total nitrogen, measured at satellite image acquisition time at seven stations in the dam. Finally, WQ in the dam was studied from satellite images of the years 2004, 2005, and 2006 by pattern recognition techniques in order to determine possible water pollution in the dam. This study was compared to a previous study done by the authors in the Küçükçekmece water reservoir, also in Istanbul City.

  15. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.

    Science.gov (United States)

    Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark

    2017-08-01

    With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a cost-sensitive random forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.
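
    A minimal, hypothetical sketch of the kind of pipeline described above: an ImageNet-pretrained GoogLeNet used as a fixed feature extractor feeding a cost-sensitive random forest. The weight tag, preprocessing sizes and class weights are illustrative assumptions, not the paper's settings.

    ```python
    # Hypothetical sketch: pretrained GoogLeNet features + cost-sensitive random forest.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.ensemble import RandomForestClassifier

    # Load an ImageNet-pretrained GoogLeNet and drop the final classifier layer.
    backbone = models.googlenet(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),          # rescale the calibrated ultrasound crop
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(pil_images):
        """Return one 1024-D GoogLeNet feature vector per pre-processed image."""
        batch = torch.stack([preprocess(img) for img in pil_images])
        with torch.no_grad():
            return backbone(batch).numpy()

    # Cost-sensitive classification: heavier penalty on missed malignant cases
    # (the weights below are illustrative only).
    clf = RandomForestClassifier(n_estimators=500,
                                 class_weight={"benign": 1.0, "malignant": 4.0})
    # clf.fit(extract_features(train_images), train_labels)
    ```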

  16. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR) has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy must be achieved to meet application requirements. For satellite images, the large image size and the unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled devices, focal plane distortion, and unrecorded spacecraft jitter make it difficult to obtain reliable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating the two types of algorithms has proven feasible and effective. In this paper, an AIR method for the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on the scale invariant feature transform (SIFT), is performed on the subsampled corrected images, and a rough global estimation is made from the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using a triangulated irregular network (TIN) structure is applied to eliminate false matches. Finally, image resampling based on local affine transformations is conducted to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of the fusion application, and its efficiency is also suitable for the commercial operation of an automatic satellite data processing system.
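
    A hedged sketch of the coarse step only, using OpenCV: SIFT matching on subsampled images followed by a robust global affine estimate. The scale factor and ratio-test threshold are assumptions; the Harris/NCC/least-squares refinement and TIN-based filtering described above are not reproduced here.

    ```python
    # Coarse global estimation: SIFT on subsampled images + RANSAC affine fit.
    import cv2
    import numpy as np

    def coarse_global_estimate(ref_img, src_img, scale=0.25):
        ref_small = cv2.resize(ref_img, None, fx=scale, fy=scale)
        src_small = cv2.resize(src_img, None, fx=scale, fy=scale)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(ref_small, None)
        kp2, des2 = sift.detectAndCompute(src_small, None)

        # Lowe's ratio test keeps only reliable correspondences.
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        pts_ref = np.float32([kp1[m.queryIdx].pt for m in good]) / scale
        pts_src = np.float32([kp2[m.trainIdx].pt for m in good]) / scale

        # Rough global transform; RANSAC discards remaining false matches.
        A, inliers = cv2.estimateAffine2D(pts_src, pts_ref, method=cv2.RANSAC)
        return A
    ```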

  17. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    Science.gov (United States)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images is greater than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker compared to the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pl
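
    A minimal numpy sketch of the shadow test described above: each ortho-image is compared against a per-pixel median composite of the series, and pixels that are much darker in the NIR channel are flagged. The reflectance threshold is an illustrative assumption, not the paper's value.

    ```python
    import numpy as np

    def detect_shadows(nir_stack, cloud_masks, threshold=0.08):
        """nir_stack: (n_dates, H, W) NIR reflectance; cloud_masks: same shape, boolean."""
        # Pixels already labelled cloud are excluded from the median composite.
        stack = np.where(cloud_masks, np.nan, nir_stack)
        median_composite = np.nanmedian(stack, axis=0)       # synthetic ortho-image

        shadow_masks = []
        for nir in nir_stack:
            # A pixel is a shadow candidate when it is darker than the composite
            # by more than the threshold; a region-growing step could refine this.
            shadow_masks.append((median_composite - nir) > threshold)
        return np.stack(shadow_masks)
    ```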

  18. AUTOMATIC DETECTION OF CLOUDS AND SHADOWS USING HIGH RESOLUTION SATELLITE IMAGE TIME SERIES

    Directory of Open Access Journals (Sweden)

    N. Champion

    2016-06-01

    Full Text Available Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images is greater than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker compared to the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8

  19. A study of earthquake-induced building detection by object oriented classification approach

    Science.gov (United States)

    Sabuncu, Asli; Damla Uca Avci, Zehra; Sunar, Filiz

    2017-04-01

    Among natural hazards, earthquakes are the most destructive disasters, causing huge loss of life, heavy infrastructure damage and great financial losses every year all around the world. According to earthquake statistics, more than a million earthquakes occur worldwide each year, roughly two per minute. Since 2001, natural disasters have caused more than 780,000 deaths, approximately 60% of which were due to earthquakes. A great earthquake took place at 38.75 N, 43.36 E in Van Province in the eastern part of Turkey on October 23rd, 2011; 604 people died and about 4,000 buildings were seriously damaged or collapsed. In recent years, the use of the object-oriented classification approach based on different object features, such as spectral, textural, shape and spatial information, has gained importance and become widespread for the classification of high-resolution satellite images and orthophotos. The motivation of this study is to detect the collapsed buildings and debris areas after the earthquake by using very high-resolution satellite images and orthophotos with object-oriented classification, and to assess how well remote sensing technology performs in determining collapsed buildings. In this study, two different land surfaces were selected as homogeneous and heterogeneous case study areas. In the first step of the application, multi-resolution segmentation was applied and optimum parameters were selected to obtain the objects in each area after testing different color/shape and compactness/smoothness values. In the next step, two different classification approaches, namely "supervised" and "unsupervised", were applied and their classification performances were compared. Object-Based Image Analysis (OBIA) was performed using the eCognition software.

  20. Exploitation of geospatial techniques for monitoring metropolitan population growth and classification of landcover features

    International Nuclear Information System (INIS)

    Almas, A.S.; Rahim, C.A.

    2006-01-01

    The present research relates to the exploitation of remote sensing and GIS techniques for studying the metropolitan expansion and land use/land cover classification of Lahore, the second largest city of Pakistan, where urbanization is taking place at a striking rate with inadequate development of the requisite infrastructure. Such sprawl gives rise to congestion, pollution and commuting-time issues. The metropolitan expansion, based on growth direction and distance from the city centre, was observed over a period of about thirty years. The classification of the complex spatial assemblage of the urban environment and its expanding precincts was done using temporally spaced satellite images geo-referenced to a common coordinate system, together with census data. Spatial categorization of the urban landscape into densely populated residential areas, sparsely inhabited regions, bare soil patches, water bodies, vegetation, parks, and mixed features was done with the help of the satellite images. As a result, remote sensing and GIS techniques were found to be very efficient and effective for studying metropolitan growth patterns along with the classification of urban features into prominent categories. In addition, census data augments the usefulness of spatial techniques for carrying out such studies. (author)

  1. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. The image representation of time series introduces feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows different levels of representation to be learned together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures, but also to state-of-the-art TSC algorithms.
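
    An illustrative sketch of the recurrence-plot transform: each 1D series becomes a 2D image that can be fed to an ordinary image CNN. The z-normalisation and the threshold epsilon are assumptions for illustration, not the paper's settings.

    ```python
    import numpy as np

    def recurrence_plot(series, epsilon=0.1):
        """Binary recurrence plot R[i, j] = 1 when |x_i - x_j| <= epsilon."""
        x = np.asarray(series, dtype=float)
        x = (x - x.mean()) / (x.std() + 1e-12)      # z-normalise the series
        dist = np.abs(x[:, None] - x[None, :])      # pairwise distances
        return (dist <= epsilon).astype(np.uint8)

    # Example: a noisy sine wave yields the characteristic diagonal texture.
    t = np.linspace(0, 8 * np.pi, 128)
    rp_image = recurrence_plot(np.sin(t) + 0.05 * np.random.randn(t.size))
    print(rp_image.shape)   # (128, 128) texture image ready for a CNN classifier
    ```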

  2. Important Value of Economic Potency Mangrove Using NDVI Satellite High Resolution Image To Support Eco Tourism Of Pamurbaya Area (Case Study: East Coast of Surabaya)

    Science.gov (United States)

    Sukojo, B. M.; Hidayat, H.; Ratnasari, D.

    2017-12-01

    Indonesia is a vast maritime country, and many mangrove conservation areas are found along its coasts. Mangroves support a large number of animal species by providing breeding, spawning and feeding grounds. Mangrove forests are unique ecosystems and valuable natural resources, supporting a diversity of terrestrial and aquatic flora and fauna that directly or indirectly play an important role in human life in economic, social and environmental terms. The east coast of Surabaya has the most extensive and diverse mangrove ecosystems along the Surabaya coastline. Pamurbaya is currently used for recreation and nature tourism (eco tours). Using the mangrove ecosystem for ecotourism has a positive impact on the economic potential of the Pamurbaya area. Assessing the economic potential of the mangrove ecosystem in supporting nature tourism in the Pamurbaya region therefore requires mapping the condition of the mangrove ecosystem on the east coast of Surabaya. Mangrove conditions can be mapped with remote sensing technology using high-resolution satellite image data. The data used for mapping mangrove ecosystem conditions on the east coast of Surabaya are high-resolution Pleiades 1A satellite imagery and field observation data such as Ground Control Points, soil spectral parameters and water quality. The mangrove vegetation canopy is classified from the satellite image data using the NDVI vegetation index method, and the result is then tested for correlation with field observations of reflectance and water quality parameters. The purpose of this research is to assess the condition of the mangrove ecosystem and thus its economic potential in supporting Pamurbaya nature tourism. The expected result is basic geospatial information in the form of a mangrove ecosystem condition map, so that it can be used as decision
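
    A minimal sketch of the NDVI computation used above, assuming the red and near-infrared bands are available as arrays; the canopy-density thresholds are illustrative, not values from the study.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """NDVI = (NIR - Red) / (NIR + Red), with a small epsilon to avoid 0/0."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-12)

    def canopy_density_classes(ndvi_img):
        """Illustrative thresholds only: bin NDVI into sparse/medium/dense canopy."""
        return np.digitize(ndvi_img, bins=[0.2, 0.4, 0.6])
    ```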

  3. Global Solar Radiation in Spain from Satellite Images

    International Nuclear Information System (INIS)

    Ramirez, L.; Mora, L.; Sidrach de Cardona, M.; Navarro, A. A.; Varela, M.; Cruz, M. de la

    2003-01-01

    In this work, a series of algorithms for calculating solar radiation from satellite images has been developed. These models have been applied to three years of Meteosat satellite images, and the results have been extrapolated to the long term. For the development of the models, solar radiation records from ground stations were used, all of them corresponding to localities in peninsular Spain and the Balearic Islands. The longest available data periods were used, in most cases between 6 and 9 years. From the results, a typical year of global solar radiation images on a horizontal surface was obtained. The original image resolution of 7x7 km at the study latitudes has been resampled to 5x5 km. This provides a typical radiation value for every day of the year, for every 5x5 km cell of the study territory. This information represents an important advance in the knowledge of the spatial distribution of solar radiation, which is impossible to reach with alternative methods. Undoubtedly, the precision of the provided values is not comparable with pyranometric measurements at a specific locality, but they provide a very useful indicator for places where no previous information is available. In addition to the radiation maps, tables of global solar radiation on different inclinations have been prepared from the global radiation on a horizontal surface calculated for every day of the year and for each pixel of the image. (Author) 24 refs

  4. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    Science.gov (United States)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
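
    A hedged sketch of the active-learning loop only, using uncertainty (smallest-margin) sampling with a probabilistic SVM; the HSeg-derived spatial features, pruning strategy and semi-supervised extension described above are not reproduced, and oracle_label stands in for the user providing labels.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def active_learning(X_labeled, y_labeled, X_pool, oracle_label,
                        n_iterations=10, batch=5):
        for _ in range(n_iterations):
            clf = SVC(kernel="rbf", probability=True).fit(X_labeled, y_labeled)

            # Breaking-ties criterion: smallest gap between the two best classes.
            proba = clf.predict_proba(X_pool)
            top2 = np.sort(proba, axis=1)[:, -2:]
            margin = top2[:, 1] - top2[:, 0]
            query_idx = np.argsort(margin)[:batch]

            # The user (oracle) labels the most informative samples.
            new_labels = oracle_label(X_pool[query_idx])

            X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
            y_labeled = np.concatenate([y_labeled, new_labels])
            X_pool = np.delete(X_pool, query_idx, axis=0)
        return clf
    ```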

  5. AUTOMATED CONSTRUCTION OF COVERAGE CATALOGUES OF ASTER SATELLITE IMAGE FOR URBAN AREAS OF THE WORLD

    Directory of Open Access Journals (Sweden)

    H. Miyazaki

    2012-07-01

    Full Text Available We developed an algorithm to determine a combination of satellite images according to observation extent and image quality. The algorithm tests whether each image is necessary for completing the coverage of the search extent, excluding unnecessary low-quality images and preserving necessary good-quality images. The search conditions for the satellite images can be extended, meaning that the catalogue can be constructed for the specific periods required for time series analysis. We applied the method to a database of metadata of ASTER satellite images archived in GEO Grid of the National Institute of Advanced Industrial Science and Technology (AIST), Japan. As an index of populated places with geographical coordinates, we used a database of 3372 populated places with more than 0.1 million inhabitants retrieved from GRUMP Settlement Points, a global gazetteer of cities, which provides geographical names of populated places associated with geographical coordinates and population data. From the coordinates of the populated places, 3372 extents were generated with radii of 30 km, half the swath of ASTER satellite images. By merging overlapping extents, they were assembled into 2214 extents. As a result, we acquired combinations of good quality for 1244 extents, combinations of low quality for 96 extents, and incomplete combinations for 611 extents. Further improvements would be expected by introducing pixel-based cloud assessment and pixel-value correction over seasonal variations.

  6. Feature Extraction and Classification on Esophageal X-Ray Images of Xinjiang Kazak Nationality

    Directory of Open Access Journals (Sweden)

    Fang Yang

    2017-01-01

    Full Text Available Esophageal cancer is one of the fastest rising types of cancers in China. The Kazak nationality is the highest-risk group in Xinjiang. In this work, an effective computer-aided diagnostic system is developed to assist physicians in interpreting digital X-ray image features and improving the quality of diagnosis. The modules of the proposed system include image preprocessing, feature extraction, feature selection, image classification, and performance evaluation. 300 original esophageal X-ray images were resized to a region of interest and then enhanced by the median filter and histogram equalization method. 37 features from textural, frequency, and complexity domains were extracted. Both sequential forward selection and principal component analysis methods were employed to select the discriminative features for classification. Then, support vector machine and K-nearest neighbors were applied to classify the esophageal cancer images with respect to their specific types. The classification performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, precision, and recall, respectively. Experimental results show that the classification performance of the proposed system outperforms the conventional visual inspection approaches in terms of diagnostic quality and processing time. Therefore, the proposed computer-aided diagnostic system is promising for the diagnostics of esophageal cancer.

  7. Polarimetric SAR Image Classification Using Multiple-feature Fusion and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Sun Xun

    2016-12-01

    Full Text Available In this paper, we propose a supervised classification algorithm for Polarimetric Synthetic Aperture Radar (PolSAR) images using multiple-feature fusion and ensemble learning. First, we extract different polarimetric features, including the extended polarimetric feature space, Hoekman, Huynen, H/alpha/A, and four-component scattering features of PolSAR images. Next, we randomly select two types of features each time from all feature sets to guarantee the reliability and diversity of the later ensembles, and use a support vector machine as the base classifier for predicting classification results. Finally, we concatenate all prediction probabilities of the base classifiers as the final feature representation and employ the random forest method to obtain the final classification results. Experimental results at the pixel and region levels show the effectiveness of the proposed algorithm.
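
    An illustrative stacking sketch in the spirit of the pipeline above: SVMs trained on random pairs of feature groups, their class probabilities concatenated and passed to a random forest. The feature groups, member count and the use of in-sample probabilities (cross-validated probabilities would be preferable in practice) are assumptions, not the paper's exact design.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def train_ensemble(feature_groups, y, n_members=10):
        """feature_groups: list of (n_samples, n_features_i) arrays, one per feature type."""
        members = []
        for _ in range(n_members):
            i, j = rng.choice(len(feature_groups), size=2, replace=False)
            X = np.hstack([feature_groups[i], feature_groups[j]])
            members.append(((i, j), SVC(probability=True).fit(X, y)))

        # Meta-features: concatenated class probabilities from every base SVM.
        meta = np.hstack([clf.predict_proba(np.hstack([feature_groups[i], feature_groups[j]]))
                          for (i, j), clf in members])
        meta_clf = RandomForestClassifier(n_estimators=200).fit(meta, y)
        return members, meta_clf
    ```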

  8. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)
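
    A hedged sketch of co-occurrence-matrix (COM/GLCM) texture features with a k-nearest-neighbour (k = 3) classifier, in the spirit of the analysis described above; the offsets, grey-level count and the entropy definition are assumptions, not the study's protocol.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neighbors import KNeighborsClassifier

    def com_features(roi_uint8):
        """Texture features from grey-level co-occurrence matrices of a lesion ROI."""
        glcm = graycomatrix(roi_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats = [graycoprops(glcm, prop).ravel()
                 for prop in ("contrast", "homogeneity", "correlation", "energy")]
        # Entropy is not provided by graycoprops, so compute it directly.
        p = glcm.reshape(256 * 256, -1)
        entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)
        return np.concatenate(feats + [entropy])

    # knn = KNeighborsClassifier(n_neighbors=3)
    # knn.fit(np.array([com_features(r) for r in train_rois]), train_subtypes)
    ```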

  9. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  10. High resolution mapping of urban areas using SPOT-5 images and ancillary data

    Directory of Open Access Journals (Sweden)

    Elif Sertel

    2015-08-01

    Full Text Available This research aims to propose new rule sets to be used for object-based classification of SPOT-5 images in order to accurately create detailed urban land cover/use maps. In addition to the SPOT-5 satellite images, Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) maps, cadastral maps, OpenStreetMap data, road maps and land cover maps were also integrated into the classification to increase the accuracy of the resulting maps. Gaziantep city, one of the most highly populated cities of Turkey with varied landscape patterns, was selected as the study area. Different rule sets involving spectral, spatial and geometric characteristics were developed for the object-based classification of the 2.5 m resolution SPOT-5 satellite images to automatically create an urban map of the region. Twenty different land cover/use classes obtained from the European Urban Atlas project were applied, and an automatic classification approach was suggested for high-resolution urban map creation and updating. Integration of the different types of data into the classification decision tree increased the performance and accuracy of the suggested approach. The accuracy assessment results illustrated that, with the newly proposed rule set algorithms in object-based classification, urban areas represented by seventeen different sub-classes could be mapped with 94% or higher overall accuracy.

  11. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    Science.gov (United States)

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648

  12. Unsupervised classification of vegetation cover in digital images from remote sensors: Landsat ETM+

    International Nuclear Information System (INIS)

    Arango Gutierrez, Mauricio; Branch Bedoya, John William; Botero Fernandez, Veronica

    2005-01-01

    The diversity of plant species in Colombia and the lack of an inventory of them suggest the need for a process that facilitates the work of researchers in these disciplines. Satellite remote sensors such as Landsat ETM+ and unsupervised artificial intelligence techniques, such as self-organizing maps (SOM), could provide viable alternatives for rapidly obtaining information on zones with different vegetation cover across the national geography. The zone proposed for the case study had been classified in supervised form by the maximum likelihood method in another forest sciences investigation, in which eight types of vegetation cover were discriminated. This information served as a baseline to evaluate the performance of the unsupervised classifiers ISODATA and SOM. However, the information provided by the images first had to be refined according to criteria of usability and data quality, so that adequate information was fed to these unsupervised methods. For this, several concepts were used, such as image statistics, the spectral behavior of the vegetation communities, sensor characteristics and average divergence, which allowed the best bands and band combinations to be defined. Principal component analysis was applied to these to reduce the amount of data while conserving a large percentage of the information. The unsupervised techniques were applied to the refined data, modifying some parameters that could yield better convergence of the methods. The results were compared with the supervised classification via confusion matrices, and it was concluded that the unsupervised classification methods did not converge well with this process for the case of vegetation cover

  13. Research on Coal Exploration Technology Based on Satellite Remote Sensing

    Directory of Open Access Journals (Sweden)

    Dong Xiao

    2016-01-01

    Full Text Available Coal is a main source of energy. In China and Vietnam, coal resources are abundant, but the level of exploration is relatively low, mainly because of complicated geological structures, low efficiency, related damage, and other adverse conditions. To this end, advanced technologies are needed to ensure that resource exploration is carried out smoothly and in an orderly way. Numerous studies show that remote sensing technology is an effective tool for coal exploration and measurement. In this paper, we try to measure the distribution and reserves of open-pit coal areas through satellite imagery. A satellite image of an open-pit coal mining region in Quang Ninh Province, Vietnam, was collected as the experimental data. Firstly, the ENVI software is used to eliminate spectral interference in the satellite imagery. Then, the image classification model is established by the improved ELM algorithm. Finally, the effectiveness of the improved ELM algorithm is verified using MATLAB simulations. The results show that the accuracy on the testing set reaches 96.5%, and the image discernment precision reaches 83% compared with the same image from Google.
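
    A minimal numpy sketch of a basic Extreme Learning Machine classifier, assuming that "ELM" above refers to the Extreme Learning Machine; the paper's specific improvement and its MATLAB implementation are not reproduced here.

    ```python
    import numpy as np

    class ELMClassifier:
        """Single-hidden-layer ELM: random input weights, least-squares output weights."""

        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

        def fit(self, X, y):
            # Input weights are random and fixed; only output weights are learned.
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.classes_ = np.unique(y)
            T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
            self.beta = np.linalg.pinv(self._hidden(X)) @ T            # closed-form solution
            return self

        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
    ```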

  14. New and Emerging Satellite Imaging Capabilities in Support of Safeguards

    International Nuclear Information System (INIS)

    Johnson, M.; Paquette, J.P.; Spyropoulos, N.; Rainville, L.; Schichor, P.; Hong, M.

    2015-01-01

    This abstract is focused on new and emerging commercial satellite imagery (CSI) capabilities. For more than a decade, experienced imagery analysts have been exploiting and analyzing CSI in support of the Department of Safeguards. As the remote sensing industry continues to evolve, additional CSI imagery types are becoming available that could enhance our ability to evaluate and verify States' declarations and to investigate the possible presence of undeclared activities. A newly available and promising CSI capability that may have a Safeguards application is Full Motion Video (FMV) imagery collection from satellites. For quite some time, FMV imagery has been collected from airborne platforms, but now FMV sensors are being deployed into space. Like its airborne counterpart, satellite FMV imagery could provide analysts with a great deal of information, including insight into the operational status of facilities and patterns of activity. From a Safeguards perspective, FMV imagery could help the Agency in the evaluation and verification of States' declared facilities and activities. There are advantages of FMV imaging capabilities that cannot be duplicated with other CSI capabilities, including the ability to loiter over areas of interest and the potential to revisit sites multiple times per day. Additional sensor capabilities applicable to the Safeguards mission include, but are not limited to, the following sensors: · Thermal Infrared imaging sensors will be launched in late 2014 to monitor operational status, e.g., heat from a transformer. · High resolution ShortWave Infrared sensors able to characterize materials that could support verification of Additional Protocol declarations under Article 2.a(v). · Unmanned Aerial Vehicles with individual sensors or specific sensor combinations. The Safeguards Symposium provides a forum to showcase and demonstrate safeguards applications for these emerging satellite imaging capabilities. (author)

  15. Remote sensing mapping of macroalgal farms by modifying thresholds in the classification tree

    KAUST Repository

    Zheng, Yuhan

    2018-05-07

    Remote sensing is the main approach used to classify and map aquatic vegetation, and classification tree (CT) analysis is superior to various classification methods. Based on previous studies, modified CT can be developed from traditional CT by adjusting the thresholds based on the statistical relationship between spectral features to classify different images without ground-truth data. However, no studies have yet employed this method to resolve marine vegetation. In this study, three Gao-Fen 1 satellite images obtained with the same sensor on January 30, 2014, November 5, 2014, and January 21, 2015 were selected, and two features were then employed to extract macroalgae from aquaculture farms from the seawater background. Besides, object-based classification and other image analysis methods were adopted to improve the classification accuracy in this study. Results show that the overall accuracies of traditional CTs for three images are 92.0%, 94.2% and 93.9%, respectively, whereas the overall accuracies of the two corresponding modified CTs for images obtained on January 21, 2015 and November 5, 2014 are 93.1% and 89.5%, respectively. This indicates modified CTs can help map macroalgae with multi-date imagery and monitor the spatiotemporal distribution of macroalgae in coastal environments.

  16. Remote sensing mapping of macroalgal farms by modifying thresholds in the classification tree

    KAUST Repository

    Zheng, Yuhan; Duarte, Carlos M.; Chen, Jiang; Li, Dan; Lou, Zhaohan; Wu, Jiaping

    2018-01-01

    Remote sensing is the main approach used to classify and map aquatic vegetation, and classification tree (CT) analysis is superior to various classification methods. Based on previous studies, modified CT can be developed from traditional CT by adjusting the thresholds based on the statistical relationship between spectral features to classify different images without ground-truth data. However, no studies have yet employed this method to resolve marine vegetation. In this study, three Gao-Fen 1 satellite images obtained with the same sensor on January 30, 2014, November 5, 2014, and January 21, 2015 were selected, and two features were then employed to extract macroalgae from aquaculture farms from the seawater background. Besides, object-based classification and other image analysis methods were adopted to improve the classification accuracy in this study. Results show that the overall accuracies of traditional CTs for three images are 92.0%, 94.2% and 93.9%, respectively, whereas the overall accuracies of the two corresponding modified CTs for images obtained on January 21, 2015 and November 5, 2014 are 93.1% and 89.5%, respectively. This indicates modified CTs can help map macroalgae with multi-date imagery and monitor the spatiotemporal distribution of macroalgae in coastal environments.

  17. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been the correct classification of the objects in the scene. The present study gives a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in their spectral responses.
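
    A hedged sketch of two of the classical techniques reviewed above, PCA followed by K-means clustering of pixels; the number of components and clusters are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def pca_kmeans_classify(cube, n_components=3, n_clusters=5):
        """cube: (H, W, bands) multispectral image -> (H, W) cluster label map."""
        h, w, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)

        # Project pixels onto the first few principal components, which
        # highlight areas with different spectral characteristics.
        scores = PCA(n_components=n_components).fit_transform(pixels)

        # Unsupervised grouping of pixels in the reduced spectral space.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
        return labels.reshape(h, w)
    ```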

  18. Data fusion of Landsat TM and IRS images in forest classification

    Science.gov (United States)

    Guangxing Wang; Markus Holopainen; Eero Lukkarinen

    2000-01-01

    Data fusion of Landsat TM images and the Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of TM or IRS images alone. The aim was to combine the high spatial resolution of IRS-1C PAN with the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...

  19. Roads Data Conflation Using Update High Resolution Satellite Images

    Science.gov (United States)

    Abdollahi, A.; Riyahi Bakhtiari, H. R.

    2017-11-01

    Urbanization, industrialization and modernization are growing rapidly in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support that growth, which has led to the expansion and development of the road network. A great deal of road network data has been produced using traditional methods over the past years. Over time, a large amount of descriptive information has been assigned to these map data, but their geometric accuracy and precision are not adequate for today's needs. In this regard, it is necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them, and to update the existing geodatabases. Due to the size and extent of the country, updating road network maps using traditional methods is time consuming and costly. Conversely, using remote sensing technology and geographic information systems can reduce costs, save time and increase accuracy and speed. With the increasing availability of high-resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation that integrates several image-based and vector-based algorithms is presented. The SVM method was used for image classification and the level set method to extract the roads; the different types of road intersections were then extracted from the imagery using morphological operators. To match the extracted points and find the corresponding points, a matching function based on the nearest-neighbour method was applied. Finally, after identifying the matching points, a rubber-sheeting method was used to align the two datasets. Residual and RMSE criteria were used to evaluate accuracy. The results demonstrated excellent performance: the average root-mean-square error decreased from 11.8 to 4.1 m.
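
    An illustrative sketch of the point-matching and alignment steps: nearest-neighbour pairing of intersection points from the two sources, followed by a least-squares affine fit. The 25 m distance gate is an assumption, and the rubber-sheeting step (a local, piecewise transformation) is not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def match_and_align(vector_pts, image_pts, max_dist=25.0):
        """Both inputs are (N, 2) arrays of intersection coordinates in metres."""
        tree = cKDTree(image_pts)
        dist, idx = tree.query(vector_pts, k=1)
        keep = dist < max_dist                        # reject implausible pairings
        src, dst = vector_pts[keep], image_pts[idx[keep]]

        # Solve dst ~= [src, 1] @ params as a single least-squares system.
        design = np.hstack([src, np.ones((len(src), 1))])
        params, *_ = np.linalg.lstsq(design, dst, rcond=None)
        aligned = np.hstack([vector_pts, np.ones((len(vector_pts), 1))]) @ params
        return aligned, params
    ```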

  20. Prospects of application of survey satellite image for meteorology

    Science.gov (United States)

    Kapochkina, A. B.; Kapochkin, B. B.; Kucherenko, N. V.

    The greatest interest lies in information from geostationary satellites. These satellites repeatedly image selected territories, allowing the dynamics of the images to be studied; the most interesting acquisitions are in the IR range. Survey images are used to study linear elements of clouds (LEC). It has been established that LEC arise only above fractures of the Earth's crust. The research uses the results of a combined analysis of satellite data, hydrometeorological observations, seismicity, and observations of deformations of the Earth's surface. It has been established that before LEC form, anomalies of temperature and humidity arise in the ground layer. The situation over Europe on 16 May 2001 is considered: LEC over Europe blocked the transport of air masses from west to east. Synoptic conditions over Great Britain on 7-10 July 2000 are considered; moving LEC trace the propagation of deformation waves in the Earth's crust. Satellite imagery of Europe before the earthquake in Greece on 14 August 2003 is considered; during those days ground observations were conducted and data from the geostationary satellite were analyzed. While the LEC were moving, failures occurred (destruction of houses and gas mains), followed by the earthquake. The situation over the Iberian Peninsula on 12-16 November 2001 is considered: LEC arose before the flooding in Europe. The situations before the flooding in Germany on 6-8 June 2002 and the flooding on the Kuban River on 16-23 June 2002 are also considered. When tectonic compression of the Earth's crust occurs, LEC appear, tracing intensive upward and downward air movements above negative and positive anomalies of the terrain, respectively. Such meteorological situations are dangerous for aircraft flights, and they trace fast gravitational anomalies that influence satellite orbits. The situation over the equatorial Atlantic on 26 March 2003 is considered. Under tectonic compression at continental scale, overcast covers whole continents for more

  1. SVM Pixel Classification on Colour Image Segmentation

    Science.gov (United States)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image, by clustering pixels, into something more meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image; more precisely, to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. First, the colour and texture features used as input to the SVM classifier are extracted. These inputs are obtained via a local spatial similarity measure model and a steerable (Gabor) filter. The model is then trained using Fuzzy C-Means (FCM). Both the pixel-level information of the image and the capability of the SVM classifier are combined through the algorithm to form the final segmented image. The method produces a well-developed segmented image, with higher quality and faster processing than the segmentation methods proposed earlier. One of the latest applications is the Light L16 camera.
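
    A hedged sketch of pixel-wise classification from colour and Gabor texture responses with an SVM; the filter frequencies and the label-mask mechanism are illustrative assumptions, and the FCM training stage mentioned above is not reproduced.

    ```python
    import numpy as np
    from skimage.color import rgb2lab
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def pixel_features(rgb_image):
        """Stack per-pixel colour (Lab) and Gabor magnitude features."""
        lab = rgb2lab(rgb_image)
        gray = lab[..., 0] / 100.0
        textures = []
        for freq in (0.1, 0.25, 0.4):                 # assumed filter frequencies
            real, imag = gabor(gray, frequency=freq)
            textures.append(np.hypot(real, imag))
        feats = np.dstack([lab] + textures)
        return feats.reshape(-1, feats.shape[-1])

    def train_pixel_svm(rgb_image, label_mask):
        """label_mask: per-pixel integer labels, 0 = unlabelled (ignored)."""
        X = pixel_features(rgb_image)
        y = label_mask.ravel()
        return SVC(kernel="rbf").fit(X[y > 0], y[y > 0])
    ```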

  2. Satellite image time series simulation for environmental monitoring

    Science.gov (United States)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring heavily depends on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution with respect to both spatial and temporal requirements, which tend to conflict and are hard to trade off. Multiple constellations could be a solution if cost were not a concern; it therefore remains interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. There have been research efforts to address the problem from various angles. One type of approach enhances the spatial resolution using techniques such as super-resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, and thus lose analytical value. Another type fills temporal gaps by time interpolation, which does not actually add informative content. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high- and low-resolution data sets; spatial registration is then performed by introducing an LDA model to map high- and low-resolution pixels to each other. Afterwards, temporal change information is captured by comparing the low-resolution time series data, and the change is projected onto the high-resolution data plane and assigned to each high-resolution pixel, referring to predefined temporal change patterns of each type of ground object, to generate simulated high-resolution data. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy. We consider the contribution of our method to be enabling timely monitoring of temporal changes through analysis of low-resolution image time series only, and usage of

  3. CLASSIFICATION AND RECOGNITION OF TOMB INFORMATION IN HYPERSPECTRAL IMAGE

    Directory of Open Access Journals (Sweden)

    M. Gu

    2018-04-01

    Full Text Available There are a large number of materials carrying important historical information in ancient tombs. However, in many cases these substances are obscure and indistinguishable to the human naked eye or a true-colour camera. In order to classify and identify the materials in an ancient tomb effectively, this paper applies hyperspectral imaging technology to the archaeological study of an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the ancient tomb are selected by a Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification is performed using a Support Vector Machine (SVM) on the feature bands. Finally, the material at the bottom of the ancient tomb is identified by spectral analysis and spectral matching. The results show that SVM based on feature bands not only ensures classification accuracy but also shortens the data processing time and improves classification efficiency. In the material identification, it is found that what appears to be the same material in visible light is actually two different substances. This result provides a new reference and research direction for archaeological work.
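
    A minimal sketch of the PCA-then-SVM step described above applied to a hyperspectral cube; the number of retained components and the labelled training mask are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def classify_hyperspectral(cube, train_mask, n_components=10):
        """cube: (H, W, bands); train_mask: (H, W) integer labels, 0 = unlabelled."""
        h, w, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)

        # Dimensionality reduction: keep the leading principal components.
        reduced = PCA(n_components=n_components).fit_transform(pixels)

        # Train the SVM on labelled pixels, then classify every pixel.
        labels = train_mask.ravel()
        clf = SVC(kernel="rbf").fit(reduced[labels > 0], labels[labels > 0])
        return clf.predict(reduced).reshape(h, w)
    ```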

  4. Effects of Per-Pixel Variability on Uncertainties in Bathymetric Retrievals from High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Botha

    2016-05-01

    Full Text Available Increased sophistication of high spatial resolution multispectral satellite sensors provides enhanced bathymetric mapping capability. However, the enhancements are counteracted by per-pixel variability in sunglint, atmospheric path length and directional effects. This case study highlights retrieval errors from images acquired at non-optimal geometric combinations. The effects of variations in environmental noise on water surface reflectance and the accuracy of environmental variable retrievals were quantified. Two WorldView-2 satellite images were acquired within one minute of each other, with Image 1 placed in a near-optimal sun-sensor geometric configuration and Image 2 placed close to the specular point of the Bidirectional Reflectance Distribution Function (BRDF). Image 2 had higher total environmental noise due to increased surface glint and higher atmospheric path-scattering. Generally, depths were under-estimated from Image 2 compared to Image 1. A partial improvement in retrieval error after glint correction of Image 2 resulted in an increase of the maximum depth to which accurate depth estimations were returned. This case study indicates that critical analysis of individual images, accounting for sun elevation and azimuth, satellite sensor pointing and geometry, and anticipated wave height and direction, is required to ensure an image is fit for purpose for aquatic data analysis.

  5. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images in a lossy way, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which ultimately depend on the subsequent use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification accomplished with these images

  6. Moving object detection in video satellite image based on deep learning

    Science.gov (United States)

    Zhang, Xueyang; Xiang, Junhua

    2017-11-01

    Moving object detection in video satellite imagery is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of remote sensing video objects are analyzed. Firstly, a background subtraction algorithm based on an adaptive Gaussian mixture model is used to generate region proposals. Then the objects in the region proposals are classified via a deep convolutional neural network, and moving objects of interest are detected in combination with prior information on the sub-satellite point. The network is a 21-layer residual convolutional neural network whose parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
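
    The region-proposal stage can be illustrated with OpenCV's adaptive Gaussian-mixture background subtractor; the video filename, model parameters and area threshold below are assumptions, and the CNN classification step is only indicated in a comment.

```python
# Sketch of the region-proposal step with OpenCV's adaptive Gaussian mixture
# background subtractor; "satellite_video.mp4" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("satellite_video.mp4")
mog = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)                                   # foreground = moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # suppress isolated noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    proposals = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 4]
    # Each (x, y, w, h) proposal would then be cropped and passed to the CNN classifier.

cap.release()
```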

  7. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Through an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling and noise filtering, and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Fuzzy C-means classification for corrosion evolution of steel images

    Science.gov (United States)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory for classifying steel surfaces according to their rusting time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels without any pre-processing, and of neighbourhood pixels. Secondly, we apply Gaussian noise with different standard deviations to the images to study the FCM method's tolerance to Gaussian noise. The noisy images simulate possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures
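
    For readers unfamiliar with the algorithm, a compact from-scratch Fuzzy C-means is sketched below and applied to the grey levels of a synthetic steel-surface image; the two-cluster setup and feature choice are illustrative assumptions rather than the authors' exact configuration.

```python
# Compact from-scratch Fuzzy C-means applied to pixel grey levels (illustrative only).
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns centres (c, n_features), memberships (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Grey levels of a (synthetic) steel-surface image as one-dimensional feature vectors.
img = np.random.default_rng(1).random((64, 64))
centres, U = fuzzy_cmeans(img.reshape(-1, 1), c=2)
rust_map = U.argmax(axis=1).reshape(img.shape)   # hard labels from the fuzzy memberships
```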

  9. Agricultural crop mapping and classification by Landsat images to evaluate water use in the Lake Urmia basin, North-west Iran

    Science.gov (United States)

    Fazel, Nasim; Norouzi, Hamid; Madani, Kaveh; Kløve, Bjørn

    2016-04-01

    Lake Urmia, once one of the largest hypersaline lakes in the world, has lost more than 90% of its surface area, mainly due to the intensive expansion of agriculture, which uses more than 90% of all water in the region. Access to accurate and up-to-date information on the extent and distribution of individual crop types, associated with land use changes and practices, has significant value in intensively farmed regions. Explicit information on croplands can be useful for sustainable water resources, land and agriculture planning and management. Remote sensing has proven to be a more cost-effective alternative to traditional statistically-based ground surveys of crop coverage, which are costly and provide insufficient information. Satellite images along with ground surveys can provide the necessary information on the spatial coverage and spectral responses of croplands for sustainable agricultural management. This study strives to differentiate crop types and agricultural practices to achieve a more detailed crop map of the Lake Urmia basin. The mapping approach consists of a two-stage supervised classification of multi-temporal multi-spectral high resolution images obtained from the Landsat imagery archive. Irrigated and non-irrigated croplands and orchards were separated from other major land covers (urban, ranges, bare lands, and water) in the region by means of the maximum likelihood supervised classification method. Field data collected during 2015, land use maps generated in 2007, and Google Earth comparisons were used to form a training data set for the supervised classification. In the second stage, non-agricultural lands were masked and the supervised classification was applied to the Landsat image stack to identify seven major crop types in the region (wheat and barley, beetroot, corn, sunflower, alfalfa, vineyards, and apple orchards). The obtained results can be of significant value to the Urmia Lake restoration efforts which

  10. SHADOW DETECTION FROM VERY HIGH RESOLUTION SATELLITE IMAGE USING GRABCUT SEGMENTATION AND RATIO-BAND ALGORITHMS

    Directory of Open Access Journals (Sweden)

    N. M. S. M. Kadhim

    2015-03-01

    Full Text Available Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches that are considered state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, examined for its performance in detecting the shadow regions of urban objects using the true colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectral range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the derived shadow map from the Quickbird image indicates
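
    The first approach can be illustrated with a simple per-pixel band-ratio rule; the band ordering, thresholds and their direction below are assumptions for a generic 4-band VHR tile, not the values used in the paper.

```python
# Hedged band-ratio shadow-candidate sketch for a generic 4-band (B, G, R, NIR) tile;
# the thresholds and their direction are assumptions, not the paper's values.
import numpy as np

def shadow_candidates(blue, green, red, nir, dark_thresh=0.15, ratio_thresh=1.5):
    visible = (blue + green + red) / 3.0
    ratio = (nir + 1e-6) / (visible + 1e-6)       # per-pixel NIR / visible ratio
    # Candidate shadows here: dark in the visible range combined with a band-ratio
    # test; the appropriate ratio direction and values depend on sensor and scene.
    return (visible < dark_thresh) & (ratio > ratio_thresh)

bands = np.random.default_rng(2).random((4, 256, 256))   # synthetic reflectance tile
shadow_mask = shadow_candidates(*bands)
```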

  11. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    Science.gov (United States)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches that are considered state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, examined for its performance in detecting the shadow regions of urban objects using the true colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectral range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the derived shadow map from the Quickbird image indicates significant performance of

  12. ASSESSMENT OF LANDSCAPE CHARACTERISTICS ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses. This...

  13. IoSiS: a radar system for imaging of satellites in space

    Science.gov (United States)

    Jirousek, M.; Anger, S.; Dill, S.; Schreiber, E.; Peichl, M.

    2017-05-01

    Space debris is nowadays one of the main threats to satellite systems, especially in low Earth orbit (LEO). More than 700,000 debris objects with the potential to destroy or damage a satellite are estimated to exist. The effects of an impact often are not identifiable directly from the ground. High-resolution radar images are helpful in analyzing possible damage. Therefore, DLR is currently developing a radar system called IoSiS (Imaging of Satellites in Space), based on an existing steering antenna structure and our multi-purpose high-performance radar system GigaRad for experimental investigations. GigaRad is a multi-channel system operating at X band and using a bandwidth of up to 4.4 GHz in the IoSiS configuration, providing fully separated transmit (TX) and receive (RX) channels and separate antennas. For the observation of small satellites or space debris, a high-power traveling-wave-tube amplifier (TWTA) is mounted close to the TX antenna feed. For the experimental phase, IoSiS uses a 9 m TX and a 1 m RX antenna mounted on a common steerable positioner. High-resolution radar images are obtained by using Inverse Synthetic Aperture Radar (ISAR) techniques. Guided tracking of known objects during an overpass allows wide azimuth observation angles, so that an azimuth resolution comparable to the range resolution can be achieved. This paper outlines the main technical characteristics of the IoSiS radar system, including the basic setup of the antenna, the radar instrument with the RF error correction, and the measurement strategy. A short description of a simulation tool for the whole instrument and the expected images is also given.

  14. Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging.

    Science.gov (United States)

    Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul

    2017-12-01

    There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Owing to the lack of source segregation practice, a need for automated segregation of recyclables from MSW exists in developing countries. This paper reports a thermal imaging based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using the thermal imaging technique for classification and a robotic system for sorting of recyclables in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal imaging based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Astrophysical Information from Objective Prism Digitized Images: Classification with an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bratsolis Emmanuel

    2005-01-01

    Full Text Available Stellar spectral classification is not only a tool for labeling individual stars but is also useful in studies of stellar population synthesis. Extracting the physical quantities from the digitized spectral plates involves three main stages: detection, extraction, and classification of spectra. Low-dispersion objective prism images have been used and automated methods have been developed. The detection and extraction problems have been presented in previous works. In this paper, we present a classification method based on an artificial neural network (ANN. We make a brief presentation of the entire automated system and we compare the new classification method with the previously used method of maximum correlation coefficient (MCC. Digitized photographic material has been used here. The method can also be used on CCD spectral images.

  16. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions

  17. Quantitative analysis and classification of AFM images of human hair.

    Science.gov (United States)

    Gurden, S P; Monteiro, V F; Longo, E; Ferreira, M M C

    2004-07-01

    The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance, but also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to the high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, consisting of hair samples from (a) untreated and bleached hair samples, and (b) the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis is used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were found to be classified correctly.

  18. Discovering significant evolution patterns from satellite image time series.

    Science.gov (United States)

    Petitjean, François; Masseglia, Florent; Gançarski, Pierre; Forestier, Germain

    2011-12-01

    Satellite Image Time Series (SITS) provide us with precious information on land cover evolution. By studying these series of images we can both understand the changes of specific areas and discover global phenomena that spread over larger areas. Changes that occur throughout the sensing time can spread over very long periods and may have different start and end times depending on the location, which complicates the mining and analysis of series of images. This work focuses on frequent sequential pattern mining (FSPM) methods, since this family of methods fits the above-mentioned issues. It consists of finding the most frequent evolution behaviors and is able to extract long-term changes as well as short-term ones, whenever the change may start and end. However, applying FSPM methods to SITS implies confronting two main challenges related to the characteristics of SITS and the domain's constraints. First, satellite images associate multiple measures with a single pixel (the radiometric levels of different wavelengths corresponding to infra-red, red, etc.), which makes the search space multi-dimensional and thus requires specific mining algorithms. Furthermore, the non-evolving regions, which are the vast majority and overwhelm the evolving ones, challenge the discovery of these patterns. We propose a SITS mining framework that enables discovery of these patterns despite these constraints and characteristics. Our proposal is inspired by FSPM and provides a relevant visualization principle. Experiments carried out on 35 images sensed over 20 years show the proposed approach makes it possible to extract relevant evolution behaviors.

  19. AUTOMATED CLASSIFICATION AND SEGREGATION OF BRAIN MRI IMAGES INTO IMAGES CAPTURED WITH RESPECT TO VENTRICULAR REGION AND EYE-BALL REGION

    Directory of Open Access Journals (Sweden)

    C. Arunkumar

    2014-05-01

    Full Text Available Magnetic Resonance Imaging (MRI) images of the brain are used for the detection of various brain diseases, including tumors. In such cases, classification of MRI images captured with respect to the ventricular and eyeball regions helps in the automated location and classification of such diseases. The methods employed in the paper can segregate the given brain MRI images into those captured with respect to the ventricular region and those captured with respect to the eyeball region. First, the given brain MRI image is segmented using the Particle Swarm Optimization (PSO) algorithm, an optimized algorithm for MRI image segmentation. The algorithm proposed in the paper is then applied to the segmented image. It detects whether the image contains a ventricular region or an eyeball region and classifies it accordingly.

  20. Foodstuff Survey Around a Major Nuclear Facility with Test of Satellite Image Application

    International Nuclear Information System (INIS)

    Fledderman, P.D.

    1999-01-01

    'A foodstuff survey was performed around the Savannah River Site, Aiken SC. It included a census of buildings and fields within 5 km of the boundary and determination of the locations and amounts of crops grown within 80 km of SRS center. Recent information for this region was collected on the amounts of meat, poultry, milk, and eggs produced, of deer hunted, and of sports fish caught. The locations and areas devoted to growing each crop were determined in two ways: by the usual process of assuming uniform crop distribution in each county on the basis of agricultural statistics reported by state agencies, and by analysis of two LANDSAT TM images obtained in May and September. For use with environmental radionuclide transfer and radiation dose calculation codes, locations within 80 km were defined for 64 sections by 16 sectors centered on the Site and by 16-km distance intervals from 16 km to 80 km. Most locally-raised foodstuff was distributed regionally and not retained locally for consumption. For four food crops, the amounts per section based on county agricultural statistics prorated by area were compared with the amounts per section based on satellite image analysis. The median ratios of the former to the latter were 0.6 - 0.7, suggesting that the two approaches are comparable but that satellite image analysis gave consistently higher amounts. Use of satellite image analysis is recommended on the basis of these findings to obtain site-specific, as compared to area-averaged, information on crop locations in conjunction with radionuclide pathway modelling. Some improvements in technique are suggested for satellite image application to characterize additional crops.'

  1. A Public Image Database for Benchmark of Plant Seedling Classification Algorithms

    DEFF Research Database (Denmark)

    Giselsson, Thomas Mosgaard; Nyholm Jørgensen, Rasmus; Jensen, Peter Kryger

    A database of images of approximately 960 unique plants belonging to 12 species at several growth stages is made publicly available. It comprises annotated RGB images with a physical resolution of roughly 10 pixels per mm. To standardise the evaluation of classification results obtained...

  2. Polar-Orbiting Satellite (POES) Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Visible and Infrared satellite imagery taken from camera systems or radiometer instruments on satellites in orbit around the poles. Satellite campaigns include...

  3. 3D reconstruction from multi-view VHR-satellite images in MicMac

    Science.gov (United States)

    Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur

    2018-05-01

    This work addresses the generation of high quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedure, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.

  4. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    Science.gov (United States)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  5. THE ILAC-PROJECT: SUPPORTING ANCIENT COIN CLASSIFICATION BY MEANS OF IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Kavelar

    2013-07-01

    Full Text Available This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  6. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    Science.gov (United States)

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
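
    A hedged sketch of this kind of feature extraction is shown below: the stationary (shift-invariant) wavelet transform from PyWavelets provides subband coefficients whose simple statistics act as texture features, classified with LDA under leave-one-out. The images, wavelet choice and statistics are placeholders, not the paper's exact setup.

```python
# Sketch of shift-invariant wavelet features with LDA / leave-one-out evaluation
# (PyWavelets and scikit-learn assumed; data and statistics are placeholders).
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def swt_features(img, wavelet="db2", level=2):
    feats = []
    for _, (cH, cV, cD) in pywt.swt2(img, wavelet, level=level):
        for band in (cH, cV, cD):
            feats += [np.mean(np.abs(band)), np.std(band)]   # simple homogeneity statistics
    return np.array(feats)

rng = np.random.default_rng(3)
images = rng.random((30, 64, 64))          # synthetic stand-ins for retinal image patches
y = rng.integers(0, 2, size=30)            # normal vs. abnormal labels
X = np.array([swt_features(im) for im in images])

score = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print("leave-one-out accuracy:", score)
```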

  7. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they have been built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.

  8. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the centre of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.

  9. Exploiting machine learning algorithms for tree species classification in a semiarid woodland using RapidEye image

    CSIR Research Space (South Africa)

    Adelabu, S

    2013-11-01

    Full Text Available in semiarid environments. In this study, we examined the suitability of 5-band RapidEye satellite data for the classification of five tree species in mopane woodland of Botswana using machine learning algorithms with limited training samples. We performed...

  10. Optimizing the Attitude Control of Small Satellite Constellations for Rapid Response Imaging

    Science.gov (United States)

    Nag, S.; Li, A.

    2016-12-01

    Distributed Space Missions (DSMs), such as formation flight and constellations, are being recognized as important solutions to increase measurement samples over space and time. Given the increasingly accurate attitude control systems emerging in the commercial market, small spacecraft now have the ability to slew and point within a few minutes' notice. In spite of hardware development for CubeSats at the payload (e.g. NASA InVEST) and subsystem (e.g. Blue Canyon Technologies) levels, software development for tradespace analysis in constellation design (e.g. Goddard's TAT-C), planning and scheduling development for single spacecraft (e.g. GEO-CAPE), and aerial flight path optimization for UAVs (e.g. NASA Sensor Web), there is a gap in open-source, open-access software tools for planning and scheduling distributed satellite operations in terms of pointing and observing targets. This paper will demonstrate results from a tool being developed for scheduling pointing operations of narrow field-of-view (FOV) sensors over a mission lifetime to maximize metrics such as global coverage and revisit statistics. Past research has shown the need for at least fourteen satellites to cover the Earth globally every day using a Landsat-like sensor. Increasing the FOV three times reduces the requirement to four satellites, but adds image distortion and BRDF complexities to the observed reflectance. If narrow-FOV sensors on a small satellite constellation were commanded using robust algorithms to slew their sensors dynamically, they would be able to cover the global landmass in a coordinated way much faster, without compromising spatial resolution or introducing BRDF effects. Our algorithm to optimize constellation satellite pointing is based on a dynamic programming approach under the constraints of orbital mechanics and existing attitude control systems for small satellites. As a case study for our algorithm, we minimize the time required to cover the 17000 Landsat images with maximum signal to noise ratio fall

  11. Efficient HIK SVM learning for image classification.

    Science.gov (United States)

    Wu, Jianxin

    2012-10-01

    Histograms are used in almost every aspect of image processing and computer vision, from visual descriptors to image representations. Histogram intersection kernel (HIK) and support vector machine (SVM) classifiers are shown to be very effective in dealing with histograms. This paper presents contributions concerning HIK SVM for image classification. First, we propose intersection coordinate descent (ICD), a deterministic and scalable HIK SVM solver. ICD is much faster than, and has similar accuracies to, general purpose SVM solvers and other fast HIK SVM training methods. We also extend ICD to the efficient training of a broader family of kernels. Second, we show an important empirical observation that ICD is not sensitive to the C parameter in SVM, and we provide some theoretical analyses to explain this observation. ICD achieves high accuracies in many problems, using its default parameters. This is an attractive property for practitioners, because many image processing tasks are too large to choose SVM parameters using cross-validation.
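
    Although the paper's contribution is the ICD solver, the HIK itself is easy to reproduce: scikit-learn's SVC accepts the kernel as a callable, as in the sketch below with synthetic L1-normalised histograms (a slower but functionally equivalent route to HIK SVM training).

```python
# Histogram intersection kernel passed to sklearn's SVC as a callable
# (functionally equivalent to HIK SVM training, though slower than ICD).
import numpy as np
from sklearn.svm import SVC

def hik(X, Y):
    # K[i, j] = sum_k min(X[i, k], Y[j, k])
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

rng = np.random.default_rng(4)
X = rng.random((200, 64))
X /= X.sum(axis=1, keepdims=True)          # L1-normalised histograms
y = rng.integers(0, 2, size=200)

clf = SVC(kernel=hik, C=1.0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```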

  12. Asian Dust Weather Categorization with Satellite and Surface Observations

    Science.gov (United States)

    Lin, Tang-Huang; Hsu, N. Christina; Tsay, Si-Chee; Huang, Shih-Jen

    2011-01-01

    This study categorizes various dust weather types by means of satellite remote sensing over central Asia. Airborne dust particles can be identified by satellite remote sensing because of the different optical properties exhibited by coarse and fine particles (i.e. varying particle sizes). If a correlation can be established between the retrieved aerosol optical properties and surface visibility, the intensity of dust weather can be more effectively and consistently discerned using satellite rather than surface observations. In this article, datasets consisting of collocated products from the Moderate Resolution Imaging Spectroradiometer on Aqua and surface measurements are analysed. The results indicate an exponential relationship between surface visibility and the satellite-retrieved aerosol optical depth, which is subsequently used to categorize the dust weather. The satellite-derived spatial frequency distributions of the dust weather types are consistent with China's weather station reports during 2003, indicating that dust weather classification using satellite data is highly feasible. Although the springtime period from 2004 to 2007 may not be sufficient for statistical significance, our results reveal an increasing tendency in both intensity and frequency of dust weather over central Asia during this time period.

  13. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
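
    A minimal sketch of the unsupervised step, assuming the Keras API: a single-hidden-layer autoencoder with an L1 activity penalty learns sparse codes from random patches, and its encoder weights could then serve as convolution filters for the target-domain images. Patch size, code size and penalty weight are assumptions.

```python
# Minimal Keras sketch of a single-layer sparse autoencoder trained on random
# patches; the learned encoder weights would act as convolution filters later.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

patch_dim = 8 * 8 * 3                                     # flattened 8x8 RGB patches
patches = np.random.default_rng(5).random((10000, patch_dim)).astype("float32")

inputs = keras.Input(shape=(patch_dim,))
code = layers.Dense(100, activation="sigmoid",
                    activity_regularizer=regularizers.l1(1e-4))(inputs)   # sparsity penalty
decoded = layers.Dense(patch_dim, activation="sigmoid")(code)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(patches, patches, epochs=5, batch_size=256, verbose=0)

filters = autoencoder.layers[1].get_weights()[0]          # one learned filter per hidden unit
```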

  14. Lidar-based individual tree species classification using convolutional neural network

    Science.gov (United States)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. To date, however, tree species are specified manually by skilled workers. Previous works on automatic tree species classification mainly focused on aerial or satellite images, and few works have been reported on classification techniques using ground-based sensor data. Several candidate sensors can be considered for classification, such as RGB or multi/hyper-spectral cameras. Above all candidates, we use terrestrial lidar because it can obtain a high resolution point cloud in the dark forest. We selected bark texture as the classification criterion, since it clearly represents unique characteristics of each species and does not change its appearance with seasonal variation or ageing. In this paper, we propose a new method for automatic individual tree species classification based on terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that describes well the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.

  15. An Approach to Orbital Image Classification for the Assessment of Potato Plantation Areas

    Directory of Open Access Journals (Sweden)

    Vassiliki Terezinha Galvão Boulomytis

    2013-12-01

    Full Text Available In the city of Bueno Brandão, in the south of Minas Gerais State, Brazil, the Watershed of Rio das Antas is located upstream of the public water supply and is susceptible to hydro-degradation due to the intensive agricultural activities in the area. Potato is the most significant crop in the city. Because plantations can encroach on preservation areas, mainly those surrounding water courses and springs, it is very important to assess the plantation sites in order to avoid the risk of water contamination. The procedures adopted by farmers generally present the following features: intensive use of agro-chemicals and cropping on slopes steeper than 20%, close to or within permanent preservation areas. The scope of this study was to develop a suitable methodology for the assessment of the plantation areas within a short processing time, since the period between planting and harvest is at most six months. These areas vary from year to year, as the plantation sites often change due to land degradation. Because of that, geotechnologies are recommended to detect the plantation areas through the use of satellite images and accurate data processing. Considering the availability of medium resolution LANDSAT images, methods for their appropriate classification were investigated to provide effective target detection.

  16. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM. However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN, we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  17. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications could be mentioned in this field: cellular phenotype identification, disease detection and treatment, identification of virus entry into cells, and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented in medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of typical image processing: segmentation, feature extraction and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey on cell identification using different image processing techniques.

  18. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy. Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thus...

  19. Lineament systems identification in Banten site using SPOT 5 satellite image

    International Nuclear Information System (INIS)

    Yuliastuti; Heni Susiati; Yunus Daud; A-Sarwiyana Sastratenaya

    2013-01-01

    Lineament systems identification at the Banten site using a SPOT 5 satellite image has been performed. Based on a regional site survey of Java Island, Banten is one of the potential candidate sites. The objective of this study was to determine the direction and chronology of regional lineament morphology, which is considered to represent faults or faulting at the Banten site. The methodology used in this study covered satellite image cropping, band selection, edge enhancement filtering, lineament extraction and lineament analysis. The results of the study showed that there are three dominant lineament groups, namely N-S, NW-SE, and E-W. Based on the formation chronology of the lineaments, the N-S group is the oldest, followed by the E-W group, with the NW-SE group being the youngest. These lineament groups have been confirmed as a manifestation of the fault system structure. (author)

  20. Novelty detection for breast cancer image classification

    Science.gov (United States)

    Cichosz, Pawel; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold

    2016-09-01

    Using classification learning algorithms for medical applications may require not only refined model creation techniques and careful unbiased model evaluation, but also detecting the risk of misclassification at the time of model application. This is addressed by novelty detection, which identifies instances for which the training set is not sufficiently representative and for which it may be safer to restrain from classification and request a human expert diagnosis. The paper investigates two techniques for isolated instance identification, based on clustering and one-class support vector machines, which represent two different approaches to multidimensional outlier detection. The prediction quality for isolated instances in breast cancer image data is evaluated using the random forest algorithm and found to be substantially inferior to the prediction quality for non-isolated instances. Each of the two techniques is then used to create a novelty detection model which can be combined with a classification model and used at the time of prediction to detect instances for which the latter cannot be reliably applied. Novelty detection is demonstrated to improve random forest prediction quality and argued to deserve further investigation in medical applications.
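
    The combination of a novelty detector with a classifier can be sketched as below, using a one-class SVM (one of the two approaches mentioned above) to flag isolated instances and a random forest for the remaining ones; the synthetic data and the nu threshold are illustrative assumptions.

```python
# One-class SVM novelty detector combined with a random forest classifier:
# isolated instances are deferred instead of being classified (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(6)
X_train = rng.random((300, 20))
y_train = rng.integers(0, 2, size=300)
X_new = rng.random((50, 20)) + np.array([0.0] * 19 + [2.0])   # last feature shifted: likely novel

novelty = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

is_known = novelty.predict(X_new) == 1                     # +1 = inlier, -1 = isolated/novel
predictions = np.where(is_known, clf.predict(X_new), -1)   # -1 = defer to a human expert
```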

  1. Semi-Automatic Classification Of Histopathological Images: Dealing With Inter-Slide Variations

    Directory of Open Access Journals (Sweden)

    Michael Gadermayr

    2016-06-01

    With 50 labelled sample patches available for a certain whole slide image, the overall classification rate increased from 92 % to 98 % through including the interactive labelling step. Even with only 20 labelled patches, accuracy already increased to 97 %. Without a pre-trained model, if training is performed on target domain data only, 88 % (20 labelled samples) and 95 % (50 labelled samples) accuracy, respectively, were obtained. If enough target domain data was available (about 20 images), the amount of source domain data was of minor relevance. The difference in outcome between a source domain training data set containing 100 patches from one whole slide image and a set containing 700 patches from seven images was lower than 1 %. Contrarily, without target domain data, the difference in accuracy was 10 % (82 % compared to 92 %) between these two settings. Execution runtime between two interaction steps is significantly below one second (0.23 s), which is an important usability criterion. It proved to be beneficial to select specific target domain data in an active learning sense based on the currently available trained model. While experimental evaluation provided strong empirical evidence for increased classification performance with the proposed method, the additional manual effort can be kept at a low level. The labelling of e.g. 20 images per slide is surely less time consuming than the validation of a complete whole slide image processed with a fully automatic, but less reliable, segmentation approach. Finally, it should be highlighted that the proposed interaction protocol could easily be adapted to other histopathological classification or segmentation tasks, also for implementation in a clinical system.

  2. Exploration of mineral resource deposits based on analysis of aerial and satellite image data employing artificial intelligence methods

    Science.gov (United States)

    Osipov, Gennady

    2013-04-01

    We propose a solution to the problem of exploring various mineral resource deposits and determining their forms / classifying their types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The data processing and forecasting workflow can be divided into several stages: Pre-processing of images: normalization of colour and luminosity characteristics, determination of the necessary contrast level and integration of a large number of separate photos into a single map of the region. Construction of a semantic map image: recognition of the bitmapped image and identification of objects and primitives known to the system. Intelligent analysis: at this stage the acquired information is analyzed with the help of a knowledge base, which contains the so-called "attention landscapes" of experts. Methods used for recognition and identification of images: a) a combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d) cognitive technology for processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - the registration of the so-called "attention landscape" of experts - is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on these principles involves the following stages, which are implemented in corresponding program agents: Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and creation of generalized OI (GOI) -> Mode of recognition and interpretation of unknown images. Training mode

  3. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716
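
    The spectral-plus-texture setup can be approximated as follows: a 9 x 9 local entropy image is stacked with the spectral bands, and a Gaussian maximum-likelihood classifier (scikit-learn's QDA is an equivalent formulation with class-specific covariances) produces the thematic map. The synthetic bands, class labels and library calls are assumptions, not the authors' processing chain.

```python
# Hedged sketch: spectral bands + a 9x9 local entropy texture image, classified
# with a Gaussian maximum-likelihood model (QDA); synthetic data throughout.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import square
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(7)
bands = rng.integers(0, 255, size=(4, 128, 128), dtype=np.uint8)   # stand-in multispectral tile

texture = entropy(bands[0], square(9))                  # 9 x 9 window, as in the study
features = np.dstack(list(bands) + [texture]).reshape(-1, 5)

train_idx = rng.choice(len(features), 1000, replace=False)
y_train = rng.integers(0, 6, size=1000)                 # hypothetical vegetation classes

mlc = QuadraticDiscriminantAnalysis().fit(features[train_idx], y_train)
thematic_map = mlc.predict(features).reshape(128, 128)
```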

  4. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.

  5. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source to improve the geo-location accuracy of optical images, which is in the order of tens of meters. In this paper, a deep learning-based approach for the geo-localization accuracy improvement of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision with respect to state-of-the-art approaches.
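    The exact network topology is not reproduced in the record, so the snippet below is only a rough sketch of a pseudo-siamese patch-matching model in tf.keras: two separate convolutional branches (since SAR and optical statistics differ) whose concatenated features feed a small head regressing the two-dimensional shift. Patch size, filter counts, and the regression loss are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(name: str) -> tf.keras.Sequential:
    """A small convolutional feature extractor for one sensor."""
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation='relu', padding='same'),
        layers.GlobalAveragePooling2D(),
    ], name=name)

# Separate (pseudo-siamese) branches for the two sensors.
sar_in = layers.Input((64, 64, 1), name='sar_patch')
opt_in = layers.Input((64, 64, 1), name='optical_patch')
features = layers.Concatenate()([branch('sar_branch')(sar_in),
                                 branch('optical_branch')(opt_in)])
shift = layers.Dense(2, name='xy_shift')(layers.Dense(256, activation='relu')(features))

model = Model([sar_in, opt_in], shift)
model.compile(optimizer='adam', loss='mse')   # regress the 2-D shift in pixels
```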

  6. Convolutional Neural Network for Multi-Source Deep Learning Crop Classification in Ukraine

    Science.gov (United States)

    Lavreniuk, M. S.

    2016-12-01

    Land cover and crop type maps are among the most essential inputs when dealing with environmental and agriculture monitoring tasks [1]. For a long time the neural network (NN) approach was one of the most efficient and popular approaches for most applications, including crop classification using remote sensing data, providing a high overall accuracy (OA) [2]. In recent years the most popular and efficient method for multi-sensor and multi-temporal land cover classification has been convolutional neural networks (CNNs). Taking into account the presence of clouds in optical data, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of optical imagery from the Landsat-8 satellite. After missing data restoration, optical data from Landsat-8 were merged with Sentinel-1A radar data for better crop type discrimination [3]. An ensemble of CNNs is proposed for supervised classification of multi-temporal satellite images. Each CNN in the ensemble is a 1-D CNN with 4 layers implemented using Google's TensorFlow library. The efficiency of the proposed approach was tested on a time series of Landsat-8 and Sentinel-1A images over the JECAM test site (Kyiv region) in Ukraine in 2015. The overall classification accuracy for the ensemble of CNNs was 93.5%, which outperformed an ensemble of multi-layer perceptrons (MLPs) by +0.8% and allowed us to better discriminate summer crops, in particular maize and soybeans. For 2016 we would like to validate this method using Sentinel-1 and Sentinel-2 data for the territory of Ukraine within the ESA country-level demonstration project Sen2Agri. 1. A. Kolotii et al., "Comparison of biophysical and satellite predictors for wheat yield forecasting in Ukraine," The Int. Arch. of Photogram., Rem. Sens. and Spatial Inform. Scie., vol. 40, no. 7, pp. 39-44, 2015. 2. F. Waldner et al., "Towards a set of agrosystem-specific cropland mapping methods to address the global cropland diversity," Int. Journal of Rem. Sens. vol. 37, no. 14, pp
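    As a loose illustration of one ensemble member, the sketch below builds a small 1-D CNN over a per-pixel multi-temporal feature vector with tf.keras. The record names TensorFlow but not the exact four-layer topology, so the layer sizes, input dimensions, and number of members here are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_member(n_timesteps: int, n_features: int, n_classes: int) -> tf.keras.Model:
    """One ensemble member: a small 1-D CNN over the per-pixel time series."""
    model = tf.keras.Sequential([
        layers.Input((n_timesteps, n_features)),
        layers.Conv1D(32, 3, activation='relu', padding='same'),
        layers.Conv1D(64, 3, activation='relu', padding='same'),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# An ensemble is simply several independently initialized members whose
# softmax outputs are averaged at prediction time.
ensemble = [make_member(n_timesteps=10, n_features=8, n_classes=12) for _ in range(5)]
```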

  7. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Full Text Available Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling’s T2 limits estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not been previously applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near infrared hyperspectral images. The results showed that the approach used here can be successfully applied for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and regions-of-interest scheme validation, is also presented.
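    A minimal sketch of the independent-component step only, assuming scikit-learn's FastICA applied to the unfolded pixel spectra; the cross-validation schemes, jack-knife uncertainty estimates, and Hotelling's T2 limits of the paper are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_scores(cube: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Unmix a hyperspectral cube (rows, cols, bands) into independent components.

    Returns an array of shape (rows, cols, n_components) holding the
    per-pixel component scores that can be fed to a classifier.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    ica = FastICA(n_components=n_components, random_state=0)
    scores = ica.fit_transform(X)                # (rows*cols, n_components)
    return scores.reshape(rows, cols, n_components)
```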

  8. Fine-grained leukocyte classification with deep residual learning for microscopic images.

    Science.gov (United States)

    Qin, Feiwei; Gao, Nannan; Peng, Yong; Wu, Zizhao; Shen, Shuying; Grudtsin, Artur

    2018-08-01

    Leukocyte classification and cytometry have wide applications in the medical domain; previous research has usually exploited machine learning techniques to classify leukocytes automatically. However, such methods are constrained by the earlier state of machine learning: extracting distinctive features from raw microscopic images is difficult, and the widely used SVM classifier has relatively few parameters to tune, so these methods cannot efficiently handle fine-grained classification when the white blood cells span up to 40 categories. Based on deep learning theory, a systematic study is conducted on finer leukocyte classification in this paper. A deep residual neural network based leukocyte classifier is constructed first, which can imitate the domain expert's cell recognition process and extract salient features robustly and automatically. Then the deep neural network classifier's topology is adjusted according to prior knowledge of the white blood cell test. After that, a microscopic image dataset with almost one hundred thousand labeled leukocytes belonging to 40 categories is built, and combined training strategies are adopted to give the designed classifier good generalization ability. The proposed deep residual neural network based classifier was tested on a microscopic image dataset with 40 leukocyte categories. It achieves a top-1 accuracy of 77.80% and a top-5 accuracy of 98.75% during the training procedure. The average accuracy on the test set is nearly 76.84%. This paper presents a fine-grained leukocyte classification method for microscopic images, based on deep residual learning theory and medical domain knowledge. Experimental results validate the feasibility and effectiveness of our approach. Extended experiments support that the fine-grained leukocyte classifier could be used in real medical applications, assist doctors in diagnosing diseases, and significantly reduce manual effort. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    Science.gov (United States)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of staining patterns in Human Epithelial-2 (HEp-2) cell images has been widely used to identify autoimmune diseases via the anti-Nuclear antibodies (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Besides, the scale of available benchmark datasets is small, which is not well suited to deep learning methods. This issue directly influences the accuracy of cell classification, even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results over two benchmark datasets demonstrate that the proposed method achieves superior performance in terms of accuracy compared with existing methods.

  10. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Full Text Available Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing the usual noise-like granular aspect and complicating image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially, the multi-component data should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPC). Both methods reduce noise and order the multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-component data provided large reductions in noise levels, and color composites considering the first NAPC enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  11. The fusion of satellite and UAV data: simulation of high spatial resolution band

    Science.gov (United States)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and the multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
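    As a small illustration of the band-simulation step only, a panchromatic band can be approximated as a weighted linear combination of the R, G and B channels. The weights below are the classic luminance coefficients used purely as placeholders, since the record does not state the coefficients, and the subsequent Gram-Schmidt sharpening is left to existing remote-sensing software.

```python
import numpy as np

def simulate_panchromatic(rgb: np.ndarray, weights=(0.299, 0.587, 0.114)) -> np.ndarray:
    """Simulate a panchromatic band as a weighted linear combination of R, G, B.

    `rgb` has shape (rows, cols, 3); the weights are placeholders and would
    normally be tuned to match the spectral response of the multispectral
    sensor being sharpened.
    """
    w = np.asarray(weights, dtype=np.float64)
    return np.tensordot(rgb.astype(np.float64), w / w.sum(), axes=([2], [0]))
```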

  12. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource that has to be exploited. In the design process of both active and passive solar energy systems, radiation data are required for the site, with proper spatial resolution. Generally, a radiometric station network is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be utilized as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, the correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the necessity of ground measurements. This methodology utilizes a modified Heliosat-2 model and applies to all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10% and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean measured value. (author)

  13. THE ANALYSIS OF MOISTURE DEFICIT BASED ON MODIS AND LANDSAT SATELLITE IMAGES. CASE STUDY: THE OLTENIA PLAIN

    Directory of Open Access Journals (Sweden)

    ONȚEL IRINA

    2014-03-01

    Full Text Available Satellite images are an important source of information for identifying and analysing hazardous climatic phenomena such as dryness and drought. These phenomena are characterized by scarce rainfall, increased evapotranspiration and a high soil moisture deficit. The soil water reserve depletes to the wilting coefficient, soon followed by pedological drought, which has negative effects on vegetation and agricultural productivity. The MODIS satellite images (Moderate Resolution Imaging Spectroradiometer) allow the monitoring of vegetation throughout the entire vegetative period, with a frequency of 1-2 days and spatial resolutions of 250 m, 500 m and 1 km. Another useful source of information is the LANDSAT satellite images, with a spatial resolution of 30 m. Based on MODIS and Landsat satellite images, moisture monitoring indices such as SIWSI (Shortwave Infrared Water Stress Index) were calculated. Consequently, some years with low moisture, such as 2000, 2002, 2007 and 2012, could be identified. Spatially, the areas with moisture deficit varied from one year to another over the whole analysed period (2000-2012). The remote sensing results were correlated with the Standard Precipitation Anomaly, which gives a measure of the severity of a wet or dry event.
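    The record does not spell out which bands were used, so the snippet below only sketches one common SIWSI formulation, a normalized difference of a near-infrared and a shortwave-infrared reflectance band (for MODIS, typically band 2 against band 5 or 6); treat the band choice as an assumption rather than the authors' exact recipe.

```python
import numpy as np

def siwsi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Shortwave Infrared Water Stress Index: (NIR - SWIR) / (NIR + SWIR).

    `nir` and `swir` are co-registered reflectance arrays of the same shape.
    """
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / np.maximum(nir + swir, 1e-12)
```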

  14. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    Science.gov (United States)

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification approach for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of the pipe are analyzed and crack features are extracted. In the classification step a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and an error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation of feature values and the backpropagation network, with its learning ability, will show good classification efficiency.

  15. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Full Text Available On the basis of the cluster validity function based on geometric probability in the literature [1, 2], we propose a cluster analysis method based on geometric probability to process large amounts of data in a rectangular area. The basic idea is top-down stepwise refinement: first categories, then subcategories. At all clustering levels, the cluster validity function based on geometric probability is used first to determine the clusters and the gathering direction, and then the cluster centers and borders are determined. Through TM remote sensing image classification examples, the method is compared with the supervised and unsupervised classification in ERDAS and with the cluster analysis method based on geometric probability in a two-dimensional square proposed in literature [2]. Results show that the proposed method can significantly improve the classification accuracy.

  16. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  17. Supervised Classification High-Resolution Remote-Sensing Image Based on Interval Type-2 Fuzzy Membership Function

    Directory of Open Access Journals (Sweden)

    Chunyan Wang

    2018-05-01

    Full Text Available Because of the degradation of classification accuracy caused by the uncertainty of pixel class and classification decisions in high-resolution remote-sensing images, we propose a supervised classification method based on an interval type-2 fuzzy membership function for high-resolution remote-sensing images. We analyze the data features of a high-resolution remote-sensing image and construct a type-1 membership function model in a homogeneous region by supervised sampling in order to characterize the uncertainty of the pixel class. On the basis of the fuzzy membership function model in the homogeneous region, and in accordance with the 3σ criterion of the normal distribution, we propose a method for modeling three types of interval type-2 membership functions and analyze the different types of functions to better express the uncertainty of the pixel class captured by the type-1 fuzzy membership function and to enhance the accuracy of the classification decision. Following the principle that importance increases as the distance decreases between the original, upper, and lower fuzzy memberships of the training data and the corresponding frequency value in the histogram, we use the weighted average of the three types of fuzzy membership as the new fuzzy membership of the pixel to be classified, and then integrate it with the neighborhood pixel relations to construct a classification decision model. We use the proposed method to classify real high-resolution remote-sensing images and synthetic images. Additionally, we qualitatively and quantitatively evaluate the test results. The results show that a higher classification accuracy can be achieved with the proposed algorithm.

  18. BOREAS TE-18 Landsat TM Physical Classification Image of the NSA

    Science.gov (United States)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 21-Jun-1995 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used in a way that is similar to training data to classify the image into the different land cover classes. The data are provided in a binary, image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  19. Simultaneous hierarchical segmentation and vectorization of satellite images through combined data sampling and anisotropic triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Prasad, Lakshman [Los Alamos National Laboratory; Dillard, Scott [PNNL

    2010-10-21

    The automatic detection, recognition, and segmentation of object classes in remotely sensed images is of crucial importance for scene interpretation and understanding. However, it is a difficult task because of the high variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and also to the extensive amount of available details. Therefore, there is still a strong demand for robust techniques for automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new algorithm composed of the following steps: (i) a non-uniform sampling scheme extracting the most salient pixels in the image, (ii) an anisotropic triangulation constrained by the sampled pixels taking into account both the strength and directionality of local structures present in the image, (iii) a polygonal grouping scheme merging, through techniques based on perceptual information, the obtained segments into a smaller number of higher-level vector objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for subsequent image analysis and/or interpretation.

  20. The effects of rectification and Global Positioning System errors on satellite image-based estimates of forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2010-01-01

    Satellite image-based maps of forest attributes are of considerable interest and are used for multiple purposes such as international reporting by countries that have no national forest inventory and small area estimation for all countries. Construction of the maps typically entails, in part, rectifying the satellite images to a geographic coordinate system, observing...

  1. Accuracy comparison of Pléiades satellite ortho-images using GPS ...

    African Journals Online (AJOL)

    resolution satellite ortho-image when different types of ground control are used. This required the execution of two orthorectification tests where only the type of GCPs differed. The results of these tests were interesting since it highlighted the ...

  2. Detection of High-Density Crowds in Aerial Images Using Texture Classification

    Directory of Open Access Journals (Sweden)

    Oliver Meynberg

    2016-06-01

    Full Text Available Automatic crowd detection in aerial images is certainly a useful source of information to prevent crowd disasters in large complex scenarios of mass events. A number of publications employ regression-based methods for crowd counting and crowd density estimation. However, these methods work only when a correct manual count is available to serve as a reference. Therefore, it is the objective of this paper to detect high-density crowds in aerial images, where counting- or regression-based approaches would fail. We compare two texture-classification methodologies on a dataset of aerial image patches which are grouped into ranges of different crowd density. These methodologies are: (1) a Bag-of-Words (BoW) model with two alternative local features encoded as Improved Fisher Vectors and (2) features based on a Gabor filter bank. Our results show that a classifier using either BoW or Gabor features can detect crowded image regions with 97% classification accuracy. In our tests of four classes of different crowd-density ranges, BoW-based features have a 5%-12% better accuracy than Gabor.
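    A rough sketch of the Gabor-bank alternative, assuming scikit-image's gabor filter and an RBF SVM; the frequencies, orientations, response statistics, and SVM parameters below are placeholders, and `patches`/`labels` stand for hypothetical pre-cut grey-level patches with their crowd-density class labels.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(patch: np.ndarray,
                   frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)) -> np.ndarray:
    """Mean and variance of Gabor filter response magnitudes for one patch."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(patch, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.asarray(feats)

def train_crowd_classifier(patches, labels):
    """Fit an RBF SVM on Gabor-bank features of the given patches."""
    X = np.stack([gabor_features(p) for p in patches])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    return clf.fit(X, labels)
```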

  3. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    Science.gov (United States)

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned over from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNN to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), that are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered as an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post- processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
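    As a minimal sketch of the kind of pre-processing described, the snippet below resizes an image and quantizes its palette with Pillow before transmission; the target size, palette depth, and the example file name are illustrative assumptions, not the parameters used in the paper.

```python
from PIL import Image

def compress_for_upload(path: str, size=(64, 64), n_colors: int = 16) -> Image.Image:
    """Downsample an image and quantize its palette before transmission.

    In practice the size and palette depth would be chosen so that the backend
    CNN (trained on equally compressed data) keeps its accuracy while the
    radio payload shrinks.
    """
    img = Image.open(path).convert('RGB')
    img = img.resize(size, Image.BILINEAR)          # spatial reduction
    img = img.quantize(colors=n_colors)             # color quantization (palette mode)
    return img

# Hypothetical usage:
# compressed = compress_for_upload('sensor_frame.jpg'); compressed.save('frame_small.png')
```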

  4. Threshold selection for classification of MR brain images by clustering method

    Energy Technology Data Exchange (ETDEWEB)

    Moldovanu, Simona [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania); Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi (Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania)

    2015-12-07

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images, consisting of 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each of these thresholds clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
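    A minimal sketch of the counting-and-clustering mechanics only, assuming the white-pixel count at a candidate threshold is the single feature fed to a SciPy dendrogram; the optimal thresholds themselves (T = 80 and T = 30 in the abstract) were found empirically by the authors and are not derived here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def white_pixel_counts(image: np.ndarray, thresholds) -> np.ndarray:
    """Number of above-threshold ('white') pixels for each candidate threshold."""
    return np.array([(image > t).sum() for t in thresholds])

def cluster_by_counts(images, threshold: int):
    """Hierarchically cluster images by their white-pixel count at one threshold.

    `images` is a list of 2-D grey-level arrays; the count at the chosen
    threshold is the single feature used to build the dendrogram.
    """
    counts = np.array([[float((img > threshold).sum())] for img in images])
    Z = linkage(counts, method='average')
    return Z, dendrogram(Z, no_plot=True)
```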

  5. Hybrid image classification technique for land-cover mapping in the Arctic tundra, North Slope, Alaska

    Science.gov (United States)

    Chaudhuri, Debasish

    Remotely sensed image classification techniques are very useful for understanding vegetation patterns and species combinations in the vast and mostly inaccessible arctic region. Previous research on mapping land cover and vegetation in the remote areas of northern Alaska achieved considerably lower accuracies compared to other biomes. The unique arctic tundra environment, with its short growing season, cloud cover, low sun angles, and snow and ice cover, hinders the effectiveness of remote sensing studies. The majority of image classification research done in this area, as reported in the literature, used traditional unsupervised clustering techniques with Landsat MSS data. Previous researchers also emphasized that SPOT/HRV-XS data lacked the spectral resolution to identify the small arctic tundra vegetation parcels. Thus, there is a motivation and research need to apply a new classification technique to develop an updated, detailed and accurate vegetation map at a higher spatial resolution, i.e. from SPOT-5 data. Traditional classification techniques in remotely sensed image interpretation are based on spectral reflectance values, with an assumption of the training data being normally distributed. Hence it is difficult to add ancillary data to classification procedures to improve accuracy. The purpose of this dissertation was to develop a hybrid image classification approach that effectively integrates ancillary information into the classification process and combines ISODATA clustering, a rule-based classifier and the Multilayer Perceptron (MLP) classifier, which uses an artificial neural network (ANN). The main goal was to find the best possible combination or sequence of classifiers for classifying tundra vegetation that yields higher accuracy than the existing classified vegetation map from SPOT data. Unsupervised ISODATA clustering and rule-based classification techniques were combined to produce an intermediate classified map which was

  6. Improved medical image modality classification using a combination of visual and textual features.

    Science.gov (United States)

    Dimitrovski, Ivica; Kocev, Dragi; Kitanovski, Ivan; Loskovska, Suzana; Džeroski, Sašo

    2015-01-01

    In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual and one textual types of features, and combinations thereof. We used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histogram and scale-invariant feature transform (and its variant opponentSIFT) as visual features and the standard bag-of-words textual representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify the SIFT and opponentSIFT features as the best performing features for modality classification. Next, the low-level fusion of the visual features improves the predictive performance of the classifiers. This is because the different features are able to capture different aspects of an image, their combination offering a more complete representation of the visual content in an image. Moreover, adding textual features further increases the predictive performance. Finally, the results obtained with our approach are the best results reported on these databases so far. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Object-Oriented Semisupervised Classification of VHR Images by Combining MedLDA and a Bilateral Filter

    Directory of Open Access Journals (Sweden)

    Shi He

    2015-01-01

    Full Text Available A Bayesian hierarchical model is presented to classify very high resolution (VHR) images in a semisupervised manner, in which both a maximum entropy discrimination latent Dirichlet allocation (MedLDA) and a bilateral filter are combined into a novel application framework. The primary contribution of this paper is to nullify the disadvantages of traditional probabilistic topic models on pixel-level supervised information and to achieve the effective classification of VHR remote sensing images. This framework consists of the following two iterative steps. In the training stage, the model utilizes the central labeled pixel and its neighborhood, as a squared labeled image object, to train the classifiers. In the classification stage, each central unlabeled pixel with its neighborhood, as an unlabeled object, is classified as a user-provided geoobject class label with the maximum posterior probability. Gibbs sampling is adopted for model inference. The experimental results demonstrate that the proposed method outperforms two classical SVM-based supervised classification methods and probabilistic-topic-models-based classification methods.

  8. Dissimilarity Application in Digitized Mammographic Images Classification

    Directory of Open Access Journals (Sweden)

    Ubaldo Bottigli

    2006-06-01

    Full Text Available The purpose of this work is the development of an automatic classification system which could be useful for radiologists in the investigation of breast cancer. The software has been designed in the framework of the MAGIC-5 collaboration. In the traditional way of learning from examples of objects, the classifiers are built in a feature space. However, an alternative way can be found by constructing decision rules on dissimilarity (distance) representations. In such a recognition process a new object is described by its distances to (a subset of) the training samples. The use of dissimilarities is especially of interest when features are difficult to obtain or when they have little discriminative power. In the automatic classification system the suspicious regions with a high probability of including a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by some features extracted from the co-occurrence matrix containing spatial statistics information on the ROI pixel grey tones. A dissimilarity representation of these features is made before the classification. A feed-forward neural network is employed to distinguish pathological records from non-pathological ones using the new features. The results obtained in terms of sensitivity and specificity will be presented.
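    A minimal sketch of the dissimilarity idea, assuming scikit-learn: each sample is re-described by its Euclidean distances to a randomly chosen prototype subset of the training set, and a small feed-forward network (MLPClassifier as a stand-in for the paper's network) is trained on that representation. `X_train` and `y_train` are hypothetical ROI feature arrays and labels.

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.neural_network import MLPClassifier

def dissimilarity_representation(X, prototypes):
    """Describe each sample by its distances to a set of training prototypes."""
    return pairwise_distances(X, prototypes, metric='euclidean')

def train_on_dissimilarities(X_train, y_train, n_prototypes=50, seed=0):
    """Pick random prototypes and fit a small feed-forward net on distances to them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_train), size=min(n_prototypes, len(X_train)), replace=False)
    prototypes = X_train[idx]
    D = dissimilarity_representation(X_train, prototypes)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=seed)
    return clf.fit(D, y_train), prototypes
```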

  9. OVERVIEW OF MODERN RESEARCH OF LANDSLIDES ACCORDING TO AERIAL AND SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    K. M. Lyapishev

    2015-01-01

    Full Text Available This article is an overview of research on landslides using remote sensing methods such as aerial photography, satellite imagery, radar interferometry, and their combination with the use of GIS technology. Modern methods of investigating landslides are very diverse. The authors propose different approaches to the identification, classification and monitoring of landslides. Data analysis techniques can help in creating a more sophisticated approach to the analysis of landslides.

  10. Research on active imaging information transmission technology of satellite borne quantum remote sensing

    Science.gov (United States)

    Bi, Siwen; Zhen, Ming; Yang, Song; Lin, Xuling; Wu, Zhiqiang

    2017-08-01

    According to the development and application needs of remote sensing science and technology, Prof. Siwen Bi proposed quantum remote sensing. Firstly, the paper gives a brief introduction to the background of quantum remote sensing, the research status and related research at home and abroad on the theory, information mechanism and imaging experiments of quantum remote sensing, and the production of a principle prototype. Then, the quantization of the pure remote sensing radiation field and the state function and squeezing effect of the quantum remote sensing radiation field are emphasized. It also describes the squeezing optical operator of the quantum light field in active imaging information transmission and imaging experiments, achieving 2-3 times higher resolution than that of coherent light detection imaging and completing the production of a quantum remote sensing imaging prototype. The application of quantum remote sensing technology can significantly improve both the signal-to-noise ratio of information transmission imaging and the spatial resolution of quantum remote sensing. On this basis, Prof. Bi proposed the technical solution of active imaging information transmission technology for satellite-borne quantum remote sensing, launched research on its system composition and operation principle and on quantum noiseless amplifying devices, providing solutions and a technical basis for implementing active imaging information technology for satellite-borne quantum remote sensing.

  11. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by a scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
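    The record names SIFT, RANSAC and OpenCV but not the matching parameters, so the following is only a sketch of the alignment step, assuming 8-bit grey-level crops of the overlapped rectangle and Lowe's ratio test; a RANSAC-estimated homography stands in for whatever transform the authors used, and the overlap computation from geo-coordinates and the weighted blending are omitted.

```python
import cv2
import numpy as np

def estimate_alignment(ref_gray: np.ndarray, mos_gray: np.ndarray) -> np.ndarray:
    """Estimate the transform that maps the mosaic crop onto the reference crop."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(mos_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to outlier matches
    return H
```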

  12. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to coarse resolution NOAA Advanced Very High Resolution Radiometer satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse resolution data for global studies.
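    A minimal sketch of constrained least-squares unmixing under the usual non-negativity and sum-to-one constraints, using SciPy's bounded solver with the sum-to-one condition enforced approximately through a heavily weighted extra equation. This is a generic formulation rather than the authors' exact solver, and `endmembers` is a hypothetical (n_bands x n_endmembers) matrix of pure-cover spectra.

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(reflectance: np.ndarray, endmembers: np.ndarray,
                sum_weight: float = 1e3) -> np.ndarray:
    """Constrained least-squares unmixing of one pixel into fractional abundances."""
    # Append a heavily weighted row of ones so the fractions approximately sum to one.
    A = np.vstack([endmembers, sum_weight * np.ones((1, endmembers.shape[1]))])
    b = np.append(reflectance, sum_weight)
    res = lsq_linear(A, b, bounds=(0.0, 1.0))       # non-negativity via bounds
    return res.x

def fraction_images(cube: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Apply per-pixel unmixing to a (rows, cols, bands) cube."""
    rows, cols, _ = cube.shape
    out = np.zeros((rows, cols, endmembers.shape[1]))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = unmix_pixel(cube[r, c], endmembers)
    return out
```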

  13. Accessory cardiac bronchus: Proposed imaging classification on multidetector CT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Min; Kim, Young Tong; Han, Jong Kyu; Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University College of Medicine, Cheonan Hospital, Cheonan (Korea, Republic of)

    2016-02-15

    To propose the classification of accessory cardiac bronchus (ACB) based on imaging using multidetector computed tomography (MDCT), and evaluate follow-up changes of ACB. This study included 58 patients diagnosed as ACB since 9 years, using MDCT. We analyzed the types, division locations and division directions of ACB, and also evaluated changes on follow-up. We identified two main types of ACB: blind-end (51.7%) and lobule (48.3%). The blind-end ACB was further classified into three subtypes: blunt (70%), pointy (23.3%) and saccular (6.7%). The lobule ACB was also further classified into three subtypes: complete (46.4%), incomplete (28.6%) and rudimentary (25%). Division location to the upper half bronchus intermedius (79.3%) and medial direction (60.3%) were the most common in all patients. The difference in division direction was statistically significant between the blind-end and lobule types (p = 0.019). Peribronchial soft tissue was found in five cases. One calcification case was identified in the lobule type. During follow-up, ACB had disappeared in two cases of the blind-end type and in one case of the rudimentary subtype. The proposed classification of ACB based on imaging, and the follow-up CT, helped us to understand the various imaging features of ACB.

  14. Classification in medical images using adaptive metric k-NN

    Science.gov (United States)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

    The performance of the k-nearest neighbor (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics such that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM), a normalized metric based on the variance of the data, an empirical metric based on the empirical covariance matrix of the unlabeled data, and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also leads to Principal Component Analysis (PCA) performed on it, which results in the subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for abdominal aorta calcification detection; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
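    As a loose illustration of the "empirical metric" idea, the sketch below plugs a Mahalanobis distance shaped by the empirical covariance of unlabeled data into scikit-learn's k-NN; the BIM-based, normalized, and optimized metrics of the paper would only differ in the matrix passed as `VI`, and the value of k is a placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_with_empirical_metric(X_train, y_train, X_unlabeled, k: int = 5):
    """k-NN whose distance metric is shaped by the empirical covariance of the data."""
    cov = np.cov(X_unlabeled, rowvar=False)
    VI = np.linalg.pinv(cov)                      # inverse covariance matrix
    clf = KNeighborsClassifier(n_neighbors=k, metric='mahalanobis',
                               metric_params={'VI': VI}, algorithm='brute')
    return clf.fit(X_train, y_train)
```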

  15. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE

  16. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang

    2018-02-01

    Full Text Available Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning, and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, and they mainly concentrate on classification techniques, but ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve the issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects considering their features and spatial patterns. In experiments, we applied this method to three Chinese cities (Beijing, Putian, and Zhuhai) and generated detailed functional-zone maps with diverse functional categories. These experimental results indicate that our method effectively delineates urban functional zones with VHR imagery; that different categories of functional zones are extracted by using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis, and can provide valuable data for city planners.

  17. Classification of breast cancer histology images using Convolutional Neural Networks.

    Directory of Open Access Journals (Sweden)

    Teresa Araújo

    Full Text Available Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of the feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class and 83.3% for the carcinoma/non-carcinoma problem are achieved. The sensitivity of our method for cancer cases is 95.6%.

  18. Feature Importance for Human Epithelial (HEp-2) Cell Image Classification

    Directory of Open Access Journals (Sweden)

    Vibha Gupta

    2018-02-01

    Full Text Available Indirect Immuno-Fluorescence (IIF) microscopy imaging of human epithelial (HEp-2) cells is a popular method for diagnosing autoimmune diseases. Considering the large data volumes, computer-aided diagnosis (CAD) systems based on image-based classification can help in terms of time, effort, and reliability of diagnosis. Such approaches are based on extracting representative features from the images. This work explores the selection of the most distinctive features for HEp-2 cell images using various feature selection (FS) methods. Considering that there is no single universally optimal feature selection technique, we also propose a hybridization of one class of FS methods (filter methods). Furthermore, the notion of variable importance for ranking features, provided by another type of approach (embedded methods such as Random Forest and Random Uniform Forest), is exploited to select a good subset of features from a large set, such that the addition of new features does not increase classification accuracy. In this work, we have also carefully designed class-specific features to capture the morphological visual traits of the cell patterns. We perform various experiments and discussions to demonstrate the effectiveness of FS methods along with the proposed and a standard feature set. We achieve state-of-the-art performance even with a small number of features, obtained after feature selection.
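    A minimal sketch of the embedded-method ranking only, using scikit-learn's Random Forest impurity-based variable importance; the hybridized filter methods and the Random Uniform Forest of the paper are not reproduced, and the 95% cumulative-importance cut-off used to pick a subset is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, n_estimators: int = 500, seed: int = 0):
    """Rank features by Random Forest impurity-based variable importance."""
    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    forest.fit(X, y)
    order = np.argsort(forest.feature_importances_)[::-1]
    return order, forest.feature_importances_[order]

def select_top(order, importances, keep: float = 0.95):
    """Keep the smallest ranked prefix preserving `keep` of the cumulative importance."""
    cum = np.cumsum(importances)
    n = int(np.searchsorted(cum, keep * cum[-1]) + 1)
    return order[:n]
```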

  19. Distance-Based Image Classification: Generalizing to New Classes at Near Zero Cost

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.

    2013-01-01

    We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new

  20. Tree mortality based fire severity classification for forest inventories: A Pacific Northwest national forests example

    Science.gov (United States)

    Thomas R. Whittier; Andrew N. Gray

    2016-01-01

    Determining how the frequency, severity, and extent of forest fires are changing in response to changes in management and climate is a key concern in many regions where fire is an important natural disturbance. In the USA the only national-scale fire severity classification uses satellite image change detection to produce maps for large (>400 ha) fires, and is...

  1. Classification of Diabetic Macular Edema and Its Stages Using Color Fundus Image

    Institute of Scientific and Technical Information of China (English)

    Muhammad Zubair; Shoab A. Khan; Ubaid Ullah Yasin

    2014-01-01

    Diabetic macular edema (DME) is a retinal thickening involving the center of the macula. It is one of the serious eye diseases which affect central vision and can lead to partial or even complete visual loss. The only cure is timely diagnosis, prevention, and treatment of the disease. This paper presents an automated system for the diagnosis and classification of DME using color fundus images. In the proposed technique, first the optic disc is removed by applying some preprocessing steps. The preprocessed image is then passed through a classifier for segmentation of the image to detect exudates. The classifier uses a dynamic thresholding technique based on some input parameters of the image. The stage classification is done on the basis of the Early Treatment Diabetic Retinopathy Study (ETDRS) criteria to assess the severity of the disease. The proposed technique gives a sensitivity, specificity, and accuracy of 98.27%, 96.58%, and 96.54%, respectively, on a publicly available database.

  2. Using Satellite Images for Wireless Network Planing in Baku City

    Science.gov (United States)

    Gojamanov, M.; Ismayilov, J.

    2013-04-01

    It is a well known fact that information-telecommunication and space research technologies are fields that benefit greatly from the achievements of scientific and technical progress. In many cases, these areas, supporting each other, have improved the conditions for their further development. For instance, the intensive development in the field of mobile communication has driven the rapid progress of space research technologies and vice versa. Today it is impossible to solve one of the most important tasks of mobile communication, radio frequency planning, without 2D and 3D digital maps. The compilation of such maps is much more efficient by means of space images, because the quality of space images has improved considerably, especially in terms of both spectral and spatial resolution. It has become possible to use 8-band images with a spatial resolution of 50 cm. At present, in relation to the 3G function of mobile communications, one of the main issues facing mobile operator companies is high-precision 3D digital maps. It should be noted that the number of mobile phone users in the Republic of Azerbaijan is ahead of the other Commonwealth of Independent States countries. Of course, using aerial images for 3D mapping would be optimal. However, owing to a number of technical and administrative problems, aerial photography cannot be used. Therefore, the experience of many countries shows that it is more effective to use space images with higher resolution for these issues. Since mobile communication within the city of Baku has included the 3G function, stereo images with a spatial resolution of 50 cm were ordered for the 150 sq. km territory occupying the central part of the city in order to compile 3D digital maps. The images collected from the WorldView-2 satellite are 4-band bundle (Pan+MS1) stereo images. Such imagery enables the automatic classification of some required

  3. An assessment of commonly employed satellite-based remote sensors for mapping mangrove species in Mexico using an NDVI-based classification scheme.

    Science.gov (United States)

    Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F

    2017-12-14

    Optimizing the classification accuracy of a mangrove forest is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of options from which to choose in regard to satellite sensors, which has led to substantially different estimations of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and Sentinel-2 classifications (10 m per pixel) were very similar, with accuracies of 75 and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. When considering such sensors, higher spatial resolution can be particularly important in mapping small mangrove islands that often occur in degraded mangrove systems.
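
    The NDVI computation that underlies such a classification scheme can be sketched in a few lines of Python. The snippet below assumes the red and near-infrared bands can be read with rasterio from a hypothetical scene file; the band indices and class thresholds are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np
    import rasterio

    # Open a multispectral scene (file name and band order are assumptions).
    with rasterio.open("scene.tif") as src:
        red = src.read(3).astype("float64")   # band indices depend on the sensor
        nir = src.read(4).astype("float64")

    # NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
    ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), 0.0)

    # Illustrative thresholds for an NDVI-based scheme: dead/degraded/healthy/dense.
    classes = np.digitize(ndvi, bins=[0.2, 0.4, 0.6])
    print(np.bincount(classes.ravel(), minlength=4))
    ```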

  4. Automatic Blocked Roads Assessment after Earthquake Using High Resolution Satellite Imagery

    Science.gov (United States)

    Rastiveis, H.; Hosseini-Zirdoo, E.; Eslamizade, F.

    2015-12-01

    In 2010, an earthquake struck the city of Port-au-Prince, Haiti, killing over 300,000 people. According to historical data, such an earthquake had not previously occurred in the area. The unpredictability of earthquakes necessitates comprehensive mitigation efforts to minimize deaths and injuries. Blocked roads, caused by the debris of destroyed buildings, may increase the difficulty of rescue activities. In this case, a damage map that specifies blocked and unblocked roads can be decidedly helpful for a rescue team. In this paper, a novel method for producing such a destruction map, based on a pre-event vector map and high resolution WorldView-2 satellite images acquired after the earthquake, is presented. For this purpose, image quality improvement and co-registration of image and map are first performed in a preprocessing step. Then, after extraction of texture descriptors from the post-event image and SVM classification, different terrain types are detected in the image. Finally, considering the classification results, specifically the objects belonging to the "debris" class, damage analysis is performed to estimate the damage percentage. In this analysis, the shape of the objects in the "debris" class is considered in addition to their area. The aforementioned process is performed on all the roads in the road layer. In this research, a pre-event digital vector map and a post-event high resolution satellite image of the city of Port-au-Prince, Haiti's capital, acquired by WorldView-2, were used to evaluate the proposed method. The algorithm was executed on a 1200×800 m2 subset of the data set, including 60 roads, and all the roads were labelled correctly. Visual examination confirmed the ability of this method to assess damage to urban road networks after an earthquake.

  5. Deep machine learning based Image classification in hard disk drive manufacturing (Conference Presentation)

    Science.gov (United States)

    Rana, Narender; Chien, Chester

    2018-03-01

    A key sensor element in a Hard Disk Drive (HDD) is the read-write head device. The device has a complex 3D shape, and its fabrication requires over a thousand process steps, many of them being various types of image inspection and critical dimension (CD) metrology steps. In order to achieve a high yield of devices across a wafer, very tight inspection and metrology specifications are implemented. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image impacts the CD measurements. Metrology noise needs to be minimized in CD metrology to get a better estimate of the process-related variations for implementing robust process controls. Specialized tools are available for defect inspection and review, allowing classification and statistics; however, due to the unavailability of such advanced tools or for other reasons, images often need to be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools differing in software and purpose. There have been cases where a significant number of CD-SEM images are blurred or have some artefact, and there is a need for image inspection along with the CD measurement. The tool may not report a practical metric highlighting the quality of the image, and not filtering CDs measured from these blurred images adds metrology noise to the CD measurement. An image classifier can be helpful here for filtering such data. This paper presents the use of artificial intelligence in classifying the SEM images. Deep machine learning is used to train a neural network which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and the contingency table of classification results from the trained deep neural network. A prediction accuracy of 94.9% was achieved in the first model. The paper covers other such applications of the deep neural

  6. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    A Wireless Integrated Information Network (WMN) consists of integrated information sources that can acquire data from their surroundings, such as images and voice. Transmitting this information requires substantial resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  7. Correlation of bone quality in radiographic images with clinical bone quality classification

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Woo; Huh, Kyung Hoe; Kim, Jeong Hwa; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [Seoul National University, Seoul (Korea, Republic of); Park, Kwan Soo [Inje University, Seoul (Korea, Republic of)

    2006-03-15

    To investigate the validity of digital image processing of panoramic radiographs in estimating bone quality before endosseous dental implant installation, bone quality in radiographic images was correlated with a clinical bone quality classification. An experienced surgeon assessed and classified the bone quality of the implant sites by tactile sensation at the time of implant placement. Eighteen morphologic features of the trabecular pattern, including fractal dimension, were examined at each anatomical site on panoramic radiographs. In total, the bone quality of 67 implant sites in 42 patients was evaluated. Pearson correlation analysis showed that three morphologic parameters had a weak linear negative correlation with the clinical bone quality classification, with correlation coefficients of -0.276, -0.280, and -0.289, respectively (p<0.05), and three other morphologic parameters had a clear linear negative correlation, with correlation coefficients of -0.346, -0.488, and -0.343, respectively (p<0.05). Fractal dimension also correlated linearly with the clinical bone quality classification, with a significant correlation coefficient of -0.506 (p<0.05). This study suggests that fractal and morphometric analysis of digital panoramic radiographs can be used to evaluate bone quality at implant recipient sites.

  8. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    Science.gov (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel carries the full reflectance spectrum. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks onto different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them onto different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
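
    The three-stage pipeline (PCA one-band representation, pixel-wise SVM, KNN-based spatial-spectral refinement) can be sketched with scikit-learn as below. This is not the RVC-CAL dataflow implementation; the synthetic cube, the random training labels, and the use of KNeighborsClassifier over (row, column, first principal component) as the filtering stage are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    H, W, B = 64, 64, 100                      # synthetic hyperspectral cube (assumption)
    cube = rng.normal(size=(H, W, B))
    pixels = cube.reshape(-1, B)

    # Stage 1: one-band representation of the image via PCA.
    pc1 = PCA(n_components=1).fit_transform(pixels).ravel()

    # Stage 2: pixel-wise SVM trained on a few labelled pixels (labels are synthetic here).
    train_idx = rng.choice(pixels.shape[0], size=500, replace=False)
    train_labels = rng.integers(0, 4, size=train_idx.size)
    svm_labels = SVC(kernel="rbf", gamma="scale").fit(pixels[train_idx], train_labels).predict(pixels)

    # Stage 3: KNN filtering mixing spatial coordinates with the one-band value.
    yy, xx = np.mgrid[0:H, 0:W]
    feats = np.column_stack([yy.ravel(), xx.ravel(), pc1])
    refined = KNeighborsClassifier(n_neighbors=9).fit(feats, svm_labels).predict(feats)
    print(refined.reshape(H, W).shape)
    ```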

  9. Use of high resolution satellite images for monitoring of earthquakes and volcano activity.

    Science.gov (United States)

    Arellano-Baeza, Alonso A.

    Our studies have shown that the strain energy accumulation deep in the Earth's crust that precedes a strong earthquake can be detected by applying a lineament extraction technique to high-resolution multispectral satellite images. A lineament is a straight or somewhat curved feature in a satellite image, which can be detected by special image processing based on directional filtering and/or the Hough transform. We analyzed tens of earthquakes that occurred along the Pacific coast of South America with Richter magnitudes of about 4.5, using ASTER/TERRA multispectral satellite images for the detection and analysis of changes in the system of lineaments prior to a strong earthquake. All events were located in regions with small seasonal variations and limited vegetation, to facilitate the tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments changed significantly approximately one month before an earthquake, and a few months later the system returned to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that in the case of earthquakes the main cause is compression and accumulation of strain in the Earth's crust due to the subduction of tectonic plates, whereas for volcanoes we deal with the inflation of the volcano edifice due to increased pressure and magma intrusion. The results obtained made it possible to include this research as part of the scientific program of the Chilean remote sensing satellite mission to be launched in 2010.
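
    Lineament extraction of the kind described above can be prototyped with edge detection and a probabilistic Hough transform in OpenCV. The sketch below is not the authors' processing chain; the input band, file name, and Canny/Hough thresholds are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    # A single band of a multispectral scene, loaded as an 8-bit grayscale image (assumption).
    band = cv2.imread("aster_band.png", cv2.IMREAD_GRAYSCALE)

    # Edge detection followed by a probabilistic Hough transform.
    edges = cv2.Canny(band, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=5)

    # Summarize the number and orientation of detected lineament candidates.
    if lines is not None:
        angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in lines[:, 0]]
        print(f"{len(angles)} lineaments, mean orientation {np.mean(angles):.1f} deg")
    ```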

  10. Chagas disease study using satellite image processing: A Bolivian case

    Science.gov (United States)

    Vargas-Cuentas, Natalia I.; Roman-Gonzalez, Avid; Mantari, Alicia Alva; Muñoz, Luis Anthony Aucapuma

    2018-03-01

    Remote sensing is the technology that has enabled us to obtain information about the Earth's surface without directly contacting it. For this reason, the Bolivian state currently considers a list of interesting applications of remote sensing in the country, including the following: biodiversity and environment monitoring, mining and geology, epidemiology, agriculture, water resources, and land use planning. The use of satellite images has become a great tool for epidemiology because with this technological advance we can determine the environment in which transmission occurs, the distribution of the disease, and its evolution over time. In that context, one of the important public health diseases in Bolivia is Chagas disease, also known as South American trypanosomiasis. Chagas disease is transmitted by a blood-sucking bug known as the vinchuca; it causes serious long-term intestinal and heart problems and affects 33.4% of the Bolivian population. The disease mostly affects people of modest means, so the Bolivian state invests millions of dollars to acquire medicine and distribute it for free. For these reasons, the present research aims to analyze some areas of Bolivia using satellite images for an epidemiological study. The primary objective is to understand the environment in which transmission of the disease happens and the climatic conditions under which it occurs, to observe the behavior of the blood-sucking bug, and to identify the months in which outbreaks peak, the months in which the bug lays its eggs, and the weather conditions under which this happens. This information is contrasted with information extracted from the satellite images and with data from the Ministry of Health and the Institute of Meteorology in Bolivia. All these data will allow us to have a more integrated understanding of this disease and promote new possibilities to prevent and control it.

  11. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.

  12. Segmentation and classification of cell cycle phases in fluorescence imaging.

    Science.gov (United States)

    Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan

    2009-01-01

    Current chemical biology methods for studying spatiotemporal correlation between biochemical networks and cell cycle phase progression in live-cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including the progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes and abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high content microscopy imaging. We utilize surface shape properties of GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.

  13. Assessment of Sampling Approaches for Remote Sensing Image Classification in the Iranian Playa Margins

    Science.gov (United States)

    Kazem Alavipanah, Seyed

    There are several problems in soil salinity studies based upon remotely sensed data: 1) the spectral world is full of ambiguity and therefore soil reflectance cannot be attributed to a single soil property such as salinity; 2) soil surface conditions, as a function of time and space, are a complex phenomenon; 3) vegetation, with its dynamic biological nature, may create problems in the study of soil salinity. Given these problems, the first question that arises is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan, and an optimal approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes, and to overcome some problems encountered in within-field variation, the following attempts were made: 1) Principal Component Analysis (PCA) in order a) to determine the most important variables and b) to understand the Landsat satellite images and the most informative components; 2) photomorphic unit (PMU) consideration and interpretation; 3) study of salt accumulation and salt distribution in the soil profile; 4) use of several forms of field data, such as geologic, geomorphologic and soil information; 6) confirmation of field data and land cover types with farmers and the members of the team. The results led us to suitable approaches with high and acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing

  14. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high resolution remote sensing images, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical semantic framework of ground objects is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random field framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution GeoEye imagery is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.

  15. Early differential processing of material images: Evidence from ERP classification.

    Science.gov (United States)

    Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R

    2014-06-24

    Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained more and more interest over the years as its importance in natural viewing conditions has been ignored for a long time. In addition to analyzing standard ERPs, we conducted a single trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.

  16. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
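
    The handcrafted-feature baseline mentioned above (gray-level co-occurrence matrix statistics followed by a random forest) can be sketched with scikit-image and scikit-learn. The patches and labels below are synthetic stand-ins for whole-slide tiles; the chosen GLCM properties and forest size are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def glcm_features(patch):
        """Contrast/homogeneity/energy/correlation statistics from a GLCM of an 8-bit patch."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Synthetic 64x64 grayscale patches standing in for whole-slide tiles (assumption).
    patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
    labels = rng.integers(0, 2, size=40)          # e.g., cancer vs. non-cancer

    X = np.array([glcm_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```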

  17. The Application of Chinese High-Spatial Remote Sensing Satellite Image in Land Law Enforcement Information Extraction

    Science.gov (United States)

    Wang, N.; Yang, R.

    2018-04-01

    Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets, such as GF-1, GF-2 and ZY-3 images, with panchromatic (PAN) resolutions of 2 m, 1 m and 2.1 m and multispectral (MS) resolutions of 8 m, 4 m and 5.8 m respectively, have emerged in recent years. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes. Local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction using fused Chinese HR images. The procedure comprises: A. defining illegal land utilization spot types; B. data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015; C. batch processing: HR images were batch-preprocessed through an ENVI/IDL tool; D. illegal land utilization spot extraction by visual interpretation; E. obtaining attribute data with an ArcGIS Geoprocessor (GP) model; F. thematic mapping and surveying. By analysing the results for 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspected illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.

  18. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learn multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.

  19. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracy between each neural network and the radiologist were statistically significant. Deep convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
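
    The transfer-learning setup described above (an ImageNet-pretrained network with frozen convolutional layers serving as a fixed feature extractor and a retrained head for 11 categories) can be sketched in Keras as follows. The input size, head architecture, optimizer, and commented-out training call are illustrative assumptions, not the study's exact configuration.

    ```python
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    NUM_CLASSES = 11  # categories derived from the technologists' annotations

    # ImageNet-pretrained convolutional base, frozen to act as a fixed feature extractor.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
    ```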

  20. Automatic segmentation and disease classification using cardiac cine MR images

    NARCIS (Netherlands)

    Wolterink, Jelmer M.; Leiner, Tim; Viergever, Max A.; Išgum, Ivana

    2018-01-01

    Segmentation of the heart in cardiac cine MR is clinically used to quantify cardiac function. We propose a fully automatic method for segmentation and disease classification using cardiac cine MR images. A convolutional neural network (CNN) was designed to simultaneously segment the left ventricle

  1. Data mining and model adaptation for the land use and land cover classification of a Worldview 2 image

    Science.gov (United States)

    Nascimento, L. C.; Cruz, C. B. M.; Souza, E. M. F. R.

    2013-10-01

    Forest fragmentation studies have increased over the last three decades. Land use and land cover (LULC) maps, along with other remote sensing techniques, are important tools for this analysis. Object-oriented analysis classifies the image according to patterns such as texture, color, shape, and context. However, there are many attributes to be analyzed, and data mining tools helped us to learn about them and to choose the best ones. The aim of this paper is therefore to describe data mining techniques and the results for a heterogeneous area, the municipality of Silva Jardim, Rio de Janeiro, Brazil. The municipality has forest, urban areas, pastures, water bodies, agriculture, and also some shadows as objects to be represented. A WorldView-2 satellite image from 2010 was used, and the LULC classification was processed using the values that the data mining software provided according to the J48 method. Afterwards, this classification was analyzed and verified with a confusion matrix, making it possible to evaluate the accuracy (58.89%). The best results were for the classes "water" and "forest", which have more homogeneous reflectance. Because of that, the model was adapted in order to create a model for the most homogeneous classes. As a result, 2 new classes were created, some values and attributes were changed, and others were added. In the end, the accuracy was 89.33%. It is important to highlight that this is not a conclusive paper; there are still many steps to develop for highly heterogeneous surfaces.
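
    Scikit-learn's DecisionTreeClassifier with the entropy criterion can serve as a rough stand-in for the J48 (C4.5) decision tree used above. The per-object attribute table below is a synthetic, illustrative assumption; in practice the attributes would come from the object-oriented segmentation step.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix

    # Hypothetical per-object attribute table exported from the segmentation step.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "mean_nir": rng.uniform(0, 1, 300),
        "mean_red": rng.uniform(0, 1, 300),
        "ndvi": rng.uniform(-1, 1, 300),
        "texture_homogeneity": rng.uniform(0, 1, 300),
        "label": rng.choice(["forest", "pasture", "urban", "water"], 300),
    })

    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns="label"), df["label"], test_size=0.3, random_state=0)

    # The entropy criterion approximates the information-gain splitting of J48/C4.5.
    tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5, random_state=0)
    tree.fit(X_train, y_train)

    print("accuracy:", accuracy_score(y_test, tree.predict(X_test)))
    print(confusion_matrix(y_test, tree.predict(X_test)))
    ```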

  2. Image-based fall detection and classification of a user with a walking support system

    Science.gov (United States)

    Taghvaei, Sajjad; Kosuge, Kazuhiro

    2017-10-01

    The classification of visual human action is important in the development of systems that interact with humans. This study investigates an image-based classification of the human state while using a walking support system, to improve the safety and dependability of these systems. We categorize the possible human behavior while utilizing a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distribution and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for the state classification is the centroid position of the upper body, which is extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be adopted to detect any non-walking state. The second method implements HMMs to detect and recognize these states. We then measure and compare the performance of both methods. The classification results are employed to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can be adapted to a new user's motion pattern within 40 s, with a fall detection rate of 96.25% and a state classification rate of 81.0%. The proposed method can be applied to other abnormality detection/classification applications that employ depth image-sensing devices.

  3. Median Filter Noise Reduction of Image and Backpropagation Neural Network Model for Cervical Cancer Classification

    Science.gov (United States)

    Wutsqa, D. U.; Marwah, M.

    2017-06-01

    In this paper, we apply a spatial median filter to reduce the noise in cervical images produced by a colposcopy tool. A backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires feature extraction using the gray level co-occurrence matrix (GLCM) method to obtain image features that are used as inputs to the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.

  4. Monitoring mangrove forests after aquaculture abandonment using time series of very high spatial resolution satellite images: A case study from the Perancak estuary, Bali, Indonesia.

    Science.gov (United States)

    Proisy, Christophe; Viennois, Gaëlle; Sidik, Frida; Andayani, Ariani; Enright, James Anthony; Guitet, Stéphane; Gusmawati, Niken; Lemonnier, Hugues; Muthusankar, Gowrappan; Olagoke, Adewole; Prosperi, Juliana; Rahmania, Rinny; Ricout, Anaïs; Soulard, Benoit; Suhardjono

    2018-06-01

    Revegetation of abandoned aquaculture regions should be a priority for any integrated coastal zone management (ICZM). This paper examines the potential of a unique time series of 20 very high spatial resolution (VHSR) optical satellite images acquired for mapping trends in the evolution of mangrove forests from 2001 to 2015 in an estuary fragmented into aquaculture ponds. The evolution of mangrove extent was quantified through robust multitemporal analysis based on supervised image classification. Results indicated that mangroves are expanding inside and outside ponds and over pond dykes. However, the yearly expansion rate of vegetation cover varied greatly between replanted ponds. Ground truthing showed that only Rhizophora species had been planted, whereas natural mangroves consist of Avicennia and Sonneratia species. In addition, the dense Rhizophora plantations present very low regeneration capabilities compared with natural mangroves. Time series of VHSR images provide a comprehensive and intuitive level of information to support ICZM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

    The conversion of a document image to its electronic version is a very important problem in saving, searching and retrieval applications in office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of each image region (document or non-document) is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on wavelet transform and thresholding. Texture features such as correlation, homogeneity and entropy, extracted from the co-occurrence matrix, together with two new features based on the wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database consisting of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm achieves an accuracy of 97.5% in classifying the regions.

  6. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    Science.gov (United States)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
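
    The mutual information at the heart of such registration can be computed from a joint histogram in a few lines of NumPy. The sketch below ties the bin width to an assumed noise estimate as a rough analogue of the shot-noise-proportional binning described above; the synthetic bands and the noise figure are illustrative assumptions.

    ```python
    import numpy as np

    def mutual_information(a, b, bin_width):
        """Mutual information (in nats) of two equally shaped images via a joint histogram."""
        bins_a = np.arange(a.min(), a.max() + bin_width, bin_width)
        bins_b = np.arange(b.min(), b.max() + bin_width, bin_width)
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    vnir = rng.normal(1000, 50, size=(200, 200))            # synthetic VNIR band (assumption)
    swir = vnir * 0.8 + rng.normal(0, 20, size=vnir.shape)  # correlated synthetic SWIR band

    noise_sigma = 20.0          # stand-in for an instrument shot-noise estimate
    print("MI:", mutual_information(vnir, swir, bin_width=noise_sigma))
    ```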

  7. Towards an efficient and robust foot classification from pedobarographic images

    OpenAIRE

    Oliveira, Francisco; Sousa, Andreia S. P.; Santos, Rubim; Tavares, João Manuel

    2012-01-01

    The attached document is the post-print version (the version corrected by the editor). This paper presents a new computational framework for automatic foot classification from digital plantar pressure images. It classifies the foot as left or right and simultaneously calculates two well-known footprint indices: Cavanagh's arch index and the modified arch index. The accuracy of the framework was evaluated using a set of plantar pressure images from two common pedobarographic devices. The...

  8. Malignant fatty tumors: classification, clinical course, imaging appearance and treatment

    International Nuclear Information System (INIS)

    Peterson, J.J.; Kransdorf, M.J.; Bancroft, L.W.; O'Connor, M.I.

    2003-01-01

    Liposarcoma is a relatively common soft tissue malignancy with a wide spectrum of clinical presentations and imaging appearances. Several subtypes are described, ranging from lesions nearly entirely composed of mature adipose tissue, to tumors with very sparse adipose elements. The imaging appearance of these fatty masses is frequently sufficiently characteristic to allow a specific diagnosis, while in other cases, although a specific diagnosis is not achievable, a meaningful limited differential diagnosis can be established. The purpose of this paper is to review the spectrum of malignant fatty tumors, highlighting the current classification system, clinical presentation and behavior, treatment and spectrum of imaging appearances. The imaging review will emphasize CT scanning and MR imaging, and will stress differentiating radiologic features. (orig.)

  9. Benign fatty tumors: classification, clinical course, imaging appearance, and treatment

    International Nuclear Information System (INIS)

    Bancroft, Laura W.; Kransdorf, Mark J.; Peterson, Jeffrey J.; O'Connor, Mary I.

    2006-01-01

    Lipoma is the most common soft-tissue tumor, with a wide spectrum of clinical presentations and imaging appearances. Several subtypes are described, ranging from lesions entirely composed of mature adipose tissue to tumors intimately associated with nonadipose tissue, to those composed of brown fat. The imaging appearance of these fatty masses is frequently sufficiently characteristic to allow a specific diagnosis. However, in other cases, although a specific diagnosis is not achievable, a meaningful limited differential diagnosis can be established. The purpose of this manuscript is to review the spectrum of benign fatty tumors highlighting the current classification system, clinical presentation and behavior, spectrum of imaging appearances, and treatment. The imaging review emphasizes computed tomography (CT) scanning and magnetic resonance (MR) imaging, differentiating radiologic features. (orig.)

  10. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we deal with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer-vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), was carried out with respect to their ability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. The classification results show good capability of the HOG feature extraction technique for non-destructive quality inspection, with an appreciably low false alarm rate compared to other techniques. Thereby, a robust and optimal image-feature-based neural network classification model is proposed for non-invasive, automatic fault monitoring supporting financially and commercially competitive industrial growth.

  11. Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation

    Directory of Open Access Journals (Sweden)

    Guanzhou Chen

    2018-05-01

    Scene classification, which aims to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in the remote sensing image analysis field. Deep-learning-model-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-level methods are computationally expensive and time-consuming. Consequently, in this paper we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of the large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: the AID dataset, UCMerced dataset, NWPU-RESISC dataset, and EuroSAT dataset. Results show that our proposed training method was effective and increased overall accuracy (3% in the AID experiments, 5% in the UCMerced experiments, 1% in the NWPU-RESISC and EuroSAT experiments) for small and shallow models. We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower spatial resolution images, numerous categories, as well as fewer training samples.
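
    The distillation objective described above (high-temperature soft targets from the teacher blended with the usual hard-label loss) can be sketched in PyTorch. The temperature, loss weighting, and random logits standing in for network outputs are illustrative assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Blend softened-output KL divergence with standard cross-entropy."""
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        soft_student = F.log_softmax(student_logits / T, dim=1)
        kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Toy usage with random logits standing in for teacher/student network outputs.
    torch.manual_seed(0)
    batch, num_classes = 8, 10
    teacher_logits = torch.randn(batch, num_classes)
    student_logits = torch.randn(batch, num_classes, requires_grad=True)
    labels = torch.randint(0, num_classes, (batch,))

    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print("distillation loss:", float(loss))
    ```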

  12. Evaluation of classification method of lung lobe for multi-slice CT images

    International Nuclear Information System (INIS)

    Sakurai, Kousuke; Matsuhiro, Mikio; Saita, Shinsuke

    2010-01-01

    Recently, due to the introduction of multi-slice CT, high resolution 3D CT images can be obtained in a short time. The temporal and spatial resolutions are high, so highly accurate 3D image analysis is possible. Developing a structural analysis of the lung is needed as a fundamental technology for early detection of disease. Separating the lung into lung lobes may provide important information for the analysis, diagnosis and treatment of lung diseases. Therefore, in this report we apply a classification algorithm that uses anatomical information on the bronchi, the pulmonary veins and the interlobar fissures to abnormal cases, and we evaluate the classification. (author)

  13. MRI Brain Images Healthy and Pathological Tissues Classification with the Aid of Improved Particle Swarm Optimization and Neural Network

    Science.gov (United States)

    Sheejakumari, V.; Sankara Gomathi, B.

    2015-01-01

    The advantages of magnetic resonance imaging (MRI) over other diagnostic imaging modalities are its higher spatial resolution and its better discrimination of soft tissue. In a previous tissue classification method, healthy and pathological tissues were classified from MRI brain images using HGANN. However, that method performs inadequately in terms of sensitivity and accuracy. To avoid these drawbacks, a new classification method using an improved particle swarm optimization (IPSO) technique is proposed in this paper to classify healthy and pathological tissues from the given MRI images. Our proposed classification method includes the same four stages, namely, tissue segmentation, feature extraction, heuristic feature selection, and tissue classification. The method is implemented and the results are analyzed in terms of various statistical performance measures. The results show the effectiveness of the proposed classification method in classifying the tissues and the improvement achieved in sensitivity and accuracy measures. Furthermore, the performance of the proposed technique is evaluated by comparing it with other segmentation methods. PMID:25977706

  14. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes

    Directory of Open Access Journals (Sweden)

    Dilip K. Prasad

    2016-03-01

    We propose a method for classifying radiometric ocean color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but on classifying the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data is compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for the medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on board field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% on the other dataset for all satellite sensors. The potential of applying the algorithm to non-satellite, non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results compare the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data.
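
    The white-balancing and lookup-table matching idea can be sketched as follows: ocean-pixel spectra are divided by a bright-pixel (cloud) spectrum and then matched to the nearest pre-calibrated class spectrum. The spectra, the number of bands, and the Euclidean matching rule are illustrative assumptions rather than the paper's calibrated table.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands = 12

    # Pre-calibrated lookup table: one normalized reference spectrum per spectral class.
    lookup = {"class_A": rng.uniform(0.1, 1.0, n_bands),
              "class_B": rng.uniform(0.1, 1.0, n_bands),
              "class_C": rng.uniform(0.1, 1.0, n_bands)}
    names = list(lookup)
    table = np.stack([lookup[k] / np.linalg.norm(lookup[k]) for k in names])

    # Measured radiances: a bright (cloud) pixel and a set of ocean pixels (synthetic).
    bright = rng.uniform(5.0, 10.0, n_bands)
    ocean = rng.uniform(0.5, 3.0, (100, n_bands))

    # White-balance by the bright-pixel spectrum, normalize, and match to the table.
    balanced = ocean / bright
    balanced /= np.linalg.norm(balanced, axis=1, keepdims=True)
    dists = np.linalg.norm(balanced[:, None, :] - table[None, :, :], axis=2)
    predicted = [names[i] for i in dists.argmin(axis=1)]
    print(predicted[:10])
    ```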

  15. Three-dimensional information extraction from GaoFen-1 satellite images for landslide monitoring

    Science.gov (United States)

    Wang, Shixin; Yang, Baolin; Zhou, Yi; Wang, Futao; Zhang, Rui; Zhao, Qing

    2018-05-01

    To use GaoFen-1 (GF-1) satellite images more efficiently for landslide emergency monitoring, a Digital Surface Model (DSM) can be generated from GF-1 across-track stereo image pairs to build a terrain dataset. This study proposes a landslide 3D information extraction method based on the terrain changes of slope objects. The slope objects are merged from segmented image objects that have similar aspects, and the terrain changes are calculated from the post-disaster Digital Elevation Model (DEM) from GF-1 and the pre-disaster DEM from GDEM V2. A high mountain landslide that occurred in Wenchuan County, Sichuan Province, is used to conduct a 3D information extraction test. The extracted total area of the landslide is 22.58 ha; the displaced earth volume is 652,100 m3; and the average sliding direction is 263.83°. Their accuracies are 0.89, 0.87 and 0.95, respectively. Thus, the proposed method expands the application of GF-1 satellite images to the field of landslide emergency monitoring.
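
    The terrain-change computation behind such volume estimates reduces to differencing a pre- and post-event DEM over a landslide mask, as in the sketch below. The synthetic DEMs, cell size, and mask are illustrative assumptions standing in for the GDEM V2 and GF-1 surfaces.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cell_size = 10.0                      # DEM resolution in metres (assumption)

    # Synthetic pre- and post-event elevation models and a landslide mask.
    pre_dem = rng.normal(2500.0, 5.0, size=(300, 300))
    post_dem = pre_dem.copy()
    mask = np.zeros_like(pre_dem, dtype=bool)
    mask[100:150, 120:200] = True
    post_dem[mask] -= 8.0                 # simulated loss of material in the source area

    dh = post_dem - pre_dem               # elevation change per cell
    area_ha = mask.sum() * cell_size**2 / 10_000.0
    volume_m3 = float(np.abs(dh[mask]).sum() * cell_size**2)

    print(f"landslide area: {area_ha:.2f} ha, displaced volume: {volume_m3:.0f} m^3")
    ```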

  16. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    Science.gov (United States)

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  17. Deep learning for the detection of barchan dunes in satellite images

    Science.gov (United States)

    Azzaoui, A. M.; Adnani, M.; Elbelrhiti, H.; Chaouki, B. E. K.; Masmoudi, L.

    2017-12-01

    Barchan dunes are known to be the fastest moving sand dunes in deserts, as they form under unidirectional winds and limited sand supply over a firm coherent basement (Elbelrhiti and Hargitai, 2015). They have been studied in the context of natural hazard monitoring, as they can threaten human activities and infrastructure, and as a natural phenomenon occurring on other planetary bodies such as Mars or Venus (Bourke et al., 2010). Our region of interest was located in a desert region in the south of Morocco, in a barchan dune corridor next to the town of Tarfaya. This region, which is part of the Sahara desert, contains thousands of barchans, which limits the number of dunes that can be studied during field missions. Therefore, we chose to monitor barchan dunes with satellite imagery, which can be seen as a complementary approach to field missions. We collected data from the Sentinel platform (https://scihub.copernicus.eu/dhus/) and used a machine learning method as the basis for detecting barchan dune positions in the satellite image. We trained a deep learning model on a mid-sized dataset containing blocks representing images of barchan dunes and images of other desert features, which we collected by cropping and annotating the source image. During testing, we browsed the satellite image with a sliding window that evaluated each block and produced a probability map. Finally, a threshold on the latter map exposed the locations of barchan dunes. We used a subsample of the data to train the model and gradually increased the size of the training set to get finer results and avoid overfitting. The positions of barchan dunes were successfully detected, and deep learning proved an effective method for this application. Sentinel-2 images were chosen for their availability and good temporal resolution, which will allow the tracking of barchan dunes in future work. While Sentinel images had sufficient spatial resolution for the
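
    The sliding-window inference that produces the probability map can be sketched as below, assuming a trained Keras binary classifier with a 64 x 64 single-band input. The model path, window size, stride, and detection threshold are illustrative assumptions.

    ```python
    import numpy as np
    import tensorflow as tf

    def probability_map(image, model, win=64, stride=32):
        """Slide a window over a single-band image and record the model's dune probability."""
        h, w = image.shape
        rows = (h - win) // stride + 1
        cols = (w - win) // stride + 1
        prob = np.zeros((rows, cols), dtype=np.float32)
        for i in range(rows):
            for j in range(cols):
                block = image[i * stride:i * stride + win, j * stride:j * stride + win]
                x = block[np.newaxis, :, :, np.newaxis].astype(np.float32)
                prob[i, j] = float(model.predict(x, verbose=0)[0, 0])
        return prob

    # Hypothetical trained binary classifier and a synthetic single-band image.
    model = tf.keras.models.load_model("barchan_cnn.h5")    # placeholder path
    band = np.random.default_rng(0).uniform(0.0, 1.0, size=(512, 512))

    pmap = probability_map(band, model)
    detections = np.argwhere(pmap > 0.5)                    # thresholding exposes dune locations
    print(f"{len(detections)} candidate windows flagged as barchan dunes")
    ```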

  18. Wind Statistics Offshore based on Satellite Images

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Mouche, Alexis; Badger, Merete

    2009-01-01

    Ocean wind maps from satellites are routinely processed both at Risø DTU and CLS based on the European Space Agency Envisat ASAR data. At Risø the a priori wind direction is taken from the atmospheric model NOGAPS (Naval Operational Global Atmospheric Prediction System) provided by the U.S. Navy […] -based observations become available. At present, preliminary results are obtained using the routine methods. The first step in the process is to retrieve raw SAR data, calibrate the images and use the a priori wind direction as input to the geophysical model function. From this process the wind speed maps are produced. The wind maps are geo-referenced. The second process is the analysis of a series of geo-referenced SAR-based wind maps. Previous research has shown that a relatively large number of images are needed for achieving certain accuracies on mean wind speed, Weibull A and k (scale and shape parameters) […]
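
    The wind statistics mentioned above (mean wind speed and the Weibull A and k parameters) can be estimated by pooling the SAR-retrieved wind speeds from a series of geo-referenced wind maps and fitting a two-parameter Weibull distribution. The SciPy-based sketch below, run on synthetic data, illustrates the fit only; it is not the Risø/CLS processing chain.

        import numpy as np
        from scipy import stats

        def weibull_wind_statistics(wind_speeds):
            """Fit a 2-parameter Weibull distribution (location fixed at 0) to a pooled
            sample of SAR-retrieved wind speeds in m/s."""
            v = np.asarray(wind_speeds, dtype=float)
            v = v[v > 0]                                   # drop invalid retrievals
            k, _, A = stats.weibull_min.fit(v, floc=0)     # k: shape, A: scale (m/s)
            return {"mean_wind_speed": v.mean(), "Weibull_A": A, "Weibull_k": k}

        # Synthetic offshore-like sample with A of about 9 m/s and k of about 2.2:
        rng = np.random.default_rng(1)
        sample = 9.0 * rng.weibull(2.2, size=5000)
        print(weibull_wind_statistics(sample))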

  19. A novel underwater dam crack detection and classification approach based on sonar images.

    Science.gov (United States)

    Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min

    2017-01-01

    Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments.
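
    One step of the pipeline above, obtaining the crack curve from connected fragments with a minimum spanning tree, can be illustrated with SciPy: the pairwise distances between fragment centroids define a graph whose minimum spanning tree links the fragments into a curve. The centroid representation and the toy data are assumptions for illustration; the paper's tensor voting and evidence-theory stages are not reproduced here.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.sparse.csgraph import minimum_spanning_tree

        def crack_curve_edges(centroids):
            """Link crack-fragment centroids (n, 2) into a curve: return the edge list
            (i, j) of the minimum spanning tree over their pairwise distances."""
            dist = cdist(centroids, centroids)
            mst = minimum_spanning_tree(dist).toarray()
            return list(zip(*np.nonzero(mst)))

        # Toy fragments scattered along a diagonal crack:
        rng = np.random.default_rng(2)
        pts = np.stack([np.arange(20.0), np.arange(20.0) + rng.normal(0, 0.5, 20)], axis=1)
        print(crack_curve_edges(pts)[:5])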

  20. A novel underwater dam crack detection and classification approach based on sonar images.

    Directory of Open Access Journals (Sweden)

    Pengfei Shi

    Full Text Available Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments.

  1. Potential of Different Optical and SAR Data in Forest and Land Cover Classification to Support REDD+ MRV

    Directory of Open Access Journals (Sweden)

    Laura Sirro

    2018-06-01

    Full Text Available The applicability of optical and synthetic aperture radar (SAR data for land cover classification to support REDD+ (Reducing Emissions from Deforestation and Forest Degradation MRV (measuring, reporting and verification services was tested on a tropical to sub-tropical test site. The 100 km by 100 km test site was situated in the State of Chiapas in Mexico. Land cover classifications were computed using RapidEye and Landsat TM optical satellite images and ALOS PALSAR L-band and Envisat ASAR C-band images. Identical sample plot data from Kompsat-2 imagery of one-metre spatial resolution were used for the accuracy assessment. The overall accuracy for forest and non-forest classification varied between 95% for the RapidEye classification and 74% for the Envisat ASAR classification. For more detailed land cover classification, the accuracies varied between 89% and 70%, respectively. A combination of Landsat TM and ALOS PALSAR data sets provided only 1% improvement in the overall accuracy. The biases were small in most classifications, varying from practically zero for the Landsat TM based classification to a 7% overestimation of forest area in the Envisat ASAR classification. Considering the pros and cons of the data types, we recommend optical data of 10 m spatial resolution as the primary data source for REDD MRV purposes. The results with L-band SAR data were nearly as accurate as the optical data but considering the present maturity of the imaging systems and image analysis methods, the L-band SAR is recommended as a secondary data source. The C-band SAR clearly has poorer potential than the L-band but it is applicable in stratification for a statistical sampling when other image types are unavailable.

  2. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    Science.gov (United States)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    The use of UAVs in photogrammetry to obtain coverage images and achieve the main objectives of photogrammetric mapping has boomed in recent years. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average flying height of 139.42 m, were used to classify urban features. Using the SURE software and the coverage images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was generated using an adaptive TIN filtering algorithm, and the nDSM was obtained as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, the co-occurrence matrix measures mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each of the RGB bands of the orthophoto. The classes used for the urban classification were buildings, trees and tall vegetation, grass and short vegetation, paved roads and impervious surfaces; the impervious-surface class includes pavement, cement, cars and roofs. For pixel-based classification and the selection of optimal features, a GASVM approach was applied at the pixel level. To achieve higher classification accuracy, the spectral, textural and shape information of the orthophoto was combined and a multi-scale segmentation method was used to assign each segment to a class. The results of the proposed urban-feature classification suggest that this method is suitable for classifying urban features from UAV images. The overall accuracy and kappa coefficient of the proposed method were 93.47% and 91.84%, respectively.
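
    The co-occurrence (GLCM) texture measures listed above can be computed per band with scikit-image, as in the sketch below. The band is assumed to be an 8-bit array, the distance and angle choices are illustrative, and the entropy measure is derived directly from the normalised co-occurrence matrix; the orthophoto variable in the usage comment is hypothetical.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(band, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
            """Grey-level co-occurrence texture features for one uint8 image band,
            averaged over the given pixel distances and directions."""
            glcm = graycomatrix(band, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            feats = {prop: graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "dissimilarity", "homogeneity",
                                  "ASM", "correlation")}   # ASM = angular second moment
            p = glcm.mean(axis=(2, 3))                      # averaged, normalised GLCM
            feats["entropy"] = float(-np.sum(p * np.log2(p + 1e-12)))
            return feats

        # feats_red = glcm_features(orthophoto[:, :, 0])    # hypothetical orthophoto array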

  3. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    Science.gov (United States)

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies but have been criticized for a computational complexity that grows linearly with the number of classes. Nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
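
    A plain (non-discriminative) hierarchical K-means tree, the data structure the D-HKTree builds on, can be sketched as below: feature vectors are partitioned recursively with K-means, each leaf stores a label histogram, and a query descends to the nearest centroid at every level. The branching factor, depth and leaf rule are illustrative defaults; the discriminative components of the D-HKTree are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_hk_tree(X, y, n_classes, branching=4, max_depth=3, min_size=20):
            """Recursive hierarchical K-means tree; leaves keep class histograms (y is int)."""
            node = {"hist": np.bincount(y, minlength=n_classes)}
            if max_depth == 0 or len(X) < max(min_size, branching):
                return node
            km = KMeans(n_clusters=branching, n_init=10, random_state=0).fit(X)
            node["centroids"] = km.cluster_centers_
            node["children"] = [build_hk_tree(X[km.labels_ == b], y[km.labels_ == b],
                                              n_classes, branching, max_depth - 1, min_size)
                                for b in range(branching)]
            return node

        def hk_predict(tree, x):
            """Descend to the nearest centroid at each level; return the majority label."""
            node = tree
            while "children" in node:
                b = int(np.argmin(np.linalg.norm(node["centroids"] - x, axis=1)))
                node = node["children"][b]
            return int(np.argmax(node["hist"]))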

  4. Agile convolutional neural network for pulmonary nodule classification using CT images.

    Science.gov (United States)

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
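
    A small LeNet-style network trained with the hyperparameters reported above (learning rate 0.005, batch size 32, dropout, Gaussian weight initialisation) might look like the PyTorch sketch below. The exact layer sizes, input patch size and optimiser are assumptions, not the authors' Agile CNN.

        import torch
        import torch.nn as nn

        class SmallNoduleCNN(nn.Module):
            """LeNet-like CNN with AlexNet-style choices (ReLU, dropout) for 1-channel
            64x64 nodule patches and a benign/malignant output."""
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2))
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.Dropout(0.5),
                    nn.Linear(32 * 13 * 13, 128), nn.ReLU(), nn.Linear(128, n_classes))
                for m in self.modules():                    # Gaussian initialisation
                    if isinstance(m, (nn.Conv2d, nn.Linear)):
                        nn.init.normal_(m.weight, mean=0.0, std=0.01)
                        nn.init.zeros_(m.bias)

            def forward(self, x):
                return self.classifier(self.features(x))

        model = SmallNoduleCNN()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.005)   # batches of 32 assumed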

  5. Analysis and classification of commercial ham slice images using directional fractal dimension features.

    Science.gov (United States)

    Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen

    2009-02-01

    This paper presents a novel and non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF(0°;45°;90°;135°)) dimensions and a minimum distance classifier was adopted to perform the classification task. Also, the role of different colour spaces and the resolution level of the images on DF analysis were investigated. This approach was applied to 480 wafer thin ham slices from four types of hams (120 slices per type): i.e., pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitalized intensity images in greyscale, and R, G, B, L(∗), a(∗), b(∗), H, S, and V colour components for three image resolution levels (100%, 50%, and 25%). Simulation results show that in spite of the complexity and high variability in colour and texture appearance, the modelling of ham slice images with DF dimensions allows the capture of differentiating textural features between the four commercial ham types. Independent DF features entail better discrimination than that using the average of four directions. However, DF dimensions reveal a high sensitivity to colour channel, orientation and image resolution for the fractal analysis. The classification accuracy using six DF dimension features (a(90°)(∗),a(135°)(∗),H(0°),H(45°),S(0°),H(90°)) was 93.9% for training data and 82.2% for testing data.
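
    The minimum distance classifier used above assigns each slice to the ham type whose mean feature vector is closest; scikit-learn's NearestCentroid implements exactly this decision rule. The random feature values in the sketch stand in for the six selected directional fractal features and are not data from the paper.

        import numpy as np
        from sklearn.neighbors import NearestCentroid

        rng = np.random.default_rng(3)
        X_train = rng.normal(size=(480, 6))        # placeholder DF feature vectors per slice
        y_train = np.repeat(np.arange(4), 120)     # four ham types, 120 slices each

        clf = NearestCentroid()                    # minimum-distance-to-class-mean rule
        clf.fit(X_train, y_train)
        print(clf.predict(X_train[:5]))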

  6. Use of satellite images for the monitoring of water systems

    Science.gov (United States)

    Hillebrand, Gudrun; Winterscheid, Axel; Baschek, Björn; Wolf, Thomas

    2015-04-01

    Satellite images are a proven source of information for monitoring ecological indicators in coastal waters and inland river systems. This potential of remote sensing products was demonstrated by recent research projects (e.g. the EU-funded project Freshmon - www.freshmon.eu) and other activities by national institutions. Among indicators for water quality, a particular focus was set on the temporal and spatial dynamics of suspended particulate matter (SPM) and chlorophyll-a (Chl-a). The German Federal Institute of Hydrology (BfG) used the Weser and Elbe estuaries as test cases to compare in-situ measurements with results obtained from a temporal series of automatically generated maps of SPM distributions based on remote sensing data. Maps of SPM and Chl-a distributions in European inland rivers and alpine lakes were generated by the Freshmon project. Earth observation based products are a valuable source of additional data that can supplement in-situ monitoring well. For 2015, the BfG and the Institute for Lake Research of the State Institute for the Environment, Measurements and Nature Conservation of Baden-Wuerttemberg, Germany (LUBW) are in the process of implementing an operational service for monitoring SPM and Chl-a based on satellite images (Landsat 7 & 8, Sentinel 2 and, if required, other systems with higher spatial resolution, e.g. RapidEye). In this 2-year project, which is part of the European Copernicus Programme, the operational service will be set up for the inland rivers Rhine and Elbe, the North Sea estuaries of the Elbe, Weser and Ems, and furthermore Lake Constance and other lakes located within the Federal State of Baden-Wuerttemberg. In future, the service can be implemented for other rivers and lakes as well. A key feature of the project is a database that holds the stock of geo-referenced maps of SPM and Chl-a distributions. Via web-based portals (e.g. GGInA - geo-portal of the BfG; UIS - environmental information system of the

  7. Regional thermal patterns in Portugal using satellite images (NOAA AVHRR)

    Directory of Open Access Journals (Sweden)

    António Lopes

    1995-06-01

    Full Text Available In this paper two NOAA AVHRR diurnal images (channel 4) are used to determine the procedures required for a future operational analysis system in Portugal. Preprocessing and classification operations are described. A strong correlation between air and surface temperature is verified, and rather detailed air temperature patterns can be inferred.

  8. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in nearly unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed selection of unlabeled samples in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. The results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  9. Recognition and characterization of networks of water bodies in the Arctic ice-wedge polygonal tundra using high-resolution satellite imagery

    Science.gov (United States)

    Skurikhin, A. N.; Gangodagamage, C.; Rowland, J. C.; Wilson, C. J.

    2013-12-01

    identification. The approach starts by segmenting water bodies from an image, which are then categorized using shape-based classification. The segmentation uses a combination of pan-sharpened multispectral bands and is based on the active contours without edges technique. The segmentation is robust to noise and can detect objects with weak boundaries, which is important for the extraction of troughs. We then categorize the segmented regions via shape-based classification. Because segmentation accuracy is the main factor affecting the quality of the shape-based classification, we created a reference image from a WorldView-2 satellite image of ice-wedge polygonal tundra for the segmentation accuracy assessment. The reference image contained manually labelled image regions covering components of drainage networks, such as troughs, ponds, rivers and lakes. The evaluation showed that the approach provides good segmentation accuracy and reasonable classification results. The overall accuracy of the segmentation is approximately 95%, and the segmentation user's and producer's accuracies are approximately 92% and 97%, respectively.
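
    The 'active contours without edges' segmentation mentioned above is available in scikit-image as a morphological Chan-Vese implementation; the sketch below applies it to a single band and keeps the darker phase as water. The choice of band, iteration count, smoothing and the file name are assumptions, not the settings used in the study.

        import numpy as np
        from skimage import io, img_as_float
        from skimage.segmentation import morphological_chan_vese

        def segment_water_bodies(path):
            """Two-phase Chan-Vese segmentation of a single-band image; returns a
            boolean mask of the darker phase (assumed to be water)."""
            band = img_as_float(io.imread(path, as_gray=True))
            mask = morphological_chan_vese(band, 100, init_level_set="checkerboard",
                                           smoothing=2)
            if band[mask == 1].mean() > band[mask == 0].mean():
                mask = 1 - mask                     # keep the darker of the two phases
            return mask.astype(bool)

        # water = segment_water_bodies("tundra_scene.tif")   # hypothetical file name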

  10. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Directory of Open Access Journals (Sweden)

    Léo Varnet

    Full Text Available Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.

  11. The 2017 Hurricane Season: A Revolution in Geostationary Weather Satellite Imaging and Data Processing

    Science.gov (United States)

    Weiner, A. M.; Gundy, J.; Brown-Bertold, B.; Yates, H.; Dobler, J. T.

    2017-12-01

    Since their introduction, geostationary weather satellites have enabled us to track hurricane life-cycle movement from development to dissipation. During the 2017 hurricane season, the new GOES-16 geostationary satellite demonstrated just how far we have progressed technologically in geostationary satellite imaging, with hurricane imagery showing never-before-seen detail of the hurricane eye and eyewall structure and life cycle. In addition, new ground system technology, leveraging high-performance computing, delivered imagery and data to forecasters with unprecedented speed, with updates as often as every 30 seconds. As additional satellites and new products become operational, forecasters will be able to track hurricanes with even greater accuracy and assist in aftermath evaluations. This presentation offers glimpses into the past, a look at the present, and a prediction of the future utilization of geostationary satellites with respect to all facets of hurricane support.

  12. A Method of Particle Swarm Optimized SVM Hyper-spectral Remote Sensing Image Classification

    International Nuclear Information System (INIS)

    Liu, Q J; Jing, L H; Wang, L M; Lin, Q Z

    2014-01-01

    Support Vector Machine (SVM) has been proved suitable for the classification of remote sensing images and proposed as a way to overcome the Hughes phenomenon. Hyper-spectral sensors are intrinsically designed to discriminate among a broad range of land cover classes, which may lead to high computational time in SVM multi-class algorithms. Model selection for SVM, i.e. the selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts both the training efficiency of the SVM model and the final classification accuracy of the SVM hyper-spectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, a particle swarm algorithm is introduced for the optimal selection of the SVM kernel parameter σ and margin parameter C (PSSVM) to improve the modelling efficiency of the SVM model. An experiment classifying AVIRIS data from the Indian Pines site in the USA was then performed to evaluate the novel PSSVM against a traditional SVM classifier tuned with the usual grid-search cross-validation method (GSSVM). Evaluation indexes including SVM model training time, classification Overall Accuracy (OA) and the Kappa index of both PSSVM and GSSVM are analyzed quantitatively. It is demonstrated that the OA of PSSVM on the test samples and on the whole image is 85% and 82% respectively, with differences from GSSVM within 0.08%, and the Kappa indexes reach 0.82 and 0.77, with differences from GSSVM within 0.001, while the modelling time of PSSVM is only about 1/10 of that of GSSVM. Therefore, PSSVM is a fast and accurate algorithm for hyper-spectral image classification and is superior to GSSVM
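
    The idea of using particle swarm optimization instead of a grid search to pick the SVM margin parameter C and RBF kernel parameter (expressed here through gamma) can be sketched with a minimal swarm over log-scaled parameters, scored by cross-validated overall accuracy. The swarm size, inertia and acceleration constants below are generic textbook values, not the PSSVM settings.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def pso_svm(X, y, n_particles=10, n_iter=20, seed=0):
            """Minimal PSO over (log10 C, log10 gamma) for an RBF SVM, scored by
            3-fold cross-validated overall accuracy."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array([-1.0, -4.0]), np.array([4.0, 1.0])   # search box (log10)
            pos = rng.uniform(lo, hi, size=(n_particles, 2))
            vel = np.zeros_like(pos)
            score = lambda p: cross_val_score(
                SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()
            pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
            gbest = pbest[np.argmax(pbest_val)]
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, 1))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([score(p) for p in pos])
                better = vals > pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[np.argmax(pbest_val)]
            return 10 ** gbest[0], 10 ** gbest[1]               # best (C, gamma)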

  13. USING SATELLITE IMAGES FOR WIRELESS NETWORK PLANING IN BAKU CITY

    Directory of Open Access Journals (Sweden)

    M. Gojamanov

    2013-04-01

    Full Text Available It is a well known fact that information-telecommunication and space research technologies are the fields benefiting most from the achievements of scientific and technical progress. In many cases, these areas, supporting each other, have improved the conditions for their further development. For instance, the intensive development of mobile communication has driven the rapid progress of space research technologies and vice versa. Today it is impossible to solve one of the most important tasks of mobile communication, radio frequency planning, without 2D and 3D digital maps. The compilation of such maps is much more efficient by means of space images, because the quality of space images has improved considerably, in terms of both spectral and spatial resolution. It has become possible to use 8-band images with a spatial resolution of 50 cm. At present, in relation to the 3G function of mobile communications, one of the main issues facing mobile operator companies is high-precision 3D digital maps. It should be noted that the number of mobile phone users in the Republic of Azerbaijan has surpassed that of other Commonwealth of Independent States countries. Of course, the use of aerial images for 3D mapping would be optimal; however, owing to a number of technical and administrative problems, aerial photography cannot be used. The experience of many countries therefore shows that it is more effective to use higher-resolution space images for these purposes. Since mobile communication within the city of Baku now includes the 3G function, stereo images with a spatial resolution of 50 cm were ordered for the 150 sq. km territory occupying the central part of the city in order to compile 3D digital maps. The images collected from the WorldView-2 satellite are 4-band bundle (Pan+MS1) stereo images. Such imagery enables the automatic

  14. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    Science.gov (United States)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not solely rely on the single pixel value, but also on shape, texture, and pixel spatial continuity. The object-based classification is a knowledge base process where an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility to use the contextual information associated to objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  15. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are overtaking the prevailing traditional neural network approaches when it comes to huge datasets and applications that require complex functions and demand increased accuracy with lower time complexity. Neuroscience has already exploited DL techniques and has thus become an inspirational source for researchers exploring the domain of machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning and NLP as well, moving back and forth among fields. This work is concerned with building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity and enhanced software infrastructures for distributed computing have all strengthened the case for researchers to adopt DL methodologies. The paper compares the following DL procedures with traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are diabetic retinopathy (DR) and computed tomography (CT) emphysema data. Diagnosis from both DR and CT data is a difficult task for standard image classification methods. The initial work was carried out with basic image processing along with K-means clustering for the identification of image severity levels. After determining the image severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the result of DNNs (deep neural networks); these performed efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs led us to consider convolutional neural networks (CNNs) as well for better results. The CNNs were found to provide better outcomes when compared to the other learning models aimed at the classification of images. CNNs are

  16. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    Science.gov (United States)

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, classes that are similar with regard to shape content are grouped into general overlapped classes based on merging measures and shape features. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or texture features only. Ultimately, in the last levels, this procedure is continued until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm with the Mahalanobis class separability measure as the feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level, and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (the ImageCLEF 2005 database) and an accuracy rate of 93.6% in the last level of the hierarchical structure is obtained for an 18-class classification problem.
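
    A simplified stand-in for the orthogonal forward selection with a Mahalanobis-style class separability measure is sketched below: features are added greedily, each time keeping the candidate that maximises a scatter-based separability criterion on the currently selected subset. The criterion trace(pinv(Sw) Sb) and the greedy loop are generic choices for illustration, not the authors' exact algorithm.

        import numpy as np

        def separability(X, y):
            """Scatter-based class separability J = trace(pinv(Sw) @ Sb)."""
            mean_all = X.mean(axis=0)
            d = X.shape[1]
            Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
            for c in np.unique(y):
                Xc = X[y == c]
                Sw += np.cov(Xc, rowvar=False) * (len(Xc) - 1)
                diff = (Xc.mean(axis=0) - mean_all)[:, None]
                Sb += len(Xc) * diff @ diff.T
            return float(np.trace(np.linalg.pinv(Sw) @ Sb))

        def forward_selection(X, y, n_features):
            """Greedy forward selection maximising the separability criterion."""
            selected, remaining = [], list(range(X.shape[1]))
            while len(selected) < n_features and remaining:
                best = max(remaining, key=lambda f: separability(X[:, selected + [f]], y))
                selected.append(best)
                remaining.remove(best)
            return selected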

  17. Deep Fully Convolutional Networks for the Detection of Informal Settlements in VHR Images

    NARCIS (Netherlands)

    Persello, Claudio; Stein, Alfred

    2017-01-01

    This letter investigates fully convolutional networks (FCNs) for the detection of informal settlements in very high resolution (VHR) satellite images. Informal settlements or slums are proliferating in developing countries and their detection and classification provides vital information for

  18. CLASSIFIER FUSION OF HIGH-RESOLUTION OPTICAL AND SYNTHETIC APERTURE RADAR (SAR) SATELLITE IMAGERY FOR CLASSIFICATION IN URBAN AREA

    Directory of Open Access Journals (Sweden)

    T. Alipour Fard

    2014-10-01

    Full Text Available This study is concerned with the fusion of synthetic aperture radar (SAR) and optical satellite imagery. Due to differences in the underlying sensor technology, data from SAR and optical sensors refer to different properties of the observed scene, and it is believed that when they are fused together they complement each other to improve the performance of a particular application. In this paper, two categories of features are generated, and six classifier fusion operators are implemented and evaluated. The implementation results show a significant improvement in classification accuracy.
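
    Classifier fusion at the decision level can be illustrated by combining the per-class posterior probabilities produced by an optical-image classifier and a SAR-image classifier with standard combination rules (sum, product, max). These three generic operators are given only as an example of the mechanism; they are not necessarily among the six operators evaluated in the paper.

        import numpy as np

        def fuse_posteriors(p_optical, p_sar, rule="product"):
            """Fuse two (n_samples, n_classes) posterior matrices and return the
            fused class labels."""
            stack = np.stack([p_optical, p_sar])           # (2, n_samples, n_classes)
            if rule == "sum":
                fused = stack.mean(axis=0)
            elif rule == "product":
                fused = np.prod(stack, axis=0)
            elif rule == "max":
                fused = stack.max(axis=0)
            else:
                raise ValueError(f"unknown rule: {rule}")
            fused = fused / fused.sum(axis=1, keepdims=True)   # renormalise
            return fused.argmax(axis=1)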

  19. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information; videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on the ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, and in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and obtained new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  20. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    Science.gov (United States)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes such as inversion and deformation during the movement of the cloud are basically not considered. It is still a hard task to predict cloud movement promptly and correctly. As deep learning models can perform well in learning spatiotemporal features, we meet this challenge by regarding cloud image prediction as a spatiotemporal sequence forecasting problem and introducing a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and the output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data and the model performs well.
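
    The convolutional GRU idea described above replaces the matrix products in the GRU gates with convolutions, so the hidden state keeps the spatial layout of the cloud image. The PyTorch cell below is a minimal sketch of that idea; channel counts, kernel size and the single-cell layout are assumptions rather than the authors' model.

        import torch
        import torch.nn as nn

        class ConvGRUCell(nn.Module):
            """GRU cell whose update/reset gates and candidate state are convolutions."""
            def __init__(self, in_ch, hid_ch, k=3):
                super().__init__()
                self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
                self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)
                self.hid_ch = hid_ch

            def forward(self, x, h=None):
                if h is None:
                    h = torch.zeros(x.size(0), self.hid_ch, x.size(2), x.size(3),
                                    device=x.device)
                z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
                h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
                return (1 - z) * h + z * h_tilde

        # Feed a sequence of cloud-image frames (batch, time, 1, H, W) step by step.
        cell = ConvGRUCell(in_ch=1, hid_ch=16)
        frames = torch.randn(8, 4, 1, 64, 64)
        h = None
        for t in range(frames.size(1)):
            h = cell(frames[:, t], h)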