WorldWideScience

Sample records for sensing image classification

  1. Classification of remotely sensed images

    CSIR Research Space (South Africa)

    Dudeni, N

    2008-10-01

    Full Text Available In this research, the researchers examine various existing image classification algorithms with the aim of demonstrating how they can be applied to remote sensing images. These algorithms are broadly divided into supervised...

  2. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission, which keeps the final classification accuracy low. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a contrast experiment. We use overall classification precision and the Kappa coefficient as the accuracy evaluation criteria and analyse the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
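
    The overall precision and Kappa coefficient reported above are standard accuracy measures derived from a confusion matrix. As a minimal illustration (not the authors' code), a numpy sketch with a toy two-class matrix:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix.

    Rows are reference classes, columns are predicted classes."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total              # overall accuracy
    # chance agreement estimated from the row/column marginals
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# toy 2-class confusion matrix
cm = [[45, 5],
      [10, 40]]
oa, k = accuracy_and_kappa(cm)
print(round(oa, 2), round(k, 2))   # 0.85 0.7
```

    Kappa discounts the agreement expected by chance from the marginals, which is why it is reported alongside overall precision.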

  3. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Full Text Available Focused on the accuracy bottleneck that conventional remote sensing image classification methods have run into, a new remote sensing image classification method inspired by deep learning is proposed, based on a Stacked Denoising Autoencoder. First, the deep network model is built by stacking Denoising Autoencoder layers. Then, with noised input, an unsupervised greedy layer-wise training algorithm is used to train each layer in turn for a more robust representation; the characteristics are refined by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 (GF-1) satellite remote sensing data are used for evaluation, and the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, higher than those of the Support Vector Machine and the BP neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
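
    The greedy layer-wise training described above can be illustrated with a single denoising autoencoder layer; in the stacked setting, each trained layer's hidden output becomes the input of the next layer before supervised BP fine-tuning. A minimal numpy sketch on made-up data (all sizes, rates and epoch counts are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 200 samples, 16 features in [0, 1]
X = rng.random((200, 16))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 16, 8
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def reconstruction_loss(X):
    H = sigmoid(X @ W1 + b1)
    R = sigmoid(H @ W2 + b2)
    return ((R - X) ** 2).mean()

lr = 1.0
losses = []
for epoch in range(300):
    X_noisy = X + rng.normal(0, 0.1, X.shape)   # corrupt the input
    H = sigmoid(X_noisy @ W1 + b1)
    R = sigmoid(H @ W2 + b2)
    # gradients of mean squared reconstruction error w.r.t. the CLEAN target X
    dR = 2 * (R - X) / X.size * R * (1 - R)
    dW2 = H.T @ dR;        db2 = dR.sum(axis=0)
    dH = dR @ W2.T * H * (1 - H)
    dW1 = X_noisy.T @ dH;  db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    losses.append(reconstruction_loss(X))
```

    Training to reconstruct the clean input from the corrupted one is what makes the learned representation robust to noise.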

  4. MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH

    Data.gov (United States)

    National Aeronautics and Space Administration — MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Multispectral remote sensing images have...

  5. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for the management and estimation of agriculture. Because features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, by training and testing the CNN with a MATLAB deep learning toolbox, the crop classification finally reached a correct rate of 99.66 % after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, such applications of CNN provide a reference for the field of remote sensing in PA.
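
    The building blocks of a CNN like the one described above (convolution, nonlinearity, pooling) can be sketched in a few lines of numpy. The patch size and the single random 3x3 filter below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fill a window."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(1)
patch = rng.random((32, 32))          # one panchromatic training patch
kernel = rng.normal(size=(3, 3))      # one learnable 3x3 filter
feature_map = max_pool(relu(conv2d(patch, kernel)))
print(feature_map.shape)              # (15, 15)
```

    Stacked across many learned filters and layers, feature maps like this feed the fully connected layers that output the crop class.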

  6. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Because of limitations such as feature redundancy and the high dimensionality of the data, various classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method that exploits the capability of extended multi-attribute profiles (EMAP) together with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features from the combination of EMAP and SAE and feeding them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral "Houston data" and the multispectral "Washington DC data" show that this scheme achieves better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to reach higher classification accuracy in a short running time.

  7. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use or land cover investigation. A large number of approaches exploiting different kinds of low level features have been proposed in the literature, with often conflicting conclusions, so a systematic assessment of various low level features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features, including HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images, for the classification of three publicly available datasets. Second, we propose to transfer ground level scene attributes to remote sensing images. Third, we combine both low-level features and mid-level visual attributes to further improve the classification performance. Experimental results demonstrate that i) dense SIFT and HOG features are more robust than the other features for VHR scene image description; ii) visual attributes compete with a combination of low level features; iii) multiple feature combination achieves the best performance under different settings.

  8. Contribution of non-negative matrix factorization to the classification of remote sensing images

    Science.gov (United States)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by producing land-cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to recognize the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new (at least partly separated) images from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs to a supervised classifier integrating textural information. The classification of these "separated" images shows a clear improvement (correct pixel classification rate improved by more than 20%) compared to the classification of the initial (i.e. non-separated) images. These results show the value of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.
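
    The NMF step described above can be sketched with the classic Lee-Seung multiplicative updates. The toy linear mixing model below (3 endmembers, 6 bands, 500 pixels) is an illustrative assumption, not the authors' SPOT data:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy mixture: 500 pixels, 6 spectral bands, 3 pure materials (endmembers)
S_true = rng.random((3, 6))                 # endmember spectra
A_true = rng.dirichlet(np.ones(3), 500)     # per-pixel abundances
V = A_true @ S_true                         # observed non-negative pixel matrix

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F."""
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update spectra factor
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update abundance factor
    return W, H

W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    Rows of H approximate endmember spectra and columns of W per-pixel abundances; reshaping the columns of W to the image grid gives the "partly separated" images used as classifier inputs.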

  9. REMOTE SENSING IMAGE CLASSIFICATION APPLIED TO THE FIRST NATIONAL GEOGRAPHICAL INFORMATION CENSUS OF CHINA

    Directory of Open Access Journals (Sweden)

    X. Yu

    2016-06-01

    Full Text Available Although image classification has been studied for almost half a century, it still has a long way to go. Researchers have obtained many results in the image classification domain, but a large gap between theory and practice remains. New methods from the artificial intelligence domain can be absorbed into the image classification domain so that the strengths of each offset the weaknesses of the other, opening up new prospects. Networks often play the role of a high-level language, as seen in artificial intelligence and statistics, because they are used to build complex models from simple components. In recent years, Bayesian Networks, a kind of probabilistic network, have become a powerful data mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian Networks (TAN) to texture classification of high-resolution remote sensing images and put forward a new method to construct the network topology structure in terms of training accuracy based on the training samples. Since 2013, the Chinese government has carried out the first national geographical information census project, which mainly interprets geographical information from high-resolution remote sensing images. This paper therefore applies Bayesian networks to remote sensing image classification in order to improve image interpretation in that project. In the experiment, we chose remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification method (MLC) in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it is an attractive and effective method for assisting office-based image interpretation.

  10. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, where ground objects display rich texture, structure, shape and hierarchical semantic characters, and more landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method consists of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, GeoEye high-resolution imagery is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.

  11. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required to train the classifier of an incoming image. For each incoming image, a rough classifier is first predicted from the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies than two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with the accuracy obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
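
    The two-stage idea above, forecasting a classifier from the temporal trend of previous classifiers and then fine-tuning it on a few current samples, can be sketched as follows. The linear extrapolation and the logistic-regression fine-tuning are simplified stand-ins for the paper's SVM machinery; all data are made up:

```python
import numpy as np

def predict_next_classifier(w_prev2, w_prev1):
    """Linearly extrapolate the temporal trend of classifier parameters:
    w_t is forecast as w_{t-1} + (w_{t-1} - w_{t-2})."""
    return 2 * np.asarray(w_prev1) - np.asarray(w_prev2)

def fine_tune(w, X, y, lr=0.1, steps=100):
    """A few logistic-regression gradient steps on the current training samples."""
    w = np.asarray(w, dtype=float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# classifier parameters drift steadily between acquisitions (toy values)
w_t1 = np.array([1.0, -0.5])
w_t2 = np.array([1.2, -0.4])
w_pred = predict_next_classifier(w_t1, w_t2)   # rough classifier forecast

# a handful of labelled samples from the incoming image (toy, separable)
X_new = np.array([[2.0, 1.0], [-2.0, -1.0], [3.0, 0.0], [-3.0, 0.0]])
y_new = np.array([1, 0, 1, 0])
w_final = fine_tune(w_pred, X_new, y_new)      # fine-tuned classifier
```

    Starting from the forecast rather than from scratch is what lets each incoming image get by with only a few training samples.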

  12. Ship Detection and Classification on Optical Remote Sensing Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Liu Ying

    2017-01-01

    Full Text Available Ship detection and classification is critical for national maritime security and national defense. Although some ship detection approaches based on SAR (Synthetic Aperture Radar) images have been proposed and used, they cannot satisfy the requirements of real-world applications, as the number of SAR sensors is limited, the resolution is low, and the revisit cycle is long. As massive high-resolution optical remote sensing images become available, ship detection and classification on these images is a promising technique that has attracted great attention in applications including maritime security and traffic control. Some digital image processing methods have been proposed to detect ships in optical remote sensing images, but most face difficulties in terms of accuracy, performance and complexity. Recently, an autoencoder-based deep neural network with an extreme learning machine was proposed, but it cannot meet the requirements of real-world applications because it only works with simple, small-scale data sets. Therefore, in this paper we propose a novel ship detection and classification approach that uses a deep convolutional neural network (CNN) as the ship classifier. The performance of the proposed approach was evaluated on a set of images downloaded from Google Earth at a resolution of 0.5 m; 99% detection accuracy and 95% classification accuracy were achieved. In model training, a 75× speedup was achieved on one Nvidia Titan X GPU.

  13. Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation

    Directory of Open Access Journals (Sweden)

    Guanzhou Chen

    2018-05-01

    Full Text Available Scene classification, which aims to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in remote sensing image analysis. Deep-learning-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-capacity models are computationally expensive and time-consuming. Consequently, in this paper, we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of a large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: the AID, UCMerced, NWPU-RESISC, and EuroSAT datasets. Results show that the proposed training method is effective and increases the overall accuracy of small and shallow models (by 3% in the AID experiments, 5% in the UCMerced experiments, and 1% in the NWPU-RESISC and EuroSAT experiments). We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower spatial resolution images, numerous categories, and fewer training samples.
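
    The high-temperature softmax matching described above is the core of knowledge distillation. A sketch of the temperature-scaled softmax and a typical distillation loss; the T² scaling and α weighting follow common distillation practice, and the logits are made up:

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax; larger T yields a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, labels_onehot,
                      T=4.0, alpha=0.5):
    """Weighted sum of soft cross-entropy against the teacher's
    high-temperature output and hard cross-entropy against the label."""
    p_teacher = softmax_T(teacher_logits, T)
    p_student_T = softmax_T(student_logits, T)
    p_student = softmax_T(student_logits, 1.0)
    soft = -(p_teacher * np.log(p_student_T)).sum() * T * T
    hard = -(labels_onehot * np.log(p_student)).sum()
    return alpha * soft + (1 - alpha) * hard

teacher = [8.0, 2.0, 1.0]   # confident deep teacher (toy logits)
student = [2.0, 1.0, 0.5]   # smaller student being trained
loss = distillation_loss(student, teacher, np.array([1.0, 0.0, 0.0]))
```

    The soft targets expose the teacher's inter-class similarity structure, which a hard one-hot label discards.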

  14. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Guizhou Wang

    2013-01-01

    Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (a support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area-dominant principle. During the mapping process, an area-proportion threshold is set, and a region is labelled unclassified if its maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified from spectral information using the minimum distance to mean algorithm. Experimental results show that the proposed method can make use of both panchromatic and multispectral information, integrate pixel- and object-based classification methods, and improve classification accuracy.
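
    The spatial mapping step with the area-dominant principle (the third step above) can be sketched directly: each segment takes its majority pixel class when the majority's area proportion clears the threshold, and is otherwise left for reclassification. The threshold value and the toy labels are illustrative assumptions:

```python
import numpy as np

def map_classes_to_segments(pixel_labels, segment_ids,
                            threshold=0.6, unclassified=-1):
    """Assign each segment the dominant pixel class if its area proportion
    exceeds `threshold`; otherwise mark the segment unclassified."""
    out = {}
    for seg in np.unique(segment_ids):
        labels = pixel_labels[segment_ids == seg]
        classes, counts = np.unique(labels, return_counts=True)
        best = counts.argmax()
        proportion = counts[best] / counts.sum()
        out[seg] = classes[best] if proportion > threshold else unclassified
    return out

# toy 4x4 scene: two watershed segments, pixel classes from an SVM result
segment_ids = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
pixel_labels = np.array([[3, 3, 5, 7],
                         [3, 3, 5, 7],
                         [3, 2, 5, 7],
                         [3, 3, 7, 5]])
result = map_classes_to_segments(pixel_labels, segment_ids)
# segment 0 is dominated by class 3 (7/8 > 0.6); segment 1 is an even
# 50/50 split, so it stays unclassified and goes to the reclassification step
```
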

  15. Texture-based classification for characterizing regions on remote sensing images

    Science.gov (United States)

    Borne, Frédéric; Viennois, Gaëlle

    2017-07-01

    Remote sensing classification methods mostly use only the physical properties of pixels or complex texture indexes and do not lead to recommendations for practical applications. Our objective was to design a texture-based method, called the Paysages A PRIori method (PAPRI), which works at both the pixel and the neighborhood level and which can handle different spatial scales of analysis. The aim was to stay close to the logic of a human expert and to deal with co-occurrences more efficiently than other methods. The PAPRI method is pixelwise and based on a comparison of statistical and spatial reference properties provided by the expert with local properties computed in varying-size windows centered on the pixel. A specific distance is computed for different windows around the pixel, and a local minimum leads to choosing the class in which the pixel is placed. The PAPRI method brings a significant improvement in classification quality for different kinds of images, including aerial, lidar and high-resolution satellite images, as well as texture images from the Brodatz and Vistex databases. This work shows the importance of texture analysis in understanding remote sensing images and for future developments.

  16. Supervised Classification High-Resolution Remote-Sensing Image Based on Interval Type-2 Fuzzy Membership Function

    Directory of Open Access Journals (Sweden)

    Chunyan Wang

    2018-05-01

    Full Text Available Because the uncertainty of pixel classes and of classification decisions degrades the classification accuracy of high-resolution remote-sensing images, we propose a supervised classification method based on an interval type-2 fuzzy membership function for such images. We analyze the data features of a high-resolution remote-sensing image and construct a type-1 membership function model in a homogeneous region by supervised sampling in order to characterize the uncertainty of the pixel class. On the basis of the fuzzy membership function model in the homogeneous region, and in accordance with the 3σ criterion of the normal distribution, we propose a method for modeling three types of interval type-2 membership functions and analyze them to improve on the pixel-class uncertainty expressed by the type-1 fuzzy membership function and to enhance the accuracy of the classification decision. Following the principle that importance increases as the distance between the original, upper, and lower fuzzy memberships of the training data and the corresponding frequency value in the histogram decreases, we use the weighted average of the three types of fuzzy membership as the new fuzzy membership of the pixel to be classified, integrate it with neighborhood pixel relations, and construct a classification decision model. We use the proposed method to classify real high-resolution remote-sensing images and synthetic images, and we qualitatively and quantitatively evaluate the test results. The results show that a higher classification accuracy can be achieved with the proposed algorithm.
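
    One simple way to turn a type-1 Gaussian membership function into an interval type-2 function, in the spirit of the approach above, is to blur the mean within a σ-based band so that each grey level receives a lower and an upper membership. The blur width and grey-level range below are illustrative assumptions, not the paper's exact three models:

```python
import numpy as np

def gaussian_membership(x, mean, sigma):
    """Type-1 Gaussian membership of grey level x in one class."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def interval_type2_membership(x, mean, sigma, shift=0.5):
    """Blur the type-1 membership into an interval by letting the mean vary
    in [mean - shift*sigma, mean + shift*sigma]; returns (lower, upper)."""
    g_lo = gaussian_membership(x, mean - shift * sigma, sigma)
    g_hi = gaussian_membership(x, mean + shift * sigma, sigma)
    lower = np.minimum(g_lo, g_hi)
    upper = np.maximum(g_lo, g_hi)
    # inside the blur band the upper envelope reaches full membership
    upper = np.where(np.abs(x - mean) <= shift * sigma, 1.0, upper)
    return lower, upper

x = np.linspace(90, 160, 141)        # hypothetical grey-level range of a class
lo, hi = interval_type2_membership(x, mean=125.0, sigma=10.0)
primary = gaussian_membership(x, 125.0, 10.0)
```

    The gap between the lower and upper curves (the footprint of uncertainty) is what the interval type-2 model adds beyond the single type-1 curve.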

  17. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image].

    Science.gov (United States)

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

    Traditional field investigation and manual interpretation cannot satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing images make regional forest gap extraction possible. In this study, we used an object-oriented classification method to segment and classify forest gaps in QuickBird high resolution optical remote sensing imagery of the Jiangle National Forestry Farm of Fujian Province. In the object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird image, and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were used to evaluate the segmentation result at each scale. For each scale's segmentation result, 16 spectral characteristics and a support vector machine (SVM) classifier were then used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The accuracy difference between the maximum and minimum across segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) with the SVM classifier. Combining high resolution remote sensing image data with an object-oriented classification method can replace traditional field investigation and manual interpretation for identifying and classifying forest gaps at the regional scale.

  18. Supervised Classification Performance of Multispectral Images

    OpenAIRE

    Perumal, K.; Bhaskaran, R.

    2010-01-01

    Nowadays, government and private agencies use remote sensing imagery for a wide range of applications, from military applications to farm development. The images may be panchromatic, multispectral, hyperspectral or even ultraspectral, amounting to terabytes of data. Remote sensing image classification is one of the most significant applications of remote sensing. A number of image classification algorithms have demonstrated good precision in classifying remote sensing data. But, of late, due to the ...

  19. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    Full Text Available This paper deals with the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (fewer than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of 2 fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This yields a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with employing semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with fewer than ten training samples per class.
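
    The first step above, subdividing a large VHR image into a grid of tiles before feature extraction, can be sketched as follows. The tile size and scene dimensions are illustrative assumptions:

```python
import numpy as np

def tile_image(image, tile_size):
    """Split a (H, W, bands) image into a grid of non-overlapping tiles,
    dropping any incomplete border tiles."""
    h, w = image.shape[:2]
    rows, cols = h // tile_size, w // tile_size
    tiles = [image[r * tile_size:(r + 1) * tile_size,
                   c * tile_size:(c + 1) * tile_size]
             for r in range(rows) for c in range(cols)]
    return np.stack(tiles), (rows, cols)

# a mock 1000 x 1200 VHR scene with 3 bands, split into 100-pixel tiles
scene = np.zeros((1000, 1200, 3), dtype=np.uint8)
tiles, grid = tile_image(scene, 100)
print(tiles.shape, grid)   # (120, 100, 100, 3) (10, 12)
```

    Each tile in the resulting stack would then be passed through the pretrained CNN to obtain its descriptive feature vector.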

  20. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes show different characteristics at different scales. In studying specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out multi-scale classification experiments, taking the Shangri-La area in north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. First, the authors upscaled the two images by cubic convolution and calculated the optimal scale for the different ground objects shown in the images using variation functions. Then the authors conducted multi-scale superposition classification with Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is larger than the original one. Specifically, water has the largest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scales for shadow and flood land are essentially the same as the original ones, i.e. 8 m and 10 m respectively. (2) For the classification of the multi-scale superposed images, the overall accuracy of the SPOT5 HRG and GF-1 images is 12.84% and 14.76% higher, respectively, than that of the original multi-spectral images, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.

  21. Land Cover Classification Using Integrated Spectral, Temporal, and Spatial Features Derived from Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Yongguang Zhai

    2018-03-01

    Full Text Available Obtaining accurate and timely land cover information is an important topic in many remote sensing applications. Using satellite image time-series data should make high-accuracy land cover classification achievable. However, most satellite image time-series classification methods do not fully exploit the available data to mine the effective features that identify different land cover types. Therefore, a classification method is needed that can take full advantage of the rich information provided by time-series data to improve the accuracy of land cover classification. In this paper, a novel method for time-series land cover classification using spectral, temporal, and spatial information at an annual scale is introduced. Based on all the available data from time-series remote sensing images, a refined nonlinear dimensionality reduction method was used to extract the spectral and temporal features, and a modified graph segmentation method was used to extract the spatial features. The proposed classification method was applied in three study areas with complex land cover: Illinois, South Dakota, and Texas. All the Landsat time-series data in 2014 were used, and the study areas contain different amounts of invalid data. A series of comparative experiments were conducted on the annual time-series images using training data generated from the Cropland Data Layer. The results demonstrate higher overall and per-class classification accuracies and kappa index values for the proposed spectral-temporal-spatial method compared to spectral-temporal classification methods. We also discuss the implications of this study and possibilities for future applications and developments of the method.

  22. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

Full Text Available In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discussed the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  3. A Method of Particle Swarm Optimized SVM Hyper-spectral Remote Sensing Image Classification

    International Nuclear Information System (INIS)

    Liu, Q J; Jing, L H; Wang, L M; Lin, Q Z

    2014-01-01

Support Vector Machine (SVM) has been proven suitable for classification of remote sensing images and has been proposed as a way to overcome the Hughes phenomenon. Hyper-spectral sensors are intrinsically designed to discriminate among a broad range of land cover classes, which may lead to high computational time in SVM multi-class algorithms. Model selection for SVM, involving selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts the training efficiency of the SVM model and the final classification accuracies of an SVM hyper-spectral remote sensing image classifier. Firstly, based on combinatorial optimization theory and the cross-validation method, a particle swarm algorithm is introduced for the optimal selection of the SVM (PSSVM) kernel parameter σ and margin parameter C to improve the modelling efficiency of the SVM model. Then an experiment classifying an AVIRIS image of the Indian Pines site in the USA was performed to evaluate the novel PSSVM against a traditional SVM classifier with the general grid-search cross-validation method (GSSVM). Evaluation indexes, including SVM model training time, classification Overall Accuracy (OA) and Kappa index, of both PSSVM and GSSVM were analyzed quantitatively. It is demonstrated that the OA of PSSVM on the test samples and the whole image are 85% and 82%, differing from those of GSSVM by within 0.08%; the Kappa indexes reach 0.82 and 0.77, differing from those of GSSVM by within 0.001; while the modelling time of PSSVM can be as little as 1/10 that of GSSVM. Therefore, PSSVM is a fast and accurate algorithm for hyper-spectral image classification and is superior to GSSVM
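
The swarm-based parameter search over (C, σ) can be sketched generically. The toy objective below stands in for cross-validated SVM accuracy, and the inertia and acceleration constants are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso_search(score, bounds, n_particles=10, n_iter=30, seed=0):
    """Minimal particle swarm over (log C, log sigma); `score` stands in
    for cross-validated SVM accuracy (higher is better)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pscore = np.array([score(p) for p in pos])
    g = pbest[np.argmax(pscore)].copy()        # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        s = np.array([score(p) for p in pos])
        better = s > pscore
        pbest[better], pscore[better] = pos[better], s[better]
        g = pbest[np.argmax(pscore)].copy()
    return g, pscore.max()

# Toy stand-in objective peaking at (log C, log sigma) = (2, -1).
obj = lambda p: -((p[0] - 2) ** 2 + (p[1] + 1) ** 2)
best, val = pso_search(obj, [(-3, 5), (-4, 2)])
print(best)   # close to [2, -1]
```

In real use, `score` would run k-fold cross-validation of an SVM with the candidate parameters instead of this analytic toy.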

  4. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

Full Text Available Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data have posed enormous challenges for processing, analysing and classifying them effectively, due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing well the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to achieving fully automatic and effective VHR image classification.

  5. A patch-based convolutional neural network for remote sensing image classification.

    Science.gov (United States)

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top-of-atmosphere reflectance data. With a test site from the Florida Everglades area (covering 771 square kilometers), the proposed new system outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
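
The patch-based sampling idea, cutting a small neighbourhood around each labelled pixel so the network can exploit spatial relations, can be sketched as follows (the patch size, band count and coordinates are illustrative, not the paper's configuration):

```python
import numpy as np

def extract_patches(image, coords, size=5):
    """Cut size x size patches centred on pixel coords from a
    (H, W, bands) reflectance array; patches running off the edge
    are skipped for simplicity."""
    h = size // 2
    patches = []
    for r, c in coords:
        if h <= r < image.shape[0] - h and h <= c < image.shape[1] - h:
            patches.append(image[r - h:r + h + 1, c - h:c + h + 1, :])
    return np.stack(patches)

img = np.random.rand(50, 50, 7)              # e.g. 7 reflectance bands
samples = extract_patches(img, [(10, 10), (25, 40)], size=5)
print(samples.shape)                         # (2, 5, 5, 7)
```

Each patch (rather than a lone pixel vector) then becomes one training sample for the CNN.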

  6. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Full Text Available On the basis of the cluster validity function based on geometric probability in literature [1, 2], propose a cluster analysis method based on geometric probability to process large amount of data in rectangular area. The basic idea is top-down stepwise refinement, firstly categories then subcategories. On all clustering levels, use the cluster validity function based on geometric probability firstly, determine clusters and the gathering direction, then determine the center of clustering and the border of clusters. Through TM remote sensing image classification examples, compare with the supervision and unsupervised classification in ERDAS and the cluster analysis method based on geometric probability in two-dimensional square which is proposed in literature 2. Results show that the proposed method can significantly improve the classification accuracy.

  7. Utility of BRDF Models for Estimating Optimal View Angles in Classification of Remotely Sensed Images

    Science.gov (United States)

    Valdez, P. F.; Donohoe, G. W.

    1997-01-01

Statistical classification of remotely sensed images attempts to discriminate between surface cover types on the basis of the spectral response recorded by a sensor. It is well known that surfaces reflect incident radiation as a function of wavelength, producing a spectral signature specific to the material under investigation. Multispectral and hyperspectral sensors sample the spectral response over tens and even hundreds of wavelength bands to capture the variation of spectral response with wavelength. Classification algorithms then exploit these differences in spectral response to distinguish between materials of interest. Sensors of this type, however, collect detailed spectral information from one direction (usually nadir) and consequently do not consider the directional nature of reflectance potentially detectable at different sensor view angles. Improvements in sensor technology have resulted in remote sensing platforms capable of detecting reflected energy across wavelengths (spectral signatures) and from multiple view angles (angular signatures) in the fore and aft directions. Sensors of this type include the moderate resolution imaging spectroradiometer (MODIS), the multiangle imaging spectroradiometer (MISR), and the airborne solid-state array spectroradiometer (ASAS). A goal of this paper, then, is to explore the utility of Bidirectional Reflectance Distribution Function (BRDF) models in the selection of optimal view angles for the classification of remotely sensed images by employing a strategy of searching for the maximum difference between surface BRDFs. After a brief discussion of directional reflectance in Section 2, attention is directed to the Beard-Maxwell BRDF model and its use in predicting the bidirectional reflectance of a surface. The selection of optimal viewing angles is addressed in Section 3, followed by conclusions and future work in Section 4.
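
The angle-selection strategy, searching for the view angle that maximizes the difference between two surface BRDFs, can be illustrated with toy reflectance models (these are not the Beard-Maxwell model; the lobe shapes and constants are invented for illustration):

```python
import numpy as np

# Hypothetical toy BRDFs for two cover types: a near-Lambertian surface
# and one with a directional scattering lobe around +40 degrees.
def brdf_a(theta_v):
    return 0.30 * np.ones_like(theta_v)

def brdf_b(theta_v):
    return 0.25 + 0.15 * np.cos(np.radians(theta_v - 40)) ** 4

angles = np.linspace(-60, 60, 121)        # candidate fore/aft view angles
diff = np.abs(brdf_a(angles) - brdf_b(angles))
best = angles[np.argmax(diff)]
print(best)                                # 40.0, where the toy lobes differ most
```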

  8. Remote sensing image fusion

    CERN Document Server

    Alparone, Luciano; Baronti, Stefano; Garzelli, Andrea

    2015-01-01

    A synthesis of more than ten years of experience, Remote Sensing Image Fusion covers methods specifically designed for remote sensing imagery. The authors supply a comprehensive classification system and rigorous mathematical description of advanced and state-of-the-art methods for pansharpening of multispectral images, fusion of hyperspectral and panchromatic images, and fusion of data from heterogeneous sensors such as optical and synthetic aperture radar (SAR) images and integration of thermal and visible/near-infrared images. They also explore new trends of signal/image processing, such as

  9. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    Science.gov (United States)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

Effective spectral and spatial pixel description plays a significant role in the classification of high resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods which employ deep neural networks and have greatly improved classification accuracy. However, the former traditional features are insufficient to depict the complex distribution of high resolution images, while the deep features demand plenty of samples to train the network, otherwise overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high resolution remote sensing images. The employment of GLCM is able to represent the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the limited amount of samples, strategies such as L2 regularization and dropout are used to prevent over-fitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular datasets such as the PaviaU data validate that our proposed method leads to a performance improvement compared to the individual approaches involved.
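
A minimal GLCM computation and a derived contrast feature, of the kind that could be fed into the network, might look like this (single non-negative offset and a small grey-level count, purely for illustration):

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one non-negative (dx, dy)
    offset; `gray` holds integer levels in [0, levels). Normalised to
    joint probabilities."""
    m = np.zeros((levels, levels))
    src = gray[:gray.shape[0] - dy, :gray.shape[1] - dx]
    dst = gray[dy:, dx:]
    for a, b in zip(src.ravel(), dst.ravel()):
        m[a, b] += 1
    return m / m.sum()

# Contrast texture feature from the GLCM of a small test image.
img = np.arange(16).reshape(4, 4) % 8
p = glcm(img, dx=1, dy=0)
i, j = np.indices(p.shape)
contrast = float(((i - j) ** 2 * p).sum())
print(contrast)   # 1.0: every horizontal neighbour pair differs by one level
```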

  10. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts the training efficiency of the SVM model and the final classification accuracies of an SVM hyperspectral remote sensing image classifier. Firstly, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM (CSSVM) kernel parameter σ and margin parameter C to improve the training efficiency of the SVM model. Then an experiment classifying an AVIRIS image of the Indian Pines site in the USA was performed to test the novel CSSVM, alongside a traditional SVM classifier with the general grid-search cross-validation method (GSSVM) for comparison. Evaluation indexes including SVM model training time, classification overall accuracy (OA) and Kappa index of both CSSVM and GSSVM were analyzed quantitatively. It is demonstrated that the OA of CSSVM on the test samples and the whole image are 85.1% and 81.58%, differing from those of GSSVM by within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, differing from those of GSSVM by within 0.001; while the ratio of model training time of CSSVM to GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.

  11. The Improvement of Land Cover Classification by Thermal Remote Sensing

    Directory of Open Access Journals (Sweden)

    Liya Sun

    2015-06-01

Full Text Available Land cover classification has been widely investigated in remote sensing for agricultural, ecological and hydrological applications. Multispectral Landsat images are commonly used to study numerous classification methods with the aim of improving classification accuracy. Thermal remote sensing provides valuable additional information, and this study investigates the effectiveness of the thermal bands in extracting land cover patterns. The k-NN and Random Forest algorithms were applied both to a single Landsat 8 image and to a time series of Landsat 4/5 images for the Attert catchment in the Grand Duchy of Luxembourg, trained and validated against ground-truth reference data under the three-level classification scheme of the COoRdination of INformation on the Environment (CORINE), using 10-fold cross-validation. The accuracy assessment showed that, compared to the visible and near-infrared (VIS/NIR) bands, the time series of thermal images alone can produce comparatively reliable land cover maps, with best overall accuracies of 98.7% to 99.1% for the Level 1 classification and 93.9% to 96.3% for the Level 2 classification. In addition, combining the thermal band improves the overall accuracy by 5% and 6% for the single Landsat 8 image in the Level 2 and Level 3 categories and provides the best classification results with all seven bands for the time series of Landsat TM images.
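
The 10-fold cross-validation protocol can be sketched with a lightweight classifier standing in for Random Forest (the nearest-centroid rule and the synthetic two-class "spectra" below are assumptions for illustration only):

```python
import numpy as np

def kfold_accuracy(X, y, k=10, seed=0):
    """k-fold cross-validation with a nearest-centroid classifier as a
    lightweight stand-in for the paper's Random Forest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        cents = {c: X[train][y[train] == c].mean(axis=0)
                 for c in np.unique(y[train])}
        labels = np.array(sorted(cents))
        d = np.stack([np.linalg.norm(X[f] - cents[c], axis=1) for c in labels])
        accs.append((labels[d.argmin(axis=0)] == y[f]).mean())
    return float(np.mean(accs))

# Two well-separated synthetic classes, e.g. VIS/NIR + thermal features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(2, 0.3, (50, 4))])
y = np.repeat([0, 1], 50)
acc = kfold_accuracy(X, y)
print(acc)
```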

  12. Virtual Satellite Construction and Application for Image Classification

    International Nuclear Information System (INIS)

    Su, W G; Su, F Z; Zhou, C H

    2014-01-01

Nowadays, most remote sensing image classification uses data from a single satellite, so the number of bands and the band spectral widths are consistent. In addition, the same observed phenomenon, such as a land cover type, can yield differing spectral signatures across sensors, which causes classification accuracy to decrease because each dataset has unique characteristics. Therefore, this paper analyzes different optical remote sensing satellites, compares their spectral differences, and proposes ideas and methods for building a virtual satellite. This article illustrates the research on TM, HJ-1 and MODIS data. We obtained the virtual band X0 from these satellites' bands and combined it with the four bands of a TM image to build a virtual satellite with five bands. Based on this, we used these data for image classification. The experimental results showed that the virtual satellite classification results for built-up land and water information were superior to those of the HJ-1 and TM data, respectively
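
One hedged reading of the band-stacking step, upsampling a coarse virtual band onto the TM grid and appending it to the four TM bands, is sketched below (the nearest-neighbour resampling, grid sizes and random data are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

tm = np.random.rand(4, 120, 120)          # 4 TM bands on a fine grid
coarse = np.random.rand(30, 30)           # hypothetical coarse virtual band X0
x0 = np.kron(coarse, np.ones((4, 4)))     # nearest-neighbour upsample to 120x120
virtual = np.concatenate([tm, x0[None]], axis=0)
print(virtual.shape)                      # (5, 120, 120): five-band virtual stack
```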

  13. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    Science.gov (United States)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

It has long been an elusive dream for remote sensing experts to realize totally automatic image classification without inputting any parameter values. Experts usually spend hours and hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become online accessible, shareable and interoperable. Based on these recent improvements, this paper presents an idea of parameterless automatic classification which only requires an image and automatically outputs a labeled vector. No parameters or operations are needed from endpoint consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experience of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functionality blocks to carry out the basic classification steps. Workflow technology is involved to turn the overall image classification into a totally automatic process. A Web-based prototypical system named PACS (Parameterless Automatic Classification System) is implemented. A number of images were fed into the system for evaluation purposes. The results show that the approach can automatically classify remote sensing images with a fairly good average accuracy. The classified results become more accurate as the quality of the two databases improves. Once the experience and samples in the databases accumulate to the level of a human expert's, the approach should be able to produce results of similar quality to those a human expert can achieve. Since the approach is totally automatic and parameterless, it can not only relieve remote sensing workers from the heavy and time-consuming parameter tuning work, but also significantly shorten the waiting time for consumers and facilitate them to engage in image

  14. Land use/cover classification in the Brazilian Amazon using satellite images.

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, though they often require more time for parameter optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
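
The maximum likelihood classifier mentioned above assigns each pixel to the class with the highest Gaussian log posterior; a minimal sketch with invented class statistics:

```python
import numpy as np

def ml_classify(X, means, covs, priors):
    """Gaussian maximum likelihood: assign each pixel vector in X
    (n_pixels, n_bands) to the class maximising the log posterior."""
    scores = []
    for m, S, p in zip(means, covs, priors):
        d = X - m
        inv = np.linalg.inv(S)
        # per-pixel Mahalanobis term plus the covariance and prior terms
        log_post = (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
                    - 0.5 * np.log(np.linalg.det(S)) + np.log(p))
        scores.append(log_post)
    return np.argmax(scores, axis=0)

# Hypothetical two-band class statistics estimated from training areas.
means = [np.array([0.1, 0.2]), np.array([0.6, 0.7])]
covs = [np.eye(2) * 0.01, np.eye(2) * 0.01]
labels = ml_classify(np.array([[0.12, 0.18], [0.58, 0.75]]),
                     means, covs, [0.5, 0.5])
print(labels)   # [0 1]
```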

  15. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

Full Text Available Land use/land cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly defined by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. If the a priori accuracy diverges from the estimated accuracy, the sampling error will deviate from the expected error. Sizing the sample with a pilot sample, the theoretically correct procedure, is justified when no accuracy estimate is available for the study area, given the intended use of the remote sensing product.
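
The binomial approximation for reference-sample size referred to above is n = z²·p(1−p)/e², where p is the assumed (a priori) accuracy and e the tolerable error; a minimal sketch (the accuracy and error values are illustrative):

```python
import math

def sample_size(p=0.85, e=0.05, z=1.96):
    """Reference-sample size from the binomial approximation
    n = z^2 * p * (1 - p) / e^2, with p the a priori accuracy."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size())         # 196 samples for p = 0.85, e = +/-5%
print(sample_size(p=0.5))    # 385: the conservative worst case at p = 0.5
```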

  16. Improving Remote Sensing Scene Classification by Integrating Global-Context and Local-Object Features

    Directory of Open Access Journals (Sweden)

    Dan Zeng

    2018-05-01

    Full Text Available Recently, many researchers have been dedicated to using convolutional neural networks (CNNs to extract global-context features (GCFs for remote-sensing scene classification. Commonly, accurate classification of scenes requires knowledge about both the global context and local objects. However, unlike the natural images in which the objects cover most of the image, objects in remote-sensing images are generally small and decentralized. Thus, it is hard for vanilla CNNs to focus on both global context and small local objects. To address this issue, this paper proposes a novel end-to-end CNN by integrating the GCFs and local-object-level features (LOFs. The proposed network includes two branches, the local object branch (LOB and global semantic branch (GSB, which are used to generate the LOFs and GCFs, respectively. Then, the concatenation of features extracted from the two branches allows our method to be more discriminative in scene classification. Three challenging benchmark remote-sensing datasets were extensively experimented on; the proposed approach outperformed the existing scene classification methods and achieved state-of-the-art results for all three datasets.

  17. Ontology-based classification of remote sensing images using spectral rules

    Science.gov (United States)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data is of great interest for a wide spectrum of scientific domain applications. An enhanced access to remote sensing images for "domain" experts thus represents a great advance since it allows users to interpret remote sensing images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized, and thus is difficult for it to be shared with and understood by other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of concepts we think of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.

  18. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high level classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent attribute of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  19. Defuzzification Strategies for Fuzzy Classifications of Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Peter Hofmann

    2016-06-01

Full Text Available The classes in fuzzy classification schemes are defined as fuzzy sets, partitioning the feature space through fuzzy rules defined by fuzzy membership functions. Applying fuzzy classification schemes in remote sensing allows each pixel or segment to be an incomplete member of more than one class simultaneously, i.e., it need not fully meet all of the classification criteria for any one of the classes. This can lead to fuzzy, ambiguous and uncertain class assignation, which is unacceptable for many applications, indicating the need for a reliable defuzzification method. Defuzzification in remote sensing has, to date, been performed by “crisp-assigning” each fuzzy-classified pixel or segment to the class for which it best fulfills the fuzzy classification rules, regardless of its classification fuzziness, uncertainty or ambiguity (the maximum method). The defuzzification of an uncertain or ambiguous fuzzy classification yields a more or less reliable crisp classification. In this paper, the most common parameters for expressing classification uncertainty, fuzziness and ambiguity are analysed and discussed in terms of their ability to express the reliability of a crisp classification. This is done by means of a typical practical example from Object Based Image Analysis (OBIA).
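
The maximum method, together with a simple ambiguity measure (the gap between the two largest memberships), can be sketched as follows; the membership values are invented:

```python
import numpy as np

def defuzzify(memberships):
    """Maximum-method defuzzification plus a simple ambiguity measure:
    1 minus the gap between the best and second-best membership.
    memberships: (n_pixels, n_classes)."""
    order = np.sort(memberships, axis=1)       # ascending per pixel
    crisp = memberships.argmax(axis=1)
    ambiguity = 1.0 - (order[:, -1] - order[:, -2])
    return crisp, ambiguity

mu = np.array([[0.9, 0.05, 0.05],    # confident pixel
               [0.4, 0.35, 0.25]])   # ambiguous pixel
crisp, amb = defuzzify(mu)
print(crisp)   # [0 0]: both crisp-assigned to class 0, with very different reliability
```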

  20. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    Science.gov (United States)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
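
A heavily simplified sketch of confidence-based decision fusion, keeping the CNN label where its confidence is high and otherwise falling back to the MLP, is shown below (the threshold and probability values are illustrative; the paper's actual fusion rules are more elaborate):

```python
import numpy as np

def fuse(cnn_probs, mlp_labels, threshold=0.8):
    """Rule-based decision fusion sketch: trust the CNN label where its
    softmax confidence clears the threshold, else use the MLP label."""
    cnn_labels = cnn_probs.argmax(axis=1)
    confident = cnn_probs.max(axis=1) >= threshold
    return np.where(confident, cnn_labels, mlp_labels)

cnn = np.array([[0.95, 0.05],     # CNN is confident here
                [0.55, 0.45]])    # ...but uncertain here
mlp = np.array([1, 1])
print(fuse(cnn, mlp))   # [0 1]: CNN trusted first, MLP fallback second
```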

  1. Integration of heterogeneous features for remote sensing scene classification

    Science.gov (United States)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) provide complementary properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that it leads to good classification performance and produces informative features for describing RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.

  2. Towards automatic lithological classification from remote sensing data using support vector machines

    Science.gov (United States)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

Remote sensing data can be effectively used as a means to build geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be efficiently used for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel-level heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) spectral data capture only surface information and are therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital elevation model (DEM) and aeromagnetic data. Each of these datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine learning algorithms. In this study, the support vector machine (SVM), a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely ASTER, DEM and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets containing enhanced information relevant to lithological discrimination. 
A series of SVMs (trained using k-fold cross-validation with grid search) were tested using various combinations of input datasets selected from among 50 datasets, including the original 14 ASTER bands and 36 derivative datasets (including 14
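    The SVM training loop described above can be sketched with scikit-learn's cross-validated grid search; the synthetic `X` stands in for stacked ASTER/DEM/aeromagnetic pixel features, and all names and parameter grids are illustrative:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))        # 200 "pixels" x 14 bands
y = (X[:, 0] > 0).astype(int)         # toy lithology labels

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10],
                "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,                             # k-fold cross-validation, k = 5
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```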

  3. Collaborative classification of hyperspectral and visible images with convolutional neural network

    Science.gov (United States)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
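    A minimal sketch of a decision-fusion step of the kind described, assuming each stream outputs per-pixel class probabilities that are combined by weighted averaging (the weight and numbers are assumptions, not the paper's values):

```python
import numpy as np

# per-pixel class probabilities from the two streams (made-up numbers)
p_hsi = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3]])   # spectral (HSI) classifier
p_vis = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.8, 0.1]])   # spatial (VIS) classifier

w = 0.6                               # assumed weight on the spectral stream
fused = w * p_hsi + (1 - w) * p_vis   # weighted-average decision fusion
labels = fused.argmax(axis=1)         # final class per pixel
print(labels.tolist())
```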

  4. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free to use solutions that are currently available come bundled up as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy to use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
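    The K-means unsupervised classification stage can be sketched as follows, assuming the multispectral raster has already been read into an array (e.g. with GDAL); the pixel values here are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
rows, cols, bands = 20, 20, 6
image = rng.random((rows, cols, bands))     # stand-in for a band stack

pixels = image.reshape(-1, bands)           # one row per pixel
km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(pixels)
class_map = km.labels_.reshape(rows, cols)  # thematic land-use map
print(class_map.shape, np.unique(class_map))
```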

  5. [Object-oriented stand type classification based on the combination of multi-source remote sensing data].

    Science.gov (United States)

    Mao, Xue Gang; Wei, Jing Yu

    2017-11-01

    The recognition of forest type is one of the key problems in forest resource monitoring. Radarsat-2 data and a QuickBird remote sensing image were used in an object-based classification to study forest type classification and recognition based on the combination of multi-source remote sensing data. In the object-based classification, three segmentation schemes were adopted (segmentation with the QuickBird image only, with the Radarsat-2 data only, and with the combination of QuickBird and Radarsat-2). For each scheme, ten segmentation scale parameters were tested (25-250, step 25), and the modified Euclidean distance 3 index was used to evaluate the segmented results and determine the optimal segmentation scheme and scale. Based on the optimal segmentation, three forest types (Chinese fir, Masson pine and broad-leaved forest) were classified and recognized using a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel, according to different feature combinations of topography, height, spectrum and common features. The results showed that the combination of Radarsat-2 data and the QuickBird image had advantages for object-based forest type classification over using either data source alone. The optimal scale parameter for QuickBird-Radarsat-2 segmentation was 100, and at that scale the accuracy of object-based forest type classification was highest (OA=86%, Kappa=0.86) when using all features extracted from the two data sources. This study not only provides a reference for forest type recognition using multi-source remote sensing data, but also has practical significance for forest resource investigation and monitoring.

  6. Classification of Pansharpened Urban Satellite Images

    DEFF Research Database (Denmark)

    Palsson, Frosti; Sveinsson, Johannes R.; Benediktsson, Jon Atli

    2012-01-01

    The classification of high resolution urban remote sensing imagery is addressed with the focus on classification of imagery that has been pansharpened by a number of different pansharpening methods. The pansharpening process introduces some spectral and spatial distortions in the resulting fused...... multispectral image, the amount of which highly varies depending on which pansharpening technique is used. In the majority of the pansharpening techniques that have been proposed, there is a compromise between the spatial enhancement and the spectral consistency. Here we study the effects of the spectral...... information from the panchromatic data. Random Forests (RF) and Support Vector Machines (SVM) will be used as classifiers. Experiments are done for three different datasets that have been obtained by two different imaging sensors, IKONOS and QuickBird. These sensors deliver multispectral images that have four...

  7. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model that enhances the discrimination of the dictionary. The overcomplete dictionary learned with the proposed model is more discriminative and well suited to PolSAR classification.
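    As a baseline sketch of sparse-representation classification with learned dictionaries (the paper's discriminative model adds terms beyond this), one dictionary per class is learned and a sample is assigned to the class whose dictionary reconstructs it with the smallest residual; the data and dictionary size are synthetic assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, size=(100, 9))  # toy feature vectors, class 0
class1 = rng.normal(3.0, 1.0, size=(100, 9))  # toy feature vectors, class 1

# one small dictionary per class (overcomplete in the paper)
dicts = [MiniBatchDictionaryLearning(n_components=5, random_state=0).fit(c)
         for c in (class0, class1)]

def classify(x):
    """Assign x to the class whose dictionary reconstructs it best."""
    residuals = []
    for dl in dicts:
        code = dl.transform(x.reshape(1, -1))   # sparse code
        recon = code @ dl.components_           # reconstruction
        residuals.append(np.linalg.norm(x - recon))
    return int(np.argmin(residuals))

print(sum(classify(s) == 1 for s in class1[:10]), "of 10 class-1 samples correct")
```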

  8. A method to incorporate uncertainty in the classification of remote sensing images

    OpenAIRE

    Gonçalves, Luísa M. S.; Fonte, Cidália C.; Júlio, Eduardo N. B. S.; Caetano, Mario

    2009-01-01

    The aim of this paper is to investigate if the incorporation of the uncertainty associated with the classification of surface elements into the classification of landscape units (LUs) increases the results accuracy. To this end, a hybrid classification method is developed, including uncertainty information in the classification of very high spatial resolution multi-spectral satellite images, to obtain a map of LUs. The developed classification methodology includes the following...

  9. Object-based vegetation classification with high resolution remote sensing imagery

    Science.gov (United States)

    Yu, Qian

    Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. 
This evaluation analysis answered several interesting scientific questions

  10. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  11. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  12. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Full Text Available Northern Tibet lies in the sub-cold arid climate zone of the plateau. It is rarely visited, geological working conditions are very poor, and yet the stratum exposures are good and human interference is minimal. Therefore, research on the automatic classification and extraction of remote sensing geological information there has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using WorldView-2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various geological information were exploited. By setting thresholds and using hierarchical classification, eight kinds of geological information were classified and extracted. Comparison with existing geological maps shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  13. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2018-03-01

    Full Text Available Considering the classification of high spatial resolution remote sensing imagery, this paper presents a novel classification method for such imagery using deep neural networks. Deep learning methods, such as a fully convolutional network (FCN model, achieve state-of-the-art performance in natural image semantic segmentation when provided with large-scale datasets and respective labels. To use data efficiently in the training stage, we first pre-segment training images and their labels into small patches as supplements of training data using graph-based segmentation and the selective search method. Subsequently, FCN with atrous convolution is used to perform pixel-wise classification. In the testing stage, post-processing with fully connected conditional random fields (CRFs is used to refine results. Extensive experiments based on the Vaihingen dataset demonstrate that our method performs better than the reference state-of-the-art networks when applied to high-resolution remote sensing imagery classification.

  14. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and the complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC method, to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and the binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
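    Under simplifying assumptions (random filters standing in for the unsupervised-learned ones, a single toy grayscale scene), the convolve-binarize-hash-histogram pipeline of FBC reads roughly as:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
scene = rng.random((32, 32))          # toy grayscale HRRS scene
filters = rng.normal(size=(8, 5, 5))  # 8 "learned" 5x5 filters (random stand-ins)

# convolve the scene with each filter to get feature maps
maps = np.stack([convolve2d(scene, f, mode="valid") for f in filters])
binary = (maps > 0).astype(np.uint32)           # binarize the feature maps
weights = 2 ** np.arange(8, dtype=np.uint32)    # hashing: pack 8 bits per pixel
codes = np.tensordot(weights, binary, axes=1)   # integer-valued feature map
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
feature = hist / hist.sum()                     # global scene descriptor
print(feature.shape)
```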

  15. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing powerful discriminative texture features that are robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
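    For reference, the basic 8-neighbour LBP code that underlies the mapped coded images can be computed as below (a toy NumPy version, without the mapping or multi-scale details used by TEX-Nets):

```python
import numpy as np

def lbp_codes(img):
    """Basic LBP: compare each interior pixel with its 8 neighbours."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image: the neighbour at (dy, dx) of each center
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.uint8)
print(lbp_codes(img))  # one 8-bit code for the single interior pixel
```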

  16. Assessment of Sampling Approaches for Remote Sensing Image Classification in the Iranian Playa Margins

    Science.gov (United States)

    Kazem Alavipanah, Seyed

    There are several problems in soil salinity studies based upon remotely sensed data: (1) the spectral world is full of ambiguity, and soil reflectance therefore cannot be attributed to a single soil property such as salinity; (2) soil surface conditions, as a function of time and space, are a complex phenomenon; (3) vegetation, with its dynamic biological nature, may create some problems in the study of soil salinity. Given these problems, the first question that arises is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan and an optimum approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes and to overcome some problems encountered in the variation within the field, the following attempts were made: (1) Principal Component Analysis (PCA) in order (a) to determine the most important variables, and (b) to understand the Landsat satellite images and the most informative components; (2) photomorphic unit (PMU) consideration and interpretation; (3) study of salt accumulation and salt distribution in the soil profile; (4) use of several forms of field data, such as geologic, geomorphologic and soil information; (5) confirmation of field data and land cover types with farmers and the members of the team. The results led us to suitable approaches with a high and acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing
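    The PCA step used to find the most informative components of a band stack can be sketched like this; the band values are synthetic, with one band given dominant variance for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 6))     # 500 pixels x 6 Landsat bands (synthetic)
pixels[:, 0] *= 5                      # give band 0 dominant variance

pca = PCA(n_components=3).fit(pixels)
print(pca.explained_variance_ratio_)   # share of variance per component
```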

  17. Classification and overview of research in real-time imaging

    Science.gov (United States)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  18. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks.

    Science.gov (United States)

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-04-26

    With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduced a new density-based clustering method to identify stopovers from migratory birds’ movement data and generated classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and labeled them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model is then used to combine the features and produce the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicate that our proposed method outperforms the existing baseline methods and achieves good performance in habitat suitability prediction.
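    The stopover-identification idea can be illustrated with DBSCAN as the density-based clusterer (the paper introduces its own method; the GPS fixes, eps and min_samples below are made-up assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# dense fixes around two hypothetical stopover sites, plus sparse transit fixes
stopover_a = rng.normal([40.0, 116.0], 0.01, size=(30, 2))
stopover_b = rng.normal([45.0, 120.0], 0.01, size=(30, 2))
transit = rng.uniform([40, 116], [45, 120], size=(5, 2))
fixes = np.vstack([stopover_a, stopover_b, transit])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(fixes)
print(sorted(set(labels.tolist())))  # cluster ids; -1 marks noise
```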

  19. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    Science.gov (United States)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or for insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the used platform. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as the Copernicus Emergency Management Service for the production of damage grading and reference maps. Recently proposed methods to perform image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem, however the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampling image samples during the training of a CNN, has demonstrated to improve several image recognition tasks in remote sensing. However, it is currently unclear if this multi resolution information can also be captured from images with different spatial resolutions like satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used considering both manned and unmanned aerial image samples to perform the satellite image classification of building damages. Three network configurations, trained with multi-resolution image samples are compared against two benchmark networks where only satellite image samples are used. Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved nearly 4 % the overall satellite image

  20. A classification model of Hyperion image base on SAM combined decision tree

    Science.gov (United States)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable; traditional classification algorithms therefore fall short, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically-based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the threshold of SAM must be defined manually, and the classification precision depends on how reasonably this threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose the appropriate threshold of SAM and improve the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses the appropriate threshold of SAM and eliminates the disturbance and influence of unwanted objects effectively, so as to improve the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model
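    The spectral angle at the heart of SAM is just the n-dimensional angle between two band vectors, which makes it insensitive to overall brightness; a minimal version:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle in radians between two spectra treated as vectors."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

ref = np.array([0.2, 0.4, 0.6, 0.8])      # made-up 4-band reference spectrum
same_shape = 2.5 * ref                    # same spectral shape, brighter pixel
other = np.array([0.8, 0.6, 0.4, 0.2])   # different spectral shape

print(spectral_angle(same_shape, ref))   # near 0: matches the reference
print(spectral_angle(other, ref))        # larger angle: different material
```

A pixel is assigned to a reference class only if this angle falls below the SAM threshold, which is exactly the value the paper's decision tree chooses automatically.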

  1. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy. Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thu...

  2. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.

  3. Terahertz wave reflective sensing and imaging

    Science.gov (United States)

    Zhong, Hua

    Sensing and imaging technologies using terahertz (THz) radiation have found diverse applications as they approach maturity. Since the burgeoning of this technique in the 1990s, many THz sensing and imaging investigations have been designed and conducted in transmission geometry, which provides sufficient phase and amplitude contrast for the study of the spectral properties of targets in the THz domain. Driven by rising expectations that THz technology will be a potential candidate in the next generation of security screening, remote sensing, biomedical imaging and non-destructive testing (NDT), most THz sensing and imaging modalities are being extended to reflection geometry, which offers unique and adaptive solutions, and multi-dimensional information in many real scenarios. This thesis takes an application-focused approach to the advancement of THz wave reflective sensing and imaging systems. The absorption signature of the explosive material hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) is measured at 30 m, the longest standoff distance so far attained by THz time-domain spectroscopy (THz-TDS). The standoff sensing ability of THz-TDS is investigated, along with discussions of the influence of a variety of factors such as propagation distance, water vapor absorption and collection efficiency. Highly directional THz radiation from four-wave mixing in laser-induced air plasmas is observed and measured for the first time, which provides a potential solution to the atmospheric absorption effect in standoff THz sensing. The simulations of the beam profiles also illuminate the underlying physics behind the interaction of the optical beam with the plasma. THz wave reflective spectroscopic focal-plane imaging is realized for the first time. Absorption features of some explosives and related compounds (ERCs) and biochemical materials are identified by using an adaptive feature extraction method. Good classification results using multiple pattern recognition methods are

  4. Evaluation of Different Methods for Soil Classifications by Using Geographic Information Systems and Remote Sensing

    Directory of Open Access Journals (Sweden)

    S. H Sanaeinejad

    2012-12-01

    Full Text Available Soil salinity is an important factor that affects plant growth and reduces production of plants at different growth stages. Remote sensing technology and GIS have a great potential for monitoring dynamic soil processes such as salinity. In the present study, the efficiency of remote sensing technology and its integration with GIS was examined to estimate soil salinity for the Neyshabour basin. Different classification methods for soil salinity were also investigated. We used 6 bands of Landsat ETM+ for this study. Classification results obtained from applying mathematical models to the images were compared with results from different band combinations. The areas of saline and non-saline soil classes were identified in the study area based on both methods, and also based on the combination of the two methods. The results showed that the best approach for soil classification was to use the two methods in the first stage to separate the saline and non-saline classes, and then to classify the non-saline soils in the second stage. As the variation in the numerical values of the image for different soil salinities in the study area was small, it was concluded that Landsat ETM+ images have limited potential for identifying and classifying soil salinity in such an area.
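
    The two-stage scheme described above (separate saline from non-saline soils first, then classify the non-saline pixels) can be sketched as follows. The salinity-index band layout, the threshold, and the minimum-distance second stage are illustrative assumptions for this sketch, not the paper's exact models:

```python
import numpy as np

def two_stage_classify(pixels, saline_threshold, class_means):
    """Stage 1: split saline vs. non-saline by an index threshold.
    Stage 2: assign non-saline pixels to the nearest class mean.

    pixels: (n, bands) array; band 0 is assumed to hold a salinity
    index (hypothetical layout for this sketch).
    """
    labels = np.zeros(len(pixels), dtype=int)          # 0 = saline
    non_saline = pixels[:, 0] < saline_threshold
    # Minimum-distance classification of the remaining pixels.
    feats = pixels[non_saline]
    d = np.linalg.norm(feats[:, None, :] - class_means[None, :, :], axis=2)
    labels[non_saline] = 1 + np.argmin(d, axis=1)      # classes 1..k
    return labels

# Toy example: index in band 0, two non-saline classes.
pix = np.array([[0.9, 0.1], [0.2, 0.1], [0.1, 0.8]])
means = np.array([[0.2, 0.1], [0.1, 0.8]])
print(two_stage_classify(pix, 0.5, means))  # [0 1 2]
```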

  5. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2018-01-01

    Full Text Available Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also achieved great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train a deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to exploiting some characteristics of remote sensing images, we propose some new data augmentation techniques. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
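
    One family of augmentation techniques well suited to nadir-view remote-sensing imagery, where object orientation is arbitrary, is dihedral augmentation (rotations and flips). This is a generic sketch of the idea, not necessarily the paper's specific techniques:

```python
import numpy as np

def augment_patch(patch):
    """Return the 8 dihedral variants (4 rotations x optional flip) of
    a patch -- a common augmentation for nadir-view remote-sensing
    imagery, where object orientation carries no class information."""
    variants = []
    for k in range(4):
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

patch = np.arange(9).reshape(3, 3)
augs = augment_patch(patch)
print(len(augs))                         # 8
print(len({a.tobytes() for a in augs}))  # 8 distinct variants for this patch
```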

  6. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  7. An Improved Rotation Forest for Multi-Feature Remote-Sensing Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yingchang Xiu

    2017-11-01

    Full Text Available Multi-feature, especially multi-temporal, remote-sensing data have the potential to improve land cover classification accuracy. However, it is sometimes difficult to utilize all the features efficiently. To enhance classification performance based on multi-feature imagery, an improved rotation forest, combining Principal Component Analysis (PCA) and a boosting naïve Bayesian tree (NBTree), is proposed. First, feature extraction was carried out with PCA. The feature set was randomly split into several disjoint subsets; then, PCA was applied to each subset, and new training data for linearly extracted features based on the original training data were obtained. These steps were repeated several times. Second, based on the new training data, a boosting naïve Bayesian tree was constructed as the base classifier, which aims to achieve lower prediction error than a decision tree in the original rotation forest. At the classification phase, the improved rotation forest has two-layer voting. It first obtains several predictions through weighted voting in each boosting naïve Bayesian tree; then, the first-layer predictions are combined by majority vote to obtain the final result. To examine the classification performance, the improved rotation forest was applied to multi-feature remote-sensing images, including MODIS Enhanced Vegetation Index (EVI) imagery time series, MODIS Surface Reflectance products and ancillary data in Shandong Province for 2013. The EVI imagery time series was preprocessed using harmonic analysis of time series (HANTS) to reduce noise effects. The overall accuracy of the final classification result was 89.17%, and the Kappa coefficient was 0.71, which outperforms the original rotation forest and other classifier-ensemble results, as well as the NASA land cover product. However, this new algorithm requires more computational time, meaning its efficiency needs to be further improved. Generally, the improved rotation forest has a potential advantage in multi-feature remote-sensing image classification.
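
    The rotation step described above, splitting the feature set into disjoint subsets and applying PCA to each, can be sketched as building a block-diagonal rotation matrix. This is a simplified sketch; the full rotation forest additionally bootstraps samples per subset and, in the paper's variant, uses boosting NBTree base classifiers:

```python
import numpy as np

def rotation_matrix(X, n_subsets, rng):
    """Build the sparse rotation used by rotation forest: randomly split
    features into disjoint subsets, run PCA on each subset, and assemble
    the loadings into a block-diagonal rotation matrix."""
    n_features = X.shape[1]
    order = rng.permutation(n_features)
    subsets = np.array_split(order, n_subsets)
    R = np.zeros((n_features, n_features))
    for idx in subsets:
        sub = X[:, idx] - X[:, idx].mean(axis=0)
        # PCA via eigendecomposition of the subset covariance.
        _, vecs = np.linalg.eigh(np.cov(sub, rowvar=False))
        R[np.ix_(idx, idx)] = vecs
    return R

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
R = rotation_matrix(X, n_subsets=2, rng=rng)
X_rot = X @ R   # rotated training data for one base classifier
print(X_rot.shape)  # (100, 6)
```

    Because each block is orthonormal, the assembled matrix is itself orthogonal, so the transform rotates the feature space without losing information.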

  8. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2012-09-01

    Full Text Available In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as the original images used in model development, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for the different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies.
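
    A percentile-clipping normalization in the spirit of the best-performing "0.1% index scaling" method might look like the following sketch (the paper's exact formulation may differ):

```python
import numpy as np

def index_scale(si_image, p=0.1):
    """Normalize a spectral-index image by clipping the p% extreme
    pixel values on each tail and rescaling to [0, 1] -- a sketch of
    percentile-based normalization; the published method's tailored
    percentages may differ from this symmetric choice."""
    lo, hi = np.percentile(si_image, [p, 100.0 - p])
    clipped = np.clip(si_image, lo, hi)
    return (clipped - lo) / (hi - lo)

si = np.linspace(-0.5, 0.8, 10001).reshape(-1, 1)
norm = index_scale(si)
print(norm.min(), norm.max())  # 0.0 1.0
```

    Clipping at the extreme tails makes the rescaling robust to a few outlier pixels, which is what lets thresholds transfer between images from different sensors.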

  9. Remote sensing mapping of macroalgal farms by modifying thresholds in the classification tree

    KAUST Repository

    Zheng, Yuhan

    2018-05-07

    Remote sensing is the main approach used to classify and map aquatic vegetation, and classification tree (CT) analysis is superior to various other classification methods. Based on previous studies, a modified CT can be developed from a traditional CT by adjusting the thresholds based on the statistical relationship between spectral features, in order to classify different images without ground-truth data. However, no studies have yet employed this method to resolve marine vegetation. In this study, three Gao-Fen 1 satellite images obtained with the same sensor on January 30, 2014, November 5, 2014, and January 21, 2015 were selected, and two features were then employed to extract macroalgae from aquaculture farms against the seawater background. In addition, object-based classification and other image-analysis methods were adopted to improve the classification accuracy. Results show that the overall accuracies of traditional CTs for the three images are 92.0%, 94.2% and 93.9%, respectively, whereas the overall accuracies of the two corresponding modified CTs for the images obtained on January 21, 2015 and November 5, 2014 are 93.1% and 89.5%, respectively. This indicates that modified CTs can help map macroalgae with multi-date imagery and monitor the spatiotemporal distribution of macroalgae in coastal environments.

  10. Remote sensing mapping of macroalgal farms by modifying thresholds in the classification tree

    KAUST Repository

    Zheng, Yuhan; Duarte, Carlos M.; Chen, Jiang; Li, Dan; Lou, Zhaohan; Wu, Jiaping

    2018-01-01

    Remote sensing is the main approach used to classify and map aquatic vegetation, and classification tree (CT) analysis is superior to various other classification methods. Based on previous studies, a modified CT can be developed from a traditional CT by adjusting the thresholds based on the statistical relationship between spectral features, in order to classify different images without ground-truth data. However, no studies have yet employed this method to resolve marine vegetation. In this study, three Gao-Fen 1 satellite images obtained with the same sensor on January 30, 2014, November 5, 2014, and January 21, 2015 were selected, and two features were then employed to extract macroalgae from aquaculture farms against the seawater background. In addition, object-based classification and other image-analysis methods were adopted to improve the classification accuracy. Results show that the overall accuracies of traditional CTs for the three images are 92.0%, 94.2% and 93.9%, respectively, whereas the overall accuracies of the two corresponding modified CTs for the images obtained on January 21, 2015 and November 5, 2014 are 93.1% and 89.5%, respectively. This indicates that modified CTs can help map macroalgae with multi-date imagery and monitor the spatiotemporal distribution of macroalgae in coastal environments.

  11. A hierarchical approach of hybrid image classification for land use and land cover mapping

    Directory of Open Access Journals (Sweden)

    Rahdari Vahid

    2018-01-01

    Full Text Available Remote sensing data analysis can provide thematic maps describing land use and land cover (LULC) in a short period. Using a proper image classification method for an area is important to overcome the possible limitations of satellite imagery for producing land-use and land-cover maps. In the present study, a hierarchical hybrid image classification method was used to produce LULC maps from Landsat Thematic Mapper (TM) imagery for 1998 and Operational Land Imager (OLI) imagery for 2016. Images were classified using the proposed hybrid image classification method, a vegetation crown-cover percentage map derived from the normalized difference vegetation index, Fisher supervised classification, and object-based image classification methods. Accuracy assessment showed that the hybrid classification method produced maps with an overall accuracy of up to 84 percent and a kappa statistic of 0.81. Results of this study showed that the proposed classification method worked better with the OLI sensor than with TM. Although OLI has a higher radiometric resolution than TM, the LULC map produced from TM is almost as accurate as that from OLI, owing to the LULC definitions and image classification methods used.
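
    The NDVI-derived crown-cover layer used in the hierarchy above can be sketched as follows; the class thresholds here are illustrative placeholders, not the study's calibrated values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def crown_cover_classes(nir, red, thresholds=(0.2, 0.5)):
    """Bin NDVI into crown-cover classes (thresholds illustrative):
    0 = sparse, 1 = moderate, 2 = dense."""
    return np.digitize(ndvi(nir, red), thresholds)

nir = np.array([0.30, 0.50, 0.80])
red = np.array([0.25, 0.20, 0.10])
print(np.round(ndvi(nir, red), 2))    # [0.09 0.43 0.78]
print(crown_cover_classes(nir, red))  # [0 1 2]
```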

  12. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    Science.gov (United States)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
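
    The standard supervised maximum-likelihood approach used for this product assigns each pixel to the class whose multivariate Gaussian, with mean and covariance estimated from training sites, gives the highest log-likelihood. A minimal sketch:

```python
import numpy as np

def ml_classify(pixels, class_stats):
    """Supervised maximum-likelihood classification: score each pixel
    under every class Gaussian and pick the best class."""
    scores = []
    for mean, cov in class_stats:
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        # Gaussian log-likelihood up to a constant shared by all classes.
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + np.einsum('ij,jk,ik->i', diff, inv, diff))
        scores.append(ll)
    return np.argmax(np.stack(scores), axis=0)

# Two classes with equal covariance: decision falls to the nearer mean.
stats = [(np.array([0.0, 0.0]), np.eye(2)),
         (np.array([4.0, 4.0]), np.eye(2))]
pix = np.array([[0.5, 0.2], [3.8, 4.1]])
print(ml_classify(pix, stats))  # [0 1]
```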

  13. Automated training site selection for large-area remote-sensing image analysis

    Science.gov (United States)

    McCaffrey, Thomas M.; Franklin, Steven E.

    1993-11-01

    A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by using a purely statistical selection of homogeneous sites, which can then be compared to field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites to be used to train the classification decision rules. The selection of the homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and the Student's t-statistic. Comparisons of site means are conducted against a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
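
    The statistical screening described above can be sketched with a coefficient-of-variation homogeneity test and a two-sample t-statistic for deciding whether a candidate site matches an already-accepted one. The cutoff values below are illustrative, not the program's defaults:

```python
import numpy as np

def is_homogeneous(site, cv_max=0.05):
    """Coefficient-of-variation screen for a candidate training site."""
    return np.std(site) / np.mean(site) <= cv_max

def distinct_means(site_a, site_b, t_crit=2.0):
    """Pooled-variance two-sample t-statistic to decide whether two
    homogeneous sites likely represent different classes. t_crit is an
    illustrative critical value."""
    na, nb = len(site_a), len(site_b)
    sp2 = (((na - 1) * np.var(site_a, ddof=1)
            + (nb - 1) * np.var(site_b, ddof=1)) / (na + nb - 2))
    t = (np.mean(site_a) - np.mean(site_b)) / np.sqrt(sp2 * (1/na + 1/nb))
    return abs(t) > t_crit

rng = np.random.default_rng(1)
water = rng.normal(20.0, 0.5, 50)     # spectrally uniform dark target
forest = rng.normal(80.0, 1.0, 50)
print(is_homogeneous(water))          # True
print(distinct_means(water, forest))  # True -- keep as separate classes
```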

  14. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716
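
    The entropy and second-moment (angular second moment) texture measures mentioned above are computed from a grey-level co-occurrence matrix (GLCM) over a moving window. A minimal sketch for a single pixel offset:

```python
import numpy as np

def glcm(window, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalized to probabilities."""
    g = np.zeros((levels, levels))
    h, w = window.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[window[i, j], window[i + dy, j + dx]] += 1
    return g / g.sum()

def entropy_and_asm(p):
    """Entropy and angular second moment of a normalized GLCM."""
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz)), np.sum(p ** 2)

uniform = np.zeros((9, 9), dtype=int)          # one grey level only
ent_u, asm_u = entropy_and_asm(glcm(uniform, levels=4))
textured = np.indices((9, 9)).sum(axis=0) % 4  # diagonal stripes
ent_t, asm_t = entropy_and_asm(glcm(textured, levels=4))
print(ent_u == 0.0, asm_u == 1.0)    # True True (no texture)
print(ent_t > ent_u, asm_t < asm_u)  # True True (texture raises entropy)
```

    Appending such texture bands, computed in a 9 x 9 window as in the study, to the spectral bands is what gave the reported kappa improvement.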

  15. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.

  16. Fast Segmentation and Classification of Very High Resolution Remote Sensing Data Using SLIC Superpixels

    Directory of Open Access Journals (Sweden)

    Ovidiu Csillik

    2017-03-01

    Full Text Available Speed and accuracy are important factors when dealing with time-constrained events for disaster, risk, and crisis-management support. Object-based image analysis can be a time-consuming task in extracting information from large images because most segmentation algorithms use the pixel grid for the initial object representation. It would be more natural and efficient to work with perceptually meaningful entities that are derived from pixels using a low-level grouping process (superpixels). Firstly, we tested a new workflow for image segmentation of remote sensing data, starting the multiresolution segmentation (MRS, using the ESP2 tool) from the superpixel level and aiming at reducing the amount of time needed to automatically partition relatively large datasets of very high resolution remote sensing data. Secondly, we examined whether a Random Forest classification based on an oversegmentation produced by a Simple Linear Iterative Clustering (SLIC) superpixel algorithm performs similarly, in terms of accuracy, to a traditional object-based classification. Tests were applied on QuickBird and WorldView-2 data with different extents, scene-content complexities, and numbers of bands to assess how computational time and classification accuracy are affected by these factors. The proposed segmentation approach is compared with the traditional one, starting the MRS from the pixel level, regarding the geometric accuracy of the objects and the computational time. The computational time was reduced in all cases, the biggest improvement being from 5 h 35 min to 13 min for a WorldView-2 scene with eight bands and an extent of 12.2 million pixels, while the geometric accuracy is kept similar or slightly better. SLIC superpixel-based classification had similar or better overall accuracy values when compared to MRS-based classification, but the results were obtained in a fast manner while avoiding the parameterization of the MRS.
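
    SLIC is essentially a localized k-means on combined spatial and spectral coordinates with grid-initialized centers. The toy sketch below conveys that idea only; real SLIC restricts each center's search to a 2S x 2S window and weights the spatial term with the compactness parameter differently:

```python
import numpy as np

def slic_like(image, n_seg_per_axis=2, n_iter=5, spectral_weight=10.0):
    """Very reduced SLIC-flavoured clustering: k-means on (y, x,
    intensity) with centers seeded on a regular grid, so clusters stay
    spatially compact (superpixels). Global search for simplicity."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([ys.ravel() / h, xs.ravel() / w,
                      spectral_weight * image.ravel()], axis=1)
    step_y, step_x = h // n_seg_per_axis, w // n_seg_per_axis
    centers = feats[[(i * step_y + step_y // 2) * w + j * step_x + step_x // 2
                     for i in range(n_seg_per_axis)
                     for j in range(n_seg_per_axis)]]
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([feats[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels.reshape(h, w)

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # two vertical halves
seg = slic_like(img, n_seg_per_axis=2)
print(len(np.unique(seg)))  # 4 superpixels
```

    The resulting superpixels, rather than raw pixels, then become the units fed to MRS or directly to a Random Forest, which is where the reported speed-up comes from.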

  17. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    Science.gov (United States)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and even remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigations on some aspects, such as dimension reduction, data mining, and the rational use of spatial information, should be developed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance in spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, the relevance vector machine (RVM) was introduced to implement classification instead of the support vector machine (SVM) for simplicity, generalization and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, to take into account the spatial information of the hyperspectral image, we employ a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images and compared the results with some other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
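
    The spectral angle used here in place of Euclidean distance measures the angle between two spectra, which makes it insensitive to overall brightness differences (illumination, shadow) while staying sensitive to spectral shape:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

s = np.array([0.2, 0.4, 0.6])
print(spectral_angle(s, 3.0 * s))   # 0.0 -- same shape, just brighter
print(spectral_angle(s, s[::-1]))   # positive for a different shape
```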

  18. Learning-based compressed sensing for infrared image super resolution

    Science.gov (United States)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained by multi-sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multi-sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.
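
    The sample-partitioning step relies on k-means; the paper's adaptive variant is not specified in this abstract, so the sketch below shows the plain algorithm it refines, with a deterministic farthest-point initialization:

```python
import numpy as np

def kmeans(X, k=2, n_iter=20):
    """Plain k-means (a sketch -- the paper's 'adaptive' variant is not
    specified here). Partitions training samples into k feature spaces."""
    centers = [X[0]]
    for _ in range(k - 1):   # farthest-point seeding, deterministic
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
labels, centers = kmeans(X, k=2)
print(np.unique(labels[:30]).size, np.unique(labels[30:]).size)  # 1 1
```

    Each resulting cluster then gets its own sub-dictionary, turning one hard nonlinear LR-to-HR mapping into several compact linear ones.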

  19. Dealing with missing data in remote sensing images within land and crop classification

    Science.gov (United States)

    Skakun, Sergii; Kussul, Nataliia; Basarab, Ruslan

    of non-missing data to the subspace vectors in the map. Restoration of the missing values is performed in the following way. The multi-temporal pixel values (with gaps) are fed to the neural network. A neuron-winner (or a best matching unit, BMU) in the SOM is selected based on the distance metric (for example, Euclidean). It should be noted that missing values are omitted from metric estimation when selecting the BMU. When the BMU is selected, missing values are substituted by the corresponding components of the BMU values. The efficiency of the proposed approach was tested on a time-series of Landsat-8 images over the JECAM test site in Ukraine and Sich-2 images over Crimea (Sich-2 is a Ukrainian remote sensing satellite acquiring images at 8 m spatial resolution). Landsat-8 images were first converted to the TOA reflectance, and then were atmospherically corrected so each pixel value represents a surface reflectance in the range from 0 to 1. The error of reconstruction (error of quantization) on training data was: band-2: 0.015; band-3: 0.020; band-4: 0.026; band-5: 0.070; band-6: 0.060; band-7: 0.055. The reconstructed images were also used for crop classification using a multi-layer perceptron (MLP). Overall accuracy was 85.98% and Cohen's kappa was 0.83. References. 1. Skakun, S., Kussul, N., Shelestov, A. and Kussul, O. “Flood Hazard and Flood Risk Assessment Using a Time Series of Satellite Images: A Case Study in Namibia,” Risk Analysis, 2013, doi: 10.1111/risa.12156. 2. Gallego, F.J., Kussul, N., Skakun, S., Kravchenko, O., Shelestov, A., Kussul, O. “Efficiency assessment of using satellite data for crop area estimation in Ukraine,” International Journal of Applied Earth Observation and Geoinformation, vol. 29, pp. 22-30, 2014. 3. Roy D.P., Ju, J., Lewis, P., Schaaf, C., Gao, F., Hansen, M., and Lindquist, E., “Multi-temporal MODIS-Landsat data fusion for relative radiometric normalization, gap filling, and prediction of Landsat data,” Remote Sensing of
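
    The gap-filling rule described above (select the BMU over observed components only, then copy its values into the gaps) can be sketched as follows, given an already-trained SOM codebook; the codebook values here are hypothetical:

```python
import numpy as np

def fill_gaps(pixel, codebook):
    """Gap filling with a trained SOM codebook: pick the best matching
    unit (BMU) using only the observed components, then substitute the
    BMU's values for the missing ones."""
    obs = ~np.isnan(pixel)
    # Distance over observed components only -- gaps are ignored.
    d = np.sqrt(((codebook[:, obs] - pixel[obs]) ** 2).sum(axis=1))
    bmu = codebook[np.argmin(d)]
    filled = pixel.copy()
    filled[~obs] = bmu[~obs]
    return filled

# Toy codebook of multi-temporal reflectance profiles (hypothetical).
codebook = np.array([[0.10, 0.12, 0.11],
                     [0.40, 0.45, 0.42]])
pixel = np.array([0.41, np.nan, 0.43])   # cloud gap on date 2
print(fill_gaps(pixel, codebook))  # [0.41 0.45 0.43]
```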

  20. Classification of permafrost active layer depth from remotely sensed and topographic evidence

    International Nuclear Information System (INIS)

    Peddle, D.R.; Franklin, S.E.

    1993-01-01

    The remote detection of permafrost (perennially frozen ground) has important implications for environmental resource development, engineering studies, natural hazard prediction, and climate change research. In this study, the authors present results from two experiments on the classification of permafrost active layer depth within the zone of discontinuous permafrost in northern Canada. A new software system based on evidential reasoning was implemented to permit the integrated classification of multisource data consisting of land cover, terrain aspect, and equivalent latitude, each of which possessed different formats, data types, or statistical properties that could not be handled by the conventional classification algorithms available to this study. In the first experiment, four active layer depth classes were classified using ground-based measurements of the three variables with an accuracy of 83% compared with in situ soil-probe determination of permafrost active layer depth at over 500 field sites. This confirmed the environmental significance of the variables selected, and provided a baseline result to which a remote sensing classification could be compared. In the second experiment, evidence for each input variable was obtained from image processing of digital SPOT imagery and a photogrammetric digital elevation model, and used to classify active layer depth with an accuracy of 79%. These results suggest that the classification of evidence from remotely sensed measures of spectral response and topography may provide suitable indicators of permafrost active layer depth.
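
    Evidential reasoning typically combines mass functions from separate evidence sources (here land cover, aspect, and equivalent latitude) with Dempster's rule. A minimal sketch for singleton hypotheses with hypothetical masses; the cited system also handles compound hypotheses, which this sketch omits:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over the same singleton hypotheses. Mass on
    conflicting pairs is discarded and the remainder renormalized."""
    agreement = m1 * m2            # both sources back the same hypothesis
    k = 1.0 - agreement.sum()      # conflict: mass on disagreeing pairs
    return agreement / (1.0 - k)

# Hypothetical masses over shallow / medium / deep active-layer classes.
spectral = np.array([0.6, 0.3, 0.1])
terrain  = np.array([0.5, 0.4, 0.1])
print(np.round(dempster_combine(spectral, terrain), 3))  # [0.698 0.279 0.023]
```

    Combining sources sharpens the belief in the hypothesis they jointly support, which is how the system fuses variables too heterogeneous for a single statistical classifier.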

  1. Watermarking techniques for electronic delivery of remote sensing images

    Science.gov (United States)

    Barni, Mauro; Bartolini, Franco; Magli, Enrico; Olmo, Gabriella

    2002-09-01

    Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification.

  2. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    Science.gov (United States)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have been carried out over recent decades to further improve remote sensing, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have attracted much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a flexible model that can describe homogeneous, heterogeneous and extremely heterogeneous regions in an image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with few parameters. To prove its feasibility, a classification process was tested on fully polarized Synthetic Aperture Radar (SAR) images with this method. First, multilook polarimetric SAR data processing and a speckle filter were applied to reduce the influence of speckle on the classification result. The image was initially classified into sixteen classes by H/A/α decomposition. The ICM algorithm was then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.

  3. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Full Text Available Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use from multispectral remote sensing images based on change detection and a semi-supervised classifier. A satellite image can be automatically classified using only the prior land cover map and existing images; therefore, human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the area was classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases of less than 6%. This study proposed a simple semi-automatic approach for land cover classification using prior maps with satisfactory accuracy, which integrates the accuracy of visual interpretation and the performance of automatic classification methods. The method can conveniently be used for land cover mapping in areas lacking ground reference information or for identifying regions of rapid land cover variation (such as rapid urbanization).
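
    The change-detection step that makes the scheme semi-automatic can be sketched as follows: pixels whose spectra changed little between dates inherit their labels from the prior map and become free training data, while changed pixels are left for the classifier to decide. Threshold and band layout are illustrative:

```python
import numpy as np

def unchanged_training(prior_labels, old_img, new_img, threshold=0.1):
    """Select pseudo-training samples: keep prior-map labels only where
    the mean absolute spectral change between dates is small."""
    change = np.abs(new_img - old_img).mean(axis=-1)
    mask = change < threshold
    return mask, prior_labels[mask], new_img[mask]

prior = np.array([0, 0, 1, 1])
old = np.array([[0.2, 0.3], [0.2, 0.3], [0.6, 0.7], [0.6, 0.7]])
new = np.array([[0.21, 0.31], [0.55, 0.72], [0.61, 0.69], [0.60, 0.71]])
mask, labels, samples = unchanged_training(prior, old, new)
print(mask)    # [ True False  True  True] -- pixel 1 changed
print(labels)  # [0 1 1]
```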

  4. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have paralleled developments in technology, especially in sensors and platforms. Many images now offer high spatial and radiometric resolution and therefore carry a great deal of information. Analysis of vegetation objects, such as floristic composition, benefits greatly from these developments. Floristic composition can be interpreted using several methods, including pixel-based classification and object-based classification. The main problem for pixel-based methods on high-spatial-resolution imagery is the salt-and-pepper effect that appears in the classification result. The purpose of this research is to compare the effectiveness of pixel-based and object-based classification for vegetation composition mapping on high-resolution Worldview-2 imagery. The results show that pixel-based classification followed by a majority filter with a 5×5 kernel window gives the highest accuracy among the tested classifications: 73.32%, obtained from the Worldview-2 image radiometrically corrected to surface reflectance. For per-class accuracy, however, the object-based approach performs best among the tested methods. In terms of effectiveness, the pixel-based approach is more efficient than the object-based approach for vegetation composition mapping in the Tidar forest.
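The majority filtering behind the best pixel-based result can be sketched directly; the kernel size matches the 5×5 window reported in the abstract, while the function name is hypothetical.

```python
import numpy as np

def majority_filter(class_map, k=5):
    """Post-classification majority vote in a k x k window to suppress the
    salt-and-pepper effect of per-pixel classification on high-resolution
    imagery. class_map holds non-negative integer class labels."""
    pad = k // 2
    padded = np.pad(class_map, pad, mode='edge')  # replicate edges so borders keep context
    out = np.empty_like(class_map)
    H, W = class_map.shape
    for i in range(H):
        for j in range(W):
            window = padded[i:i + k, j:j + k].ravel()
            out[i, j] = np.bincount(window).argmax()  # most frequent class wins
    return out
```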

  5. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the complex backgrounds and the multiple land-cover classes or objects in HRS images make the task difficult. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
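The visual-word step (quantizing learnt local features into a codebook) is commonly implemented with k-means; here is a tiny sketch with naive initialization and hypothetical names. The CNN feature extraction itself is outside this sketch.

```python
import numpy as np

def build_visual_words(features, k, n_iter=20):
    """Tiny k-means that quantizes local (e.g. CNN) feature vectors into k
    visual words. Naive init: the first k features seed the centers."""
    centers = features[:k].astype(float).copy()
    for _ in range(n_iter):
        d = ((features[:, None, :] - centers[None]) ** 2).sum(-1)  # point-to-center distances
        assign = d.argmin(1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = features[assign == c].mean(0)  # recenter non-empty clusters
    return centers, assign
```

The `assign` vector is what a bag-of-words or correlogram representation would then histogram per image region.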

  6. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Full Text Available Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide the discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  7. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  8. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests image processing tools for improved visualization and better analysis of remotely sensed images. Methods for this purpose are already available in the literature, but their most important limitation is a lack of robustness. We propose an optimal method for enhancement of the images using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  9. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  10. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks

    Science.gov (United States)

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-01-01

    Simple Summary Understanding the spatio-temporal distribution of species habitats would facilitate wildlife resource management and conservation efforts. Existing methods perform poorly due to the limited availability of training samples. More recently, location-aware sensors have been widely used to track animal movements. The aim of the study was to generate suitability maps for bar-headed geese using movement data coupled with environmental parameters, such as remote sensing images and temperature data. To that end, we modified a deep convolutional neural network to accept multi-scale inputs. The results indicate that the proposed method can identify areas with dense concentrations of the goose species around Qinghai Lake. This approach might also be of interest for other species with different niche factors or in areas where biological survey data are scarce. Abstract With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images was proposed. First, we introduced a new density-based clustering method to identify stopovers from migratory birds’ movement data and generated classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and labeled them as positive samples if they overlapped stopovers. Second, a multi-convolution neural network model was proposed for extracting features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model then combined the features and produced the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results
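The patch-labeling rule described above (a 16 × 16 tile is a positive sample if it overlaps any stopover pixel) is simple enough to sketch; `label_patches` and its arguments are hypothetical names.

```python
import numpy as np

def label_patches(stopover_mask, patch=16):
    """Split an image grid into patch x patch tiles and mark a tile as a
    positive training sample if it overlaps any stopover pixel, the
    labeling rule described in the abstract. stopover_mask is a boolean
    raster derived from the clustered movement data."""
    H, W = stopover_mask.shape
    labels = {}
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            labels[(i, j)] = bool(stopover_mask[i:i + patch, j:j + patch].any())
    return labels
```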

  11. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial

  12. Hybrid image classification technique for land-cover mapping in the Arctic tundra, North Slope, Alaska

    Science.gov (United States)

    Chaudhuri, Debasish

    Remotely sensed image classification techniques are very useful for understanding vegetation patterns and species combinations in the vast and mostly inaccessible arctic region. Previous research on mapping land cover and vegetation in the remote areas of northern Alaska has yielded considerably lower accuracies than in other biomes. The unique arctic tundra environment, with its short growing season, cloud cover, low sun angles, and snow and ice cover, hinders the effectiveness of remote sensing studies. Most image classification research in this area, as reported in the literature, used traditional unsupervised clustering with Landsat MSS data. Previous researchers also emphasized that SPOT/HRV-XS data lacked the spectral resolution to identify the small arctic tundra vegetation parcels. Thus, there is a motivation and research need to apply a new classification technique to develop an updated, detailed and accurate vegetation map at a higher spatial resolution, i.e., from SPOT-5 data. Traditional classification techniques in remotely sensed image interpretation are based on spectral reflectance values, with the assumption that the training data are normally distributed; hence it is difficult to add ancillary data to classification procedures to improve accuracy. The purpose of this dissertation was to develop a hybrid image classification approach that effectively integrates ancillary information into the classification process, combining ISODATA clustering, a rule-based classifier and the Multilayer Perceptron (MLP) classifier, an artificial neural network (ANN). The main goal was to find the combination or sequence of classifiers for classifying tundra-type vegetation that yields higher accuracy than the existing classified vegetation map from SPOT data. Unsupervised ISODATA clustering and rule-based classification techniques were combined to produce an intermediate classified map which was

  13. Remote Sensing

    CERN Document Server

    Khorram, Siamak; Koch, Frank H; van der Wiele, Cynthia F

    2012-01-01

    Remote Sensing provides information on how remote sensing relates to the natural resources inventory, management, and monitoring, as well as environmental concerns. It explains the role of this new technology in current global challenges. "Remote Sensing" will discuss remotely sensed data application payloads and platforms, along with the methodologies involving image processing techniques as applied to remotely sensed data. This title provides information on image classification techniques and image registration, data integration, and data fusion techniques. How this technology applies to natural resources and environmental concerns will also be discussed.

  14. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification supports a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but real-time behavior is achieved for some applications through computation on NVIDIA multi-GPU platforms.
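A minimal sketch of the thresholded-DWT idea, assuming a single Haar level and hard thresholding (the paper applies the separable 2D DWT recursively per EMP component); all function names here are illustrative.

```python
import numpy as np

def haar2d(x):
    """One level of a separable 2D Haar DWT (orthonormal), for even-sized x."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2); d = (x[0::2] - x[1::2]) / np.sqrt(2)  # rows
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2); lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2); hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (exact reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2); x[1::2] = (a - d) / np.sqrt(2)
    return x

def dwt_denoise(img, thresh):
    """Hard-threshold the detail coefficients of one Haar level: small
    coefficients (mostly noise) are zeroed, then the image is rebuilt."""
    ll, lh, hl, hh = haar2d(img)
    lh, hl, hh = (np.where(np.abs(c) > thresh, c, 0.0) for c in (lh, hl, hh))
    return ihaar2d(ll, lh, hl, hh)
```

With `thresh=0` the transform reconstructs the input exactly, which is a convenient correctness check.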

  15. APPLICATION OF FUSION WITH SAR AND OPTICAL IMAGES IN LAND USE CLASSIFICATION BASED ON SVM

    Directory of Open Access Journals (Sweden)

    C. Bao

    2012-07-01

    Full Text Available With the increase of remote sensing data of multiple spatial resolutions, spectral resolutions and sources, data fusion technologies have been widely used in geological fields. Synthetic Aperture Radar (SAR) and optical cameras are currently the two most common sensors. Multi-spectral optical images express the spectral features of ground objects, while SAR images express backscatter information, and classification accuracy can be effectively improved by fusing the two kinds of images. In this paper, TerraSAR-X images and ALOS multi-spectral images were fused for land use classification. After preprocessing such as geometric rectification, radiometric correction and noise suppression, the two kinds of images were fused, and an SVM model identification method was then used for land use classification. Two different fusion methods were used: one joins the SAR image to the multi-spectral images as an additional band, and the other fuses the two kinds of images directly. The former raises the resolution and preserves texture information, while the latter preserves spectral feature information and improves the capability of distinguishing different features. The experimental results showed that classification accuracy using the fused images is better than using multi-spectral images alone; accuracy for roads, habitation and water bodies was significantly improved. Compared to traditional classification methods, the proposed approach of applying an SVM classifier to fused images achieves better results in identifying complicated land use classes, especially for small ground features.
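The first fusion variant (joining the SAR image to the multispectral stack as one band) can be sketched as simple band stacking with per-band rescaling, so no single band dominates a distance-based classifier such as an SVM; the rescaling choice and the name `stack_fusion` are assumptions, not the paper's preprocessing.

```python
import numpy as np

def stack_fusion(ms_img, sar_img):
    """Append a single-band SAR backscatter image to a multispectral cube
    as an extra band, after rescaling every band to [0, 1]."""
    def rescale(b):
        lo, hi = b.min(), b.max()
        return (b - lo) / (hi - lo) if hi > lo else np.zeros_like(b)
    bands = [rescale(ms_img[..., i]) for i in range(ms_img.shape[-1])]
    bands.append(rescale(sar_img))  # SAR joins as one more band
    return np.stack(bands, axis=-1)
```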

  16. A Novel Approach to Developing a Supervised Spatial Decision Support System for Image Classification: A Study of Paddy Rice Investigation

    Directory of Open Access Journals (Sweden)

    Shih-Hsun Chang

    2014-01-01

    Full Text Available Paddy rice area estimation via remote sensing techniques has been well established in recent years. Texture information and vegetation indicators are widely used to improve the classification accuracy of satellite images. Accordingly, this study employs texture information and vegetation indicators as ancillary information for classifying paddy rice in remote sensing images. In the first stage, the images are obtained using a remote sensing technique and ancillary information is employed to increase the accuracy of classification. In the second stage, an efficient supervised classifier is constructed and used to evaluate the ancillary information. In the third stage, linear discriminant analysis (LDA), a well-known method for classifying images into various categories, is introduced, and the particle swarm optimization (PSO) algorithm is employed to optimize the LDA classification outcomes and increase classification performance. In the fourth stage, we discuss the strategy of selecting different window sizes and analyze particle numbers and iteration numbers with the corresponding accuracy, and a rational strategy for combining the ancillary information is introduced. The PSO algorithm improves the accuracy rate from 82.26% to 89.31%, and the improved accuracy results in a much lower salt-and-pepper effect in the thematic map.
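The PSO step can be illustrated with a generic minimal swarm optimizer. The study maximizes LDA classification accuracy over parameter choices; here `f` is any objective to minimize (negate an accuracy to maximize it), and all names and coefficient values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_iter=50, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities mix inertia,
    cognitive and social terms."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```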

  17. Lossless Compression of Classification-Map Data

    Science.gov (United States)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than general-purpose image-data-compression algorithms do. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
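A toy version of prediction followed by run-length coding on one map row shows why classification maps compress well: they are piecewise constant, so the left-neighbour residual is mostly zero and runs compress well. The actual algorithm uses context modeling and entropy coding, which this sketch omits; the function names are hypothetical.

```python
def compress_map(row):
    """Left-neighbour prediction followed by run-length coding of the
    residuals. Returns (first_value, [(residual, run_length), ...])."""
    residuals = [row[i] - row[i - 1] for i in range(1, len(row))]
    runs = []
    for r in residuals:
        if runs and runs[-1][0] == r:
            runs[-1] = (r, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((r, 1))
    return row[0], runs

def decompress_map(first, runs):
    """Exact inverse: replay the residual runs from the first value."""
    row = [first]
    for r, n in runs:
        for _ in range(n):
            row.append(row[-1] + r)
    return row
```

Lossless round-tripping is the key property: decompression reproduces the row exactly.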

  18. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.
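The feature-space reconfiguration via singular value decomposition can be sketched as a truncated projection that reduces noise and dimensionality before building the classification tree; `svd_reconfigure` is a hypothetical helper, not the paper's code.

```python
import numpy as np

def svd_reconfigure(X, rank):
    """Center the feature matrix X (samples x features), then project onto
    its top `rank` right singular directions. Returns the reduced
    features, the projection basis, and the mean (for reconstruction)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:rank].T, Vt[:rank], mean
```

`Z @ V + mean` reconstructs the data exactly when the centered matrix truly has the chosen rank, and approximately (denoised) otherwise.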

  19. [Object-oriented remote sensing image classification in epidemiological studies of visceral leishmaniasis in urban areas].

    Science.gov (United States)

    Almeida, Andréa Sobral de; Werneck, Guilherme Loureiro; Resendes, Ana Paula da Costa

    2014-08-01

    This study explored the use of object-oriented classification of remote sensing imagery in epidemiological studies of visceral leishmaniasis (VL) in urban areas. To obtain temperature and environmental information, an object-oriented classification approach was applied to Landsat 5 TM scenes from the city of Teresina, Piauí State, Brazil. For 1993-1996, VL incidence rates correlated positively with census tracts covered by dense vegetation, grass/pasture, and bare soil and negatively with areas covered by water and densely populated areas. In 2001-2006, positive correlations were found with dense vegetation, grass/pasture, bare soil, and densely populated areas and negative correlations with occupied urban areas with some vegetation. Land surface temperature correlated negatively with VL incidence in both periods. Object-oriented classification can be useful to characterize landscape features associated with VL in urban areas and to help identify risk areas in order to prioritize interventions.

  20. Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models

    Science.gov (United States)

    Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.

    2017-09-01

    With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolution neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.

  1. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    Science.gov (United States)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in the spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.
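The Laplacian-merging step can be sketched with a toy harmonic label-propagation solver; the affinity graphs, the mixing weight `alpha`, and the name `joint_graph_ssl` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def joint_graph_ssl(spectral_W, spatial_W, labels, alpha=0.5):
    """Merge the Laplacians of a spectral and a spatial affinity graph
    (weight alpha) and propagate the few labeled samples to the rest via
    the harmonic solution f_U = -inv(L_UU) @ L_UL @ f_L. labels uses -1
    for unlabeled nodes; the merged graph must connect them to labels."""
    def laplacian(W):
        return np.diag(W.sum(1)) - W
    L = alpha * laplacian(spectral_W) + (1 - alpha) * laplacian(spatial_W)
    lab = labels >= 0
    classes = np.unique(labels[lab])
    F_L = (labels[lab][:, None] == classes[None, :]).astype(float)  # one-hot labels
    F_U = np.linalg.solve(L[np.ix_(~lab, ~lab)], -L[np.ix_(~lab, lab)] @ F_L)
    out = labels.copy()
    out[~lab] = classes[F_U.argmax(1)]  # each unlabeled node takes its strongest class
    return out
```

On a 4-node chain labeled only at its ends, the harmonic solution assigns each interior node to the nearer end's class.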

  2. GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    N. Jamshidpour

    2017-09-01

    Full Text Available Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in the spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.

  3. BOREAS TE-18 Landsat TM Physical Classification Image of the NSA

    Science.gov (United States)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 21-Jun-1995 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used in a way that is similar to training data to classify the image into the different land cover classes. The data are provided in a binary, image file format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    Science.gov (United States)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". Many business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage, but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from Taverna software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set for modeling large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent from the segmentation. The object definition is done completely by the software.

  5. Conjugate-Gradient Neural Networks in Classification of Multisource and Very-High-Dimensional Remote Sensing Data

    Science.gov (United States)

    Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.

    1993-01-01

    Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
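
The conjugate-gradient idea at the core of this variant can be sketched on a small quadratic model of the training objective. The code below is an illustrative pure-Python Fletcher-Reeves style solver for f(w) = 0.5 w'Aw - b'w (whose gradient is Aw - b); the 2x2 matrix and vector are made-up toy values, not data from the paper.

```python
def conjugate_gradient(A, b, w, iters=10):
    """Minimize f(w) = 0.5*w'Aw - b'w for symmetric positive definite A."""
    n = len(b)
    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    Aw = matvec(w)
    r = [b[i] - Aw[i] for i in range(n)]   # residual = negative gradient
    d = r[:]                               # initial search direction
    for _ in range(iters):
        rr = sum(x * x for x in r)
        if rr < 1e-12:                     # converged
            break
        Ad = matvec(d)
        alpha = rr / sum(d[i] * Ad[i] for i in range(n))  # exact line search
        w = [w[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        beta = sum(x * x for x in r) / rr                 # Fletcher-Reeves update
        d = [r[i] + beta * d[i] for i in range(n)]
    return w

A = [[4.0, 1.0], [1.0, 3.0]]   # toy SPD "curvature" matrix
b = [1.0, 2.0]
w = conjugate_gradient(A, b, [0.0, 0.0])
```

Unlike plain backpropagation with a fixed learning rate, each step takes an exact line search along a direction conjugate to the previous ones, which is the source of the training efficiency the abstract notes.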

  6. Classification of iconic images

    OpenAIRE

    Zrianina, Mariia; Kopf, Stephan

    2016-01-01

    Iconic images represent an abstract topic and use a presentation that is intuitively understood within a certain cultural context. For example, the abstract topic “global warming” may be represented by a polar bear standing alone on an ice floe. Such images are widely used in media and their automatic classification can help to identify high-level semantic concepts. This paper presents a system for the classification of iconic images. It uses a variation of the Bag of Visual Words approach wi...

  7. User Classification in Crowdsourcing-Based Cooperative Spectrum Sensing

    Directory of Open Access Journals (Sweden)

    Linbo Zhai

    2017-07-01

    This paper studies cooperative spectrum sensing based on crowdsourcing in cognitive radio networks. Since intelligent mobile devices such as smartphones and tablets can sense the wireless spectrum, channel sensing tasks can be assigned to these mobile users; this is referred to as the crowdsourcing method. However, some malicious mobile users may deliberately send false sensing reports for their own purposes. False sensing reports will influence decisions about the channel state. Therefore, it is necessary to classify mobile users in order to distinguish malicious ones. According to the sensing reports, mobile users should not simply be divided into two classes (honest and malicious), for two reasons: on the one hand, honest users in different positions may have different sensing outcomes, as shadowing, multi-path fading, and other issues may influence the sensing results; on the other hand, there may be more than one type of malicious user, each acting differently in the network. Therefore, it is necessary to classify mobile users into more than two classes. Due to the lack of prior information on the number of user classes, this paper casts the problem of mobile user classification as a dynamic clustering problem, which is NP-hard. The paper uses the interdistance-to-intradistance ratio of clusters as the fitness function and aims to maximize it. To solve this optimization problem, the paper proposes a distributed algorithm for user classification that obtains bounded close-to-optimal solutions, and analyzes the approximation ratio of the proposed algorithm. Simulations show that the distributed algorithm achieves higher performance than other algorithms.
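
The interdistance-to-intradistance fitness criterion described above can be sketched as follows: the mean centroid-to-centroid distance between clusters, divided by the mean distance of points to their own centroid. The toy two-dimensional "sensing report" vectors are illustrative values of our own, not data from the paper.

```python
def centroid(points):
    n, dim = len(points), len(points[0])
    return [sum(p[d] for p in points) / n for d in range(dim)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fitness(clusters):
    """Interdistance-to-intradistance ratio; larger = better separated clustering."""
    cents = [centroid(c) for c in clusters]
    inter = [dist(cents[i], cents[j])
             for i in range(len(cents)) for j in range(i + 1, len(cents))]
    intra = [dist(p, cents[i]) for i, c in enumerate(clusters) for p in c]
    mean_intra = sum(intra) / len(intra)
    if mean_intra == 0:
        return float("inf")
    return (sum(inter) / len(inter)) / mean_intra

# A clustering that groups similar reports scores higher than one that mixes them:
good = fitness([[[0.9, 0.8], [1.0, 0.9]], [[0.1, 0.2], [0.0, 0.1]]])
bad = fitness([[[0.9, 0.8], [0.1, 0.2]], [[1.0, 0.9], [0.0, 0.1]]])
```

A dynamic-clustering search would propose different partitions (and numbers of clusters) and keep the one maximizing this ratio.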

  8. Classification of Several Optically Complex Waters in China Using in Situ Remote Sensing Reflectance

    Directory of Open Access Journals (Sweden)

    Qian Shen

    2015-11-01

    Determining the dominant optically active substances in water bodies via classification can improve the accuracy of bio-optical and water quality parameters estimated by remote sensing. This study provides four robust centroid sets from in situ remote sensing reflectance (Rrs(λ)) data representing typical optical types, obtained by plugging different similarity measures into fuzzy c-means (FCM) clustering. Four typical types of waters were studied: (1) highly mixed eutrophic waters, with the proportions of absorption by colored dissolved organic matter (CDOM), phytoplankton, and non-living particulate matter at approximately 20%, 20%, and 60%, respectively; (2) CDOM-dominated relatively clear waters, with approximately 45% of absorption by CDOM; (3) nonliving-solids-dominated waters, with approximately 88% of absorption by nonliving particulate matter; and (4) cyanobacteria-composed scum. We also simulated spectra from seven ocean color satellite sensors to assess their classification ability. POLarization and Directionality of the Earth's Reflectances (POLDER), Sentinel-2A, and the MEdium Resolution Imaging Spectrometer (MERIS) were found to perform better than the rest. Further, a classification tree for MERIS, integrating the characteristics of Rrs(709)/Rrs(681), Rrs(560)/Rrs(709), Rrs(560)/Rrs(620), and Rrs(709)/Rrs(761), is also proposed in this paper. The overall accuracy and Kappa coefficient of the proposed classification tree are 76.2% and 0.632, respectively.
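
A decision tree over those four MERIS band ratios might look like the sketch below. The threshold values are hypothetical placeholders of ours, not the ones derived in the paper; only the ratio features follow the text.

```python
def classify_water(rrs):
    """rrs: dict mapping MERIS band center (nm) to remote sensing reflectance."""
    r709_681 = rrs[709] / rrs[681]
    r560_620 = rrs[560] / rrs[620]
    r709_761 = rrs[709] / rrs[761]
    if r709_761 > 10.0:      # hypothetical: extreme NIR peak -> floating scum
        return "cyanobacteria scum"
    if r709_681 > 1.0:       # hypothetical: red-edge peak from dense phytoplankton
        return "highly mixed eutrophic"
    if r560_620 > 1.5:       # hypothetical: green dominance over red
        return "CDOM-dominated clear"
    return "nonliving-solids-dominated"

# Toy spectrum with a modest red-edge peak:
sample = {560: 0.012, 620: 0.010, 681: 0.009, 709: 0.011, 761: 0.002}
label = classify_water(sample)
```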

  9. Experiment on Classification of Remote Sensing Images Based on an Improved BP Algorithm

    Institute of Scientific and Technical Information of China (English)

    石丽

    2014-01-01

    BP neural network classification is a new pattern recognition method with promising applications in remote sensing image processing. After presenting the standard BP algorithm and its improvement, the Levenberg-Marquardt (LM) algorithm, this paper describes the process of classifying remote sensing images with a BP neural network and reports experiments on a BP-based classification algorithm implemented on the MATLAB platform. The experimental results demonstrate that classification based on a BP neural network is an effective image classification method.

  10. Object-Oriented Semisupervised Classification of VHR Images by Combining MedLDA and a Bilateral Filter

    Directory of Open Access Journals (Sweden)

    Shi He

    2015-01-01

    A Bayesian hierarchical model is presented to classify very high resolution (VHR) images in a semisupervised manner, in which both maximum entropy discrimination latent Dirichlet allocation (MedLDA) and a bilateral filter are combined into a novel application framework. The primary contribution of this paper is to nullify the disadvantages of traditional probabilistic topic models regarding pixel-level supervised information and to achieve effective classification of VHR remote sensing images. The framework consists of the following two iterative steps. In the training stage, the model utilizes the central labeled pixel and its neighborhood, as a square labeled image object, to train the classifiers. In the classification stage, each central unlabeled pixel with its neighborhood, as an unlabeled object, is assigned the user-provided geo-object class label with the maximum posterior probability. Gibbs sampling is adopted for model inference. The experimental results demonstrate that the proposed method outperforms two classical SVM-based supervised classification methods and probabilistic-topic-model-based classification methods.

  11. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
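
One widely used sparsity-exploiting reconstruction idea of the kind surveyed here is iterative soft thresholding (ISTA) for the l1-regularized least-squares problem. The sketch below runs it on a tiny underdetermined toy system (fewer measurements than unknowns); the matrix, data, and parameters are illustrative, not clinical.

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, y, lam=0.01, step=0.1, iters=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        resid = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x

# 2 measurements, 4 unknowns: recovery relies on the sparsity penalty.
A = [[1.0, 0.5, 0.3, -0.4],
     [0.2, -1.0, 0.6, 0.8]]
y = [1.0, 0.2]
x_hat = ista(A, y)
fit = [sum(A[i][j] * x_hat[j] for j in range(4)) - y[i] for i in range(2)]
```

The gradient step fits the measurements; the thresholding step pushes small coefficients to exactly zero, which is the "exploitation of compressibility" the abstract refers to.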

  12. SENSING URBAN LAND-USE PATTERNS BY INTEGRATING GOOGLE TENSORFLOW AND SCENE-CLASSIFICATION MODELS

    Directory of Open Access Journals (Sweden)

    Y. Yao

    2017-09-01

    With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of deep learning in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google TensorFlow framework, a powerful convolutional neural network (CNN) library was created. First, the transferred model was pre-trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors from standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern at the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method efficiently obtained acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.
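
The large-random-patch idea can be sketched as sampling several patch windows at random offsets inside a parcel's bounding box, so the downstream classifier sees the parcel at mixed positions rather than on one fixed grid. Everything below (image, bounds, sizes) is a toy placeholder, not the authors' implementation.

```python
import random

def sample_patches(image, top, left, bottom, right, patch, n, rng):
    """Sample n patch-sized windows lying fully inside the given bounding box."""
    patches = []
    for _ in range(n):
        r = rng.randint(top, bottom - patch)    # randint is inclusive of both ends
        c = rng.randint(left, right - patch)
        patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

rng = random.Random(42)                          # fixed seed for reproducibility
image = [[(r * 31 + c) % 255 for c in range(64)] for r in range(64)]  # fake raster
patches = sample_patches(image, 8, 8, 56, 56, 16, n=5, rng=rng)
```

Each patch would then be fed through the pre-trained CNN to produce one feature vector, and the vectors pooled per parcel before the random-forest step.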

  13. Random Forests as a tool for estimating uncertainty at pixel-level in SAR image classification

    DEFF Research Database (Denmark)

    Loosvelt, Lien; Peters, Jan; Skriver, Henning

    2012-01-01

    We introduce Random Forests for the probabilistic mapping of vegetation from high-dimensional remote sensing data and present a comprehensive methodology to assess and analyze classification uncertainty based on the local probabilities of class membership. We apply this method to SAR image data...

  14. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military applications, disaster relief, and other fields. In this paper, we propose a novel joint multi-scale convolutional neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from traditional convolutional neural networks, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, comprising a multi-channel feature extractor, joint multi-scale feature fusion, and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted on two small public datasets (UCM and SIRI). Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness, with average accuracies of 89.3% and 88.3% on the two datasets.

  15. Remote sensing image fusion in the context of Digital Earth

    International Nuclear Information System (INIS)

    Pohl, C

    2014-01-01

    The increase in the number of operational Earth observation satellites gives remote sensing image fusion a new boost. As a powerful tool to integrate images from different sensors, it enables multi-scale, multi-temporal and multi-source information extraction. Image fusion aims at providing results that cannot be obtained from a single data source alone. Instead, it enables feature and information mining of higher reliability and availability. The process required to prepare remote sensing images for image fusion comprises most of the necessary steps to feed the database of Digital Earth. The virtual representation of the planet uses data and information that is referenced and corrected to suit interpretation and decision-making. The same pre-requisite is valid for image fusion, the outcome of which can directly flow into a geographical information system. The assessment and description of the quality of the results remains critical. Depending on the application and information to be extracted from multi-source images, different approaches are necessary. This paper describes the process of image fusion based on a fusion and classification experiment, explains the necessary quality measures involved, and shows with this example which criteria have to be considered if the results of image fusion are going to be used in Digital Earth.

  16. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    Science.gov (United States)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture; (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.
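
The backpropagation training loop reviewed above reduces, in miniature, to the following pure-Python sketch of a two-layer sigmoid network trained by gradient descent on a toy OR pattern (illustrative data, not multispectral pixels; the hidden-layer size and learning rate stand in for the "obscure initialization values" the survey mentions).

```python
import math
import random

random.seed(1)                           # fixed seed: the survey notes result
X = [[0, 0], [0, 1], [1, 0], [1, 1]]     # inconsistency from random init weights
T = [0, 1, 1, 1]                         # toy OR targets
H = 3                                    # hidden-layer size (a guessed value)
lr = 0.5                                 # learning rate (another guessed value)
sig = lambda a: 1.0 / (1.0 + math.exp(-a))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

loss_history = []
for epoch in range(2000):
    loss = 0.0
    for x, t in zip(X, T):
        # forward pass
        h = [sig(sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(H)]
        y = sig(sum(W2[i] * h[i] for i in range(H)) + b2)
        loss += 0.5 * (y - t) ** 2
        # backward pass: delta rule at the output, then at the hidden layer
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(H)]
        for i in range(H):
            W2[i] -= lr * dy * h[i]
            b1[i] -= lr * dh[i]
            for j in range(2):
                W1[i][j] -= lr * dh[i] * x[j]
        b2 -= lr * dy
    loss_history.append(loss)
```

The slow convergence of exactly this loop, at realistic image sizes, is the "slow training time" disadvantage cited above.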

  17. Remote Sensing and Imaging Physics

    Science.gov (United States)

    2012-03-07

    Program Manager, AFOSR/RSE, Air Force Research Laboratory. Remote Sensing and Imaging Physics, 7 March 2012. Topics covered: imaging of space objects; information without imaging; predicting the location of space objects; remote sensing in extreme conditions; propagation.

  18. A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field methods and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels exaggerates the contribution of the central pixel to the whole local window. Besides, Geary's C has been proven to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel when computing statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g., the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
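
The neighbourhood statistics involved can be illustrated with a minimal local Geary's-C-style measure: the mean squared grey-level difference between a pixel and its 8-neighbours, where low values mean strong local similarity. The tiny image is a made-up example, not the developed method itself.

```python
def local_geary(img, r, c):
    """Mean squared grey-level difference between pixel (r, c) and its
    in-bounds 8-neighbours; low values indicate strong local similarity."""
    diffs = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                diffs.append((img[r][c] - img[rr][cc]) ** 2)
    return sum(diffs) / len(diffs)

img = [
    [10, 10, 10, 90],
    [10, 10, 10, 90],
    [10, 10, 10, 90],
]
homogeneous = local_geary(img, 1, 1)   # interior of a uniform region
edge = local_geary(img, 1, 2)          # pixel on the 10/90 boundary
```

Feeding such per-pixel statistics alongside the raw spectral values is the kind of multi-feature input the SVM step above consumes.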

  19. Lightweight Biometric Sensing for Walker Classification Using Narrowband RF Links

    Directory of Open Access Journals (Sweden)

    Tong Liu

    2017-12-01

    This article proposes a lightweight biometric sensing system using ubiquitous narrowband radio frequency (RF) links for path-dependent walker classification. The fluctuating received signal strength (RSS) sequence generated by human motion is used for feature representation. To capture the most discriminative characteristics of individuals, a three-layer RF sensing network is organized, building multiple sampling links at the most common heights of the upper limbs, thighs, and lower legs. The optimal parameters of the sensing configuration, such as the height of link location and the number of fused links, are investigated to improve sensory data distinctions among subjects, and the experimental results suggest that synergistic sensing using multiple links contributes to better performance. This is a new consideration in using RF links to build a biometric sensing system. In addition, two types of classification methods involving vector quantization (VQ) and hidden Markov models (HMMs) are developed and compared for closed-set walker recognition and verification. Experimental studies in indoor line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios are conducted to validate the proposed method.
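
The vector-quantization half of the classification stage can be sketched as follows: each enrolled walker has a codebook of representative RSS feature vectors, and a test sequence is assigned to the walker whose codebook quantizes it with the least distortion. The codebooks and readings below are made-up toy values, not measurements from the article.

```python
def distortion(seq, codebook):
    """Average squared distance from each RSS vector to its nearest codeword."""
    total = 0.0
    for v in seq:
        total += min(sum((a - b) ** 2 for a, b in zip(v, w)) for w in codebook)
    return total / len(seq)

def classify(seq, codebooks):
    """Closed-set recognition: pick the walker with minimum distortion."""
    return min(codebooks, key=lambda name: distortion(seq, codebooks[name]))

codebooks = {  # per-walker codewords over three fused links (upper/thigh/leg), dBm
    "walker_A": [[-50.0, -60.0, -55.0], [-52.0, -61.0, -57.0]],
    "walker_B": [[-70.0, -65.0, -72.0], [-68.0, -66.0, -70.0]],
}
test_seq = [[-51.0, -60.5, -56.0], [-50.5, -61.0, -55.5]]
who = classify(test_seq, codebooks)
```

The HMM alternative mentioned above adds temporal structure on top of this, modeling how the RSS features evolve as the subject walks along the path.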

  2. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st-order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.

  3. Adaptive threshold-based shadow masking for across-date settlement classification of panchromatic quickBird images

    CSIR Research Space (South Africa)

    Luus, FPS

    2014-06-01

    Published in IEEE Geoscience and Remote Sensing Letters, Vol. 11, No. 6, June 2014, p. 1153: "Adaptive Threshold-Based Shadow Masking for Across-Date Settlement Classification of Panchromatic QuickBird Images", by F. P. S. Luus, F. van den Bergh, and B. T. J. Maharaj...

  4. Spectral reflectance of carbonate sediments and application to remote sensing classification of benthic habitats

    Science.gov (United States)

    Louchard, Eric Michael

    Remote sensing is a valuable tool in marine research that has advanced to the point that images from shallow waters can be used to identify different seafloor types and create maps of benthic habitats. A major goal of this dissertation is to examine differences in spectral reflectance and create new methods of analyzing shallow water remote sensing data to identify different seafloor types quickly and accurately. Carbonate sediments were used as a model system as they presented a relatively uniform, smooth surface for measurement and are a major bottom type in tropical coral reef systems. Experimental results found that sediment reflectance varied in shape and magnitude depending on pigment content, but only varied in magnitude with variations in grain size and shape. Derivative analysis of the reflectance spectra identified wavelength regions that correlate to chlorophyll a and chlorophyllide a as well as accessory pigments, indicating differences in microbial community structure. Derivative peak height also correlated to pigment content in the sediments. In remote sensing data, chlorophyll a, chlorophyllide a, and some xanthophylls were identified in derivative spectra and could be quantified from second derivative peak height. Most accessory pigments were attenuated by the water column, however, and could not be used to quantify pigments in sediments from remote sensing images. Radiative transfer modeling of remote sensing reflectance showed that there was sufficient spectral variation to separate major sediment types, such as ooid shoals and sediment with microbial layers, from different densities of seagrass and pavement bottom communities. Both supervised classification with a spectral library and unsupervised classification with principal component analysis were used to create maps of seafloor type. The results of the experiments were promising; classified seafloor types correlated with ground truth observations taken from underwater video and were
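
The derivative analysis described above can be illustrated with a central finite-difference second derivative of a reflectance spectrum, whose peak height is then read off near an absorption feature. The toy spectrum below (a flat background with one dip, loosely standing in for a pigment absorption band) is illustrative, not measured sediment data.

```python
def second_derivative(values, step=1.0):
    """Central-difference d2R/dlambda2 for interior samples of a spectrum."""
    return [(values[i - 1] - 2.0 * values[i] + values[i + 1]) / step ** 2
            for i in range(1, len(values) - 1)]

# Toy reflectance samples with an absorption dip at index 3:
spectrum = [0.30, 0.30, 0.30, 0.18, 0.30, 0.30, 0.30]
d2 = second_derivative(spectrum)
peak_index = max(range(len(d2)), key=lambda i: d2[i]) + 1  # +1: interior offset
```

In the dissertation's terms, the height of that second-derivative peak is the quantity correlated with pigment content.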

  5. Classification of line features from remote sensing data

    OpenAIRE

    Kolankiewiczová, Soňa

    2009-01-01

    This work deals with object-based classification of high resolution data. The aim of the thesis is to develop an acceptable classification process for linear features (roads and railways) from high-resolution satellite images. The first part presents different approaches to linear feature classification and compares the theoretical differences between object-oriented and pixel-based classification. Linear feature classification was carried out in the second part. The high-resolution...

  6. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    Science.gov (United States)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings, and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation, and the results could be further improved. This paper used a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to obtain high resolution output. In the network, we added BN layers, which are more conducive to the backward pass. Moreover, after the upsampling convolutions, we added dropout layers to prevent overfitting. Together these modifications yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
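
The batch-normalization step the authors add can be sketched for a single feature channel: activations are shifted to zero mean and scaled to unit variance, then rescaled by learnable gamma/beta parameters. The sketch below is a plain forward pass on toy activations, not the authors' network.

```python
def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a list of scalar activations for one feature channel."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    # eps keeps the division stable when the batch variance is near zero
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

acts = [10.0, 12.0, 8.0, 14.0]          # toy pre-activation values
normed = batch_norm(acts)
mean_out = sum(normed) / len(normed)
var_out = sum(x * x for x in normed) / len(normed)
```

Keeping layer inputs in this standardized range is what makes gradients better behaved on the backward pass, the property the abstract appeals to.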

  7. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  8. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Bin Hou

    2016-08-01

    Characterization of up-to-date information on the Earth's surface is an important application, providing insights for urban planning, resource monitoring, and environmental studies. A large number of change detection (CD) methods have been developed to address it by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images poses further challenges to traditional CD methods and opportunities for object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI), extracted from difference images, are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying a random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The method's effectiveness was checked using visual and numerical evaluation.

  9. Decision tree approach for classification of remotely sensed satellite ...

    Indian Academy of Sciences (India)

    sensed satellite data using open source support. Richa Sharma .... Decision tree classification techniques have been .... the USGS Earth Resource Observation Systems. (EROS) ... for shallow water, 11% were for sparse and dense built-up ...

  10. A Hidden Markov Models Approach for Crop Classification: Linking Crop Phenology to Time Series of Multi-Sensor Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Sofia Siachalou

    2015-03-01

    Vegetation monitoring and mapping based on multi-temporal imagery has recently received much attention due to the plethora of medium-high spatial resolution satellites and the improved classification accuracies attained compared to uni-temporal approaches. Efficient image processing strategies are needed to exploit the phenological information present in temporal image sequences and to limit data redundancy and computational complexity. Within this framework, we apply the theory of Hidden Markov Models to crop classification, based on time-series analysis of phenological states inferred from a sequence of remote sensing observations. More specifically, we model the dynamics of vegetation over an agricultural area of Greece, characterized by spatio-temporal heterogeneity and small-sized fields, using RapidEye and Landsat ETM+ imagery. In addition, the classification performance of image sequences with variable spatial and temporal characteristics is evaluated and compared. The classification model considering one RapidEye and four pan-sharpened Landsat ETM+ images was found superior, resulting in a conditional kappa from 0.77 to 0.94 per class and an overall accuracy of 89.7%. The results highlight the potential of the method for operational crop mapping in Euro-Mediterranean areas and provide some hints on optimal image acquisition windows for major crop types in Greece.
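
The HMM machinery behind such a classifier can be sketched with Viterbi decoding of a hidden phenological state sequence from a series of coarse vegetation-index observations. All states, probabilities, and observations below are illustrative placeholders, not values estimated in the study.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("bare", "growth", "peak")      # toy phenological states
obs = ("low", "mid", "high")             # coarse NDVI levels across acquisitions
start_p = {"bare": 0.8, "growth": 0.15, "peak": 0.05}
trans_p = {"bare":   {"bare": 0.5, "growth": 0.4, "peak": 0.1},
           "growth": {"bare": 0.1, "growth": 0.5, "peak": 0.4},
           "peak":   {"bare": 0.3, "growth": 0.2, "peak": 0.5}}
emit_p = {"bare":   {"low": 0.7, "mid": 0.2, "high": 0.1},
          "growth": {"low": 0.2, "mid": 0.6, "high": 0.2},
          "peak":   {"low": 0.1, "mid": 0.2, "high": 0.7}}
decoded = viterbi(obs, states, start_p, trans_p, emit_p)
```

In a crop-mapping setting, each crop type would get its own HMM of this form, and a pixel's observation sequence would be assigned to the crop whose model explains it best.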

  11. Decision tree approach for classification of remotely sensed satellite

    Indian Academy of Sciences (India)

    A decision tree classification (DTC) algorithm for classification of remotely sensed satellite data (Landsat TM) using open source support. The decision tree is constructed by recursively partitioning the spectral distribution of the training dataset using WEKA, open source ...

  12. Application of Multi-Source Remote Sensing Image in Yunnan Province Grassland Resources Investigation

    Science.gov (United States)

    Li, J.; Wen, G.; Li, D.

    2018-04-01

    To master background information on grassland resource utilization and ecological conditions in Yunnan province, and to improve refined grassland management capacity, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional grassland survey method is ground-based investigation, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, by contrast, is low-cost, wide-ranging and efficient, and can objectively reflect the present state of grassland resources. It has become an indispensable grassland monitoring technology and data source, and has gained increasing recognition and application in grassland resource monitoring research. This paper studies the application of multi-source remote sensing imagery in the Yunnan province grassland resource investigation. First, grassland thematic information was extracted and field investigation conducted through segmentation of high-spatial-resolution BJ-2 imagery. Second, grassland types were classified and the degree of grassland degradation was evaluated using the high-resolution characteristics of Landsat 8 imagery. Third, a grass yield model and quality classification were obtained using the high-resolution, wide-swath characteristics of MODIS imagery together with field sample data. Finally, qualitative field analysis of the grassland was performed with UAV remote sensing imagery. The project implementation demonstrates that multi-source remote sensing data can be applied to the grassland resource investigation in Yunnan province and is an indispensable method.

  13. Ontology-Guided Image Interpretation for GEOBIA of High Spatial Resolution Remote Sense Imagery: A Coastal Area Case Study

    Directory of Open Access Journals (Sweden)

    Helingjie Huang

    2017-03-01

    Full Text Available Image interpretation is a major topic in the remote sensing community. With the increasing acquisition of high spatial resolution (HSR) remotely sensed images, incorporating geographic object-based image analysis (GEOBIA) is becoming an important sub-discipline for improving remote sensing applications. The idea of integrating the human ability to understand images inspires research related to introducing expert knowledge into image object-based interpretation. The relevant work involved three parts: (1) identification and formalization of domain knowledge; (2) image segmentation and feature extraction; and (3) matching image objects with geographic concepts. This paper presents a novel way of combining multi-scale segmented image objects with geographic concepts to express context in an ontology-guided image interpretation. Spectral and geometric features of a single object are extracted after segmentation, and topological relationships are also used in the interpretation. The Web Ontology Language query language (OWL-QL) is used to formalize domain knowledge, and the interpretation matching procedure is then implemented via OWL-QL query answering. Compared with a supervised classification, which does not consider context, the proposed method is validated on two HSR images of coastal areas in China. Both the number of interpreted classes increased (19 classes versus 10 in Case 1, and 12 versus seven in Case 2) and the overall accuracy improved (0.77 versus 0.55 in Case 1, and 0.86 versus 0.65 in Case 2). The additional context of the image objects improved accuracy during image classification. The proposed approach shows the pivotal role of ontology in knowledge-guided interpretation.

  14. Remote Sensing Image Registration Using Multiple Image Features

    Directory of Open Access Journals (Sweden)

    Kun Yang

    2017-06-01

    Full Text Available Remote sensing image registration plays an important role in military and civilian fields, such as natural disaster damage assessment, military damage assessment and ground target identification. However, due to ground relief variations and imaging viewpoint changes, non-rigid geometric distortion occurs between remote sensing images with different viewpoints, which further increases the difficulty of remote sensing image registration. To address this problem, we propose a multi-viewpoint remote sensing image registration method with the following contributions. (i) A finite mixture model based on multiple features is constructed to deal with different types of image features. (ii) Three features are combined and substituted into the mixture model to complement one another: the Euclidean distance and shape context measure the similarity of the geometric structure, while the SIFT (scale-invariant feature transform) distance, endowed with intensity information, measures the scale-space extrema. (iii) To prevent an ill-posed problem, a geometric constraint term is introduced into the L2E-based energy function to better constrain the non-rigid transformation. We evaluated the performance of the proposed method on three series of remote sensing images obtained from an unmanned aerial vehicle (UAV) and Google Earth, and compared it with five state-of-the-art methods; our method shows the best alignments in most cases.

  15. Deep learning decision fusion for the classification of urban remote sensing data

    Science.gov (United States)

    Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter

    2018-01-01

    Multisensor data fusion is one of the most popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented for multisensor urban remote sensing data classification. After deep features are extracted using joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest through the joint use of the sensors. Finally, context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
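
    The decision-level fusion step can be illustrated with a minimal sketch: each sensor-specific classifier outputs class posteriors, and the fused decision is a weighted average of them. The class names, probabilities and weights below are invented for illustration.

```python
import numpy as np

# Hypothetical per-sensor class posteriors for one object (classes:
# building, tree, road), e.g. from classifiers trained on different sensors.
p_optical = np.array([0.70, 0.20, 0.10])
p_lidar   = np.array([0.30, 0.60, 0.10])

def fuse(posteriors, weights):
    """Decision-level fusion as a weighted average of classifier posteriors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the classifier weights
    fused = w @ np.asarray(posteriors)
    return fused / fused.sum()           # renormalize to a distribution

fused = fuse([p_optical, p_lidar], weights=[0.5, 0.5])
print(fused, fused.argmax())  # fused posterior and the winning class index
```

    More elaborate schemes weight each classifier by its estimated reliability; the uniform weights here are just a placeholder.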

  16. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... on characterizing human faces and emphysema disease in lung CT images....

  17. Semi-Automatic Classification Of Histopathological Images: Dealing With Inter-Slide Variations

    Directory of Open Access Journals (Sweden)

    Michael Gadermayr

    2016-06-01

    With 50 labelled sample patches of a certain whole-slide image available, the overall classification rate increased from 92 % to 98 % through the interactive labelling step. Even with only 20 labelled patches, accuracy already increased to 97 %. Without a pre-trained model, if training is performed on target-domain data only, accuracies of 88 % (20 labelled samples) and 95 % (50 labelled samples) were obtained. If enough target-domain data was available (about 20 images), the amount of source-domain data was of minor relevance. The difference in outcome between a source-domain training set containing 100 patches from one whole-slide image and a set containing 700 patches from seven images was below 1 %. Conversely, without target-domain data, the difference in accuracy between these two settings was 10 % (82 % compared to 92 %). The execution runtime between two interaction steps is significantly below one second (0.23 s), which is an important usability criterion. It proved beneficial to select specific target-domain data in an active-learning sense, based on the currently available trained model. While the experimental evaluation provided strong empirical evidence of increased classification performance with the proposed method, the additional manual effort can be kept at a low level. Labelling e.g. 20 patches per slide is surely less time-consuming than validating a complete whole-slide image processed with a fully automatic, but less reliable, segmentation approach. Finally, it should be highlighted that the proposed interaction protocol could easily be adapted to other histopathological classification or segmentation tasks, including implementation in a clinical system.

  18. Multi-granularity synthesis segmentation for high spatial resolution Remote sensing images

    International Nuclear Information System (INIS)

    Yi, Lina; Liu, Pengfei; Qiao, Xiaojun; Zhang, Xiaoning; Gao, Yuan; Feng, Boyan

    2014-01-01

    A traditional segmentation method can only partition an image in a single granularity space, with segmentation accuracy limited to that space. This paper proposes a multi-granularity synthesis segmentation method for high-spatial-resolution remote sensing images based on a quotient space model. First, we divide the whole image area into multiple granules (regions), each consisting of ground objects with a similar optimal segmentation scale, and then select and synthesize the sub-optimal segmentations of each region to obtain the final segmentation result. To validate this method, a land cover category map is used to guide the scale synthesis of multi-scale image segmentations for Quickbird image land use classification. First, the image is coarsely divided into multiple regions, each belonging to a certain land cover category. Then, multi-scale segmentation results are generated by a Mumford-Shah function based region-merging method. For each land cover category, the optimal segmentation scale is selected by a supervised segmentation accuracy assessment. Finally, the optimal-scale segmentation results are synthesized under the guidance of the land cover categories. Experiments show that multi-granularity synthesis segmentation produces more accurate segmentation than a single granularity space and benefits the classification.

  19. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network

    Science.gov (United States)

    Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John

    2018-05-01

    This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSIs) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions, using a patch-wise training strategy to better exploit the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
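
    The patch-wise strategy mentioned above hinges on extracting a spatial neighborhood around each labelled pixel; a minimal numpy sketch, with toy data and a hypothetical patch size:

```python
import numpy as np

def extract_patches(hsi, coords, size=5):
    """Extract size x size spatial patches (all bands) centered on labelled
    pixels, mirror-padding the border so edge pixels get full patches."""
    r = size // 2
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode="reflect")
    return np.stack([padded[i:i + size, j:j + size, :] for i, j in coords])

# Toy "hyperspectral" cube: 10 x 10 pixels, 4 bands
hsi = np.random.rand(10, 10, 4)
patches = extract_patches(hsi, coords=[(0, 0), (5, 5), (9, 9)], size=5)
print(patches.shape)  # (3, 5, 5, 4)
```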

  20. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive CS acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements, without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
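
    The acquisition-plus-quantization pipeline can be sketched in a few lines: random CS measurements of a toy signal, followed by a uniform scalar quantizer that needs no prior knowledge of the image. The 6-bit step choice is illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CS acquisition: y = Phi x, with a Gaussian sensing matrix
n, m = 64, 32                       # signal length, number of measurements
x = rng.standard_normal(n)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Uniform scalar quantization of the measurements, with no prior knowledge
# of the image: the step is chosen from the data range alone.
step = (y.max() - y.min()) / 2**6   # 6-bit quantizer, an illustrative choice
q = np.round(y / step)              # quantization indices (what would be coded)
y_hat = q * step                    # dequantized measurements

print(float(np.abs(y - y_hat).max()) <= step / 2 + 1e-12)  # True
```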

  1. Classification of agricultural fields using time series of dual polarimetry TerraSAR-X images

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2014-10-01

    Full Text Available Due to its special imaging characteristics, Synthetic Aperture Radar (SAR) has become an important source of information for a variety of remote sensing applications dealing with environmental changes. SAR images contain information about both phase and intensity in different polarization modes, making them sensitive to the geometrical structure and physical properties of targets, such as dielectric constant and plant water content. In this study, we investigate multi-temporal changes occurring to different crop types due to phenological changes, using high-resolution TerraSAR-X imagery. The dataset includes 17 dual-polarimetry TSX acquisitions from June 2012 to August 2013 in Lorestan province, Iran. Several features are extracted from the polarized data and classified using a support vector machine (SVM) classifier. The training samples and the different features employed in classification are also assessed. Results show a satisfactory classification accuracy, with a kappa coefficient of about 0.91.
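
    The accuracy above is reported as a kappa coefficient; for reference, here is a minimal computation of Cohen's kappa from predicted and true labels (toy labels, not the study's data):

```python
import numpy as np

def kappa(y_true, y_pred, n_classes):
    """Cohen's kappa from a confusion matrix, the accuracy measure quoted above."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 0, 1, 1, 2, 2, 1, 1]
print(round(kappa(y_true, y_pred, 3), 3))  # ~0.81 on this toy example
```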

  2. Accuracy assessment between different image classification ...

    African Journals Online (AJOL)

    What image classification does is assign each pixel to the land cover and land use type with the most similar spectral signature. However, different methods or algorithms of image classification applied to the same data set may produce appreciably different results in the sizes, shapes and areas of ...

  3. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in object-oriented information extraction from high-resolution remote sensing images. The accuracy of thematic remote sensing information depends on this extraction. On the basis of WorldView-2 high-resolution data, the following processes were conducted to determine optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape-factor and compactness-factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters, and a hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment through reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  4. Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification

    Science.gov (United States)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    Given that deep convolutional neural networks have a strong ability for feature learning and feature expression, an exploratory study is conducted on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution in the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and a pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined with multi-kernel learning and an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.
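
    The multi-kernel step described above amounts to building one kernel per feature group and combining them; a minimal sketch with random stand-in features and a fixed convex combination (real multi-kernel learning would optimize the weights):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Hypothetical per-feature-group data (e.g. spectral, AlexNet, GLCM texture);
# dimensions, gammas and weights are all placeholders.
rng = np.random.default_rng(1)
X_spec, X_deep, X_glcm = rng.random((20, 4)), rng.random((20, 8)), rng.random((20, 6))
K = (0.5 * rbf_kernel(X_spec, X_spec, 1.0)
     + 0.3 * rbf_kernel(X_deep, X_deep, 0.5)
     + 0.2 * rbf_kernel(X_glcm, X_glcm, 0.8))

# A valid combined kernel stays symmetric and positive semi-definite,
# so it can be fed to any kernel classifier such as an SVM.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-8)  # True True
```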

  5. ALEXNET FEATURE EXTRACTION AND MULTI-KERNEL LEARNING FOR OBJECTORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    L. Ding

    2018-04-01

    Full Text Available Given that deep convolutional neural networks have a strong ability for feature learning and feature expression, an exploratory study is conducted on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution in the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and a pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined with multi-kernel learning and an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.

  6. The edge-preservation multi-classifier relearning framework for the classification of high-resolution remotely sensed imagery

    Science.gov (United States)

    Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya

    2018-04-01

    In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve spatial details in the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, considering their complementary characteristics. To better characterize complex scenes in remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantifies both the landscape composition and the spatial configuration using the initial classification results. In addition, a novel tri-training strategy is proposed to solve the over-smoothing effect of relearning by automatically selecting training samples with low classification certainty, which are always distributed in or near edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic outputs of the respective classifiers. It should be noted that, to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework in terms of both edge and non-edge accuracy.

  7. Support Vector Machines for Hyperspectral Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony; Cromp, R. F.

    1998-01-01

    The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain accuracies of 96% and 87% for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature-selection step to reduce the dimensionality of the data. This is important for this application, as hyperspectral data consist of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach and demonstrate its application to the classification of an agricultural scene.
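
    As a rough illustration of the max-margin idea (not the solver used in the paper), a linear SVM can be trained with the Pegasos sub-gradient method on toy "spectral" data:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos sub-gradient method;
    a sketch of the max-margin idea, not a production solver."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)               # Pegasos step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)                # regularization shrinkage
            if margin < 1:                      # hinge-loss sub-gradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Two linearly separable "spectral" clusters, labels in {-1, +1}
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (30, 5)), rng.normal(2, 0.3, (30, 5))])
y = np.array([-1] * 30 + [+1] * 30)
w, b = pegasos_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)  # expect 1.0 on this easily separable toy problem
```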

  8. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno, J.; Caicedo, J.; Gonzalez, F.

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown successful performance in a classification task using a dataset of 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on an SVM directly applied to the original kernel.

  9. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown successful performance in a classification task using a dataset of 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on an SVM directly applied to the original kernel.

  10. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    Science.gov (United States)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

    Object detection in high-resolution remote sensing images is a fundamental and challenging problem in remote sensing imagery analysis for civil and military applications, because complex neighboring environments can cause recognition algorithms to mistake irrelevant ground objects for target objects. The Deep Convolutional Neural Network (DCNN) is a hotspot in object detection for its powerful feature extraction ability and has achieved state-of-the-art results in computer vision. The common DCNN-based object detection pipeline consists of region proposal, CNN feature extraction, region classification and post-processing. The YOLO model instead frames object detection as a regression problem, using a single CNN to predict bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO-based model is used for object detection in high-resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset obtained from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process while maintaining good accuracy.
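
    The regression formulation can be illustrated by decoding a toy YOLO-style output grid into image-space boxes; the tensor layout and all numbers below are a simplified assumption, not the actual YOLO specification:

```python
import numpy as np

def decode_yolo(output, img_size=416, conf_thresh=0.5):
    """Decode a toy YOLO-style S x S x (5 + C) output tensor into boxes.
    Each cell predicts (x, y, w, h, objectness, class scores); x, y are
    offsets within the cell and w, h are fractions of the image size."""
    S = output.shape[0]
    cell = img_size / S
    boxes = []
    for i in range(S):
        for j in range(S):
            p = output[i, j]
            if p[4] < conf_thresh:
                continue
            cx, cy = (j + p[0]) * cell, (i + p[1]) * cell   # center in pixels
            w, h = p[2] * img_size, p[3] * img_size
            cls = int(np.argmax(p[5:]))
            boxes.append((cx - w / 2, cy - h / 2, w, h, float(p[4]), cls))
    return boxes

# One confident detection in cell (row 1, col 2) of a 4 x 4 grid, 2 classes
out = np.zeros((4, 4, 7))
out[1, 2] = [0.5, 0.5, 0.25, 0.25, 0.9, 0.1, 0.8]
print(decode_yolo(out))
```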

  11. Image-based fall detection and classification of a user with a walking support system

    Science.gov (United States)

    Taghvaei, Sajjad; Kosuge, Kazuhiro

    2017-10-01

    The classification of visual human action is important in the development of systems that interact with humans. This study investigates image-based classification of the human state while using a walking support system, to improve the safety and dependability of these systems. We categorize the possible human behaviors while using a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distributions and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for state classification is the centroid position of the upper body, extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be used to detect any non-walking state. The second method implements HMMs to detect and recognize the states. We then measure and compare the performance of both methods. The classification results are used to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can adapt to a new user's motion pattern within 40 s, with a fall detection rate of 96.25% and a state classification rate of 81.0%. The proposed method can be applied to other abnormality detection/classification applications that employ depth-sensing devices.
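
    The first method above, modelling the walking-state centroid as a normal distribution and flagging departures from it, can be sketched with synthetic data (all positions and the threshold are illustrative, not from the paper):

```python
import numpy as np

# Fit a Gaussian to the upper-body centroid position observed while walking,
# then flag frames whose Mahalanobis distance exceeds a threshold as non-walking.
rng = np.random.default_rng(3)
walking = rng.normal([0.0, 1.0], [0.02, 0.03], size=(200, 2))  # (x, z), metres
mu = walking.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(walking.T))

def is_walking(centroid, thresh=3.5):
    """True when the centroid is statistically consistent with walking."""
    d = centroid - mu
    return float(np.sqrt(d @ cov_inv @ d)) < thresh

print(is_walking(np.array([0.01, 1.01])))   # near the walking mean -> True
print(is_walking(np.array([0.30, 0.60])))   # far from it (e.g. a fall) -> False
```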

  12. Analysis On Land Cover In Municipality Of Malang With Landsat 8 Image Through Unsupervised Classification

    Science.gov (United States)

    Nahari, R. V.; Alfita, R.

    2018-01-01

    Remote sensing technology has been widely used in geographic information systems to obtain data more quickly, accurately and affordably. One advantage of using remote sensing (satellite) imagery is the ability to analyze land cover and land use. The satellite image data used in this study were Landsat 8 images combined with data from the Municipality of Malang government. The satellite image was taken in July 2016, and the method used was unsupervised classification. Based on the analysis of the satellite images and field observations, 29% of the land in the Municipality of Malang was plantation, 22% was rice field, 12% was residential area, 10% was land with shrubs, and 2% was water (lake/reservoir). A shortcoming of the method was that the remaining 25% of the land was unidentified because it was covered by cloud. Future work should include cloud-removal processing to minimize the unidentified area.
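
    Unsupervised classification of this kind clusters pixels purely by their spectral vectors; a minimal k-means sketch on toy two-band "pixels" (not the study's Landsat 8 data):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, the core of unsupervised (ISODATA-like) classification."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]    # random data points
    for _ in range(iters):
        # assign each pixel to its nearest spectral cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)  # recompute centers
    return labels, centers

# Toy pixels: rows are band vectors; two obvious spectral clusters
X = np.array([[0.10, 0.20], [0.12, 0.19], [0.11, 0.21],
              [0.80, 0.70], [0.82, 0.69], [0.79, 0.72]])
labels, _ = kmeans(X, k=2)
print(labels)  # pixels 0-2 share one label, pixels 3-5 the other
```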

  13. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey.

    Science.gov (United States)

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of death among women in the world today. Advanced engineering of natural image classification techniques and artificial intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors' and physicians' time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN) method for breast image classification. Along with the CNN method, we have also described the involvement of the conventional Neural Network (NN), logic-based classifiers such as the Random Forest (RF) algorithm, Support Vector Machines (SVM), Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  14. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    Directory of Open Access Journals (Sweden)

    Abdullah-Al Nahid

    2017-01-01

    Full Text Available Breast cancer is one of the largest causes of death among women in the world today. Advanced engineering of natural image classification techniques and artificial intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors' and physicians' time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN) method for breast image classification. Along with the CNN method, we have also described the involvement of the conventional Neural Network (NN), logic-based classifiers such as the Random Forest (RF) algorithm, Support Vector Machines (SVM), Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  15. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.
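A toy illustration of why chrominance-aware decolorization matters: project pixels onto the first principal direction of the RGB pixel cloud instead of using a fixed luminance weighting. This is a hedged NumPy sketch, not the paper's C2G-SSIM/SVD algorithm; the function name and the test colors are invented. It shows an edge between two equal-luminance colors surviving the conversion:

```python
import numpy as np

def decolorize_pca(img):
    """Map RGB to gray along the first principal direction of the pixel
    cloud, so chrominance differences still produce gray-level contrast.
    (Illustrative stand-in, not the paper's C2G-SSIM/SVD method.)"""
    pix = img.reshape(-1, 3).astype(float)
    pix -= pix.mean(axis=0)
    _, _, vt = np.linalg.svd(pix, full_matrices=False)
    gray = pix @ vt[0]                       # project onto 1st principal axis
    span = gray.max() - gray.min()
    return ((gray - gray.min()) / (span + 1e-12)).reshape(img.shape[:2])

# Two colors with identical Rec.601 luminance (0.299R + 0.587G + 0.114B)
iso1, iso2 = [0.587, 0.0, 0.3], [0.0, 0.299, 0.3]
img = np.zeros((2, 2, 3))
img[:, 0], img[:, 1] = iso1, iso2
lum = img @ np.array([0.299, 0.587, 0.114])
gray = decolorize_pca(img)
print(np.allclose(lum[:, 0], lum[:, 1]))   # True: luminance alone loses the edge
print(abs(gray[0, 0] - gray[0, 1]) > 0.5)  # True: the projection keeps it
```

The same intuition underlies the record above: discarding chrominance can erase contrast that later helps scene classification.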

  16. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. The DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  17. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for applying neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  18. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. This problem becomes more intricate when we know that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified into document and non-document (pure region) regions in a hierarchical classification. In this paper, a novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images obtained from the Internet. Experimental results show the efficiency of the proposed algorithm in semantic document image classification. The proposed algorithm provides an accuracy rate of 98.8% for the valuable and invaluable document image classification problem.

  19. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous ANT Colony Algorithm with Emphasis on Building Detection

    Science.gov (United States)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features will lead to improved accuracy. By adding these features, the presence probability of dependent features will be increased, which leads to accuracy reduction. In addition, some parameters should be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine classification parameters and select independent features according to image type. An optimization algorithm is an efficient method to solve this problem. On the other hand, pixel-based classification faces several challenges such as producing salt-and-pepper results and high computational time in high dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, post-processing reduction for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, in the proposed method, the Kappa coefficient was improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because of the unit of image analysis (image object). These results showed the superiority of the proposed method in terms of time and accuracy.

  20. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

Full Text Available Compressive sensing theory enables faithful reconstruction of signals, sparse in domain $\Psi$, at a sampling rate lower than the Nyquist criterion, while using a sampling or sensing matrix $\Phi$ which satisfies the restricted isometry property. The role played by the sensing matrix $\Phi$ and the sparsity matrix $\Psi$ is vital in faithful reconstruction. If the sensing matrix is dense, then it takes large storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix with the least incurred computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of reconstructed medical images using various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ($\Phi\Psi$) is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
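The Gram-matrix criterion above can be sketched in a few lines of NumPy. The column-sparse design below is an illustrative stand-in (random supports and signs, not the paper's exact SBCM construction); it builds a sensing matrix with 15 nonzeros per column and measures how far the Gram matrix $\Phi^T\Phi$ is from the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_column_matrix(m, n, k=15):
    """Sparse sensing matrix with k nonzero +/-1 entries per column
    (an illustrative stand-in for the 15-ones-per-column design)."""
    phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=k, replace=False)
        phi[rows, j] = rng.choice([-1.0, 1.0], size=k)
    return phi / np.sqrt(k)  # normalise columns to unit length

m, n = 64, 128
phi = sparse_column_matrix(m, n)
gram = phi.T @ phi
# Largest off-diagonal Gram entry (mutual coherence): the closer the
# Gram matrix is to identity, the better the reconstruction behaviour.
coherence = np.abs(gram - np.eye(n)).max()
print(phi.shape, coherence < 1.0)  # (64, 128) True
```

With unit-norm columns the Gram diagonal is exactly 1, so only the off-diagonal entries matter for the identity comparison.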

  1. Visualization and classification in biomedical terahertz pulsed imaging

    International Nuclear Information System (INIS)

    Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G

    2002-01-01

'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated.

  2. Sparse Detector Imaging Sensor with Two-Class Silhouette Classification

    Directory of Open Access Journals (Sweden)

    David Russomanno

    2008-12-01

Full Text Available This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype of the sensor is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprised of only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than ninety-nine percent (99%) with a subset of the sixteen detectors using a representative dataset consisting of 512 signatures. The prototype also includes a Web service interface such that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional, high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment in vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios.

  3. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  4. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    Science.gov (United States)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  5. A Spectral-Texture Kernel-Based Classification Method for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-11-01

Full Text Available Classification of hyperspectral images always suffers from high dimensionality and very limited labeled samples. Recently, spectral-spatial classification has attracted considerable attention and can achieve higher classification accuracy and smoother classification maps. In this paper, a novel spectral-spatial classification method for hyperspectral images by using kernel methods is investigated. For a given hyperspectral image, the principal component analysis (PCA) transform is first performed. Then, the first principal component of the input image is segmented into non-overlapping homogeneous regions by using the entropy rate superpixel (ERS) algorithm. Next, the local spectral histogram model is applied to each homogeneous region to obtain the corresponding texture features. Because this step is performed within each homogeneous region, instead of within a fixed-size image window, the obtained local texture features in the image are more accurate, which can effectively benefit the improvement of classification accuracy. In the following step, a contextual spectral-texture kernel is constructed by combining spectral information in the image and the extracted texture information using the linearity property of the kernel methods. Finally, the classification map is achieved by the support vector machines (SVM) classifier using the proposed spectral-texture kernel. Experiments on two benchmark airborne hyperspectral datasets demonstrate that our method can effectively improve classification accuracies, even though only a very limited training sample is available. Specifically, our method achieves overall accuracies from 8.26% to 15.1% higher than the traditional SVM classifier. The performance of our method was further compared to several state-of-the-art classification methods of hyperspectral images using objective quantitative measures and a visual qualitative evaluation.
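The composite-kernel construction described in this record can be sketched directly: build one kernel on spectral features and one on texture features, then combine them linearly. The weighting mu, the RBF widths, and the toy data below are assumptions for illustration, not the paper's values. The key property to preserve is that the combination is still a valid kernel (symmetric and positive semi-definite):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix for row-vector samples X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy pixels: 20 samples with 8 spectral bands and 4 texture features
spec = rng.normal(size=(20, 8))
tex = rng.normal(size=(20, 4))

mu = 0.6  # assumed trade-off between the spectral and texture terms
K = mu * rbf_kernel(spec, 0.5) + (1 - mu) * rbf_kernel(tex, 0.5)

# A convex combination of kernels is still a kernel:
# symmetric, unit diagonal here, and positive semi-definite.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-9)  # True True
```

A matrix like `K` can then be fed to any SVM implementation that accepts a precomputed kernel.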

  6. Remote sensing of aquatic vegetation distribution in Taihu Lake using an improved classification tree with modified thresholds.

    Science.gov (United States)

    Zhao, Dehua; Jiang, Hao; Yang, Tangwu; Cai, Ying; Xu, Delin; An, Shuqing

    2012-03-01

Classification trees (CT) have been used successfully in the past to classify aquatic vegetation from spectral indices (SI) obtained from remotely-sensed images. However, applying CT models developed for certain image dates to other time periods within the same year or among different years can reduce the classification accuracy. In this study, we developed CT models with modified thresholds using extreme SI values (CT(m)) to improve the stability of the models when applying them to different time periods. A total of 903 ground-truth samples were obtained in September of 2009 and 2010 and classified as emergent, floating-leaf, or submerged vegetation or other cover types. Classification trees were developed for 2009 (Model-09) and 2010 (Model-10) using field samples and a combination of two images from winter and summer. Overall accuracies of these models were 92.8% and 94.9%, respectively, which confirmed the ability of CT analysis to map aquatic vegetation in Taihu Lake. However, Model-10 had only 58.9-71.6% classification accuracy and 31.1-58.3% agreement (i.e., pixels classified the same in the two maps) for aquatic vegetation when it was applied to image pairs from both a different time period in 2010 and a similar time period in 2009. We developed a method to estimate the effects of extrinsic (EF) and intrinsic (IF) factors on model uncertainty using MODIS images. Results indicated that 71.1% of the instability in classification between time periods was due to EF, which might include changes in atmospheric conditions, sun-view angle and water quality. The remainder was due to IF, such as phenological and growth status differences between time periods. The modified version of Model-10 (i.e. CT(m)) performed better than traditional CT with different image dates. When applied to 2009 images, the CT(m) version of Model-10 had very similar thresholds and performance as Model-09, with overall accuracies of 92.8% and 90.5% for Model-09 and the CT(m) version of Model
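The threshold-tree idea behind CT(m) can be illustrated with a toy two-season rule set over spectral-index values. The class names follow the record above, but the thresholds and the NDVI-like inputs are invented for illustration; the paper calibrates its thresholds from extreme SI values:

```python
def classify_pixel(ndvi_winter, ndvi_summer):
    """Toy threshold tree in the spirit of a CT on spectral indices.
    Thresholds are illustrative, not the paper's calibrated values."""
    if ndvi_summer < 0.1:
        return "other"            # open water / non-vegetated
    if ndvi_winter > 0.3:
        return "emergent"         # visible above water in both seasons
    if ndvi_summer > 0.4:
        return "floating-leaf"    # strong summer signal only
    return "submerged"

print(classify_pixel(0.4, 0.6))   # emergent
print(classify_pixel(0.0, 0.6))   # floating-leaf
print(classify_pixel(0.0, 0.2))   # submerged
print(classify_pixel(0.0, 0.05))  # other
```

Shifting these cut-points toward extreme index values is, in spirit, what makes the modified model more stable across image dates.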

  7. Multispectral Image classification using the theories of neural networks

    International Nuclear Information System (INIS)

    Ardisasmita, M.S.; Subki, M.I.R.

    1997-01-01

Image classification is one of the important parts of digital image analysis. The objective of image classification is to identify and regroup the features occurring in an image into one or several classes in terms of the object. Basic to the understanding of multispectral classification is the concept of the spectral response of an object as a function of the electromagnetic radiation and the wavelength of the spectrum. New approaches to classification have been developed to improve the results of analysis; these state-of-the-art classifiers are based upon the theories of neural networks. Neural network classifiers are algorithms which mimic the computational abilities of the human brain. Artificial neurons are simple emulations of biological neurons; they take in information from sensors or other artificial neurons, perform very simple operations on these data, and pass the results on to other artificial neurons, which together recognize the spectral signature of each image pixel. Neural network image classification has been divided into supervised and unsupervised training procedures. In the supervised approach, examples of each cover type can be located and the computer can compute spectral signatures to categorize all pixels in a digital image into several land cover classes. In unsupervised classification, spectral signatures are generated by mathematical grouping, and it does not require analyst-specified training data. Thus, in the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach the computer determines spectrally separable classes and then we define their information value
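The supervised workflow sketched above (compute a mean spectral signature per class from training pixels, then assign each pixel to the nearest signature) is essentially a minimum-distance classifier. A minimal NumPy sketch with invented two-band data:

```python
import numpy as np

def train_signatures(pixels, labels):
    """Supervised step: mean spectral signature per land-cover class."""
    classes = sorted(set(labels))
    return {c: pixels[np.array(labels) == c].mean(axis=0) for c in classes}

def classify(pixels, signatures):
    """Assign each pixel to the class with the nearest mean signature
    (a minimum-distance classifier, one of the simplest supervised schemes)."""
    names = list(signatures)
    centers = np.stack([signatures[c] for c in names])
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
    return [names[i] for i in d.argmin(axis=1)]

# Invented two-band training pixels (e.g. red and near-infrared reflectance)
train = np.array([[0.1, 0.8], [0.2, 0.7], [0.8, 0.1], [0.9, 0.2]])
labels = ["vegetation", "vegetation", "water", "water"]
sigs = train_signatures(train, labels)
print(classify(np.array([[0.15, 0.75], [0.85, 0.15]]), sigs))
# ['vegetation', 'water']
```

An unsupervised procedure would instead derive the class centers by mathematical grouping (e.g. clustering) and leave the analyst to name them afterwards.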

  8. HYBRID OPTIMIZATION OF OBJECT-BASED CLASSIFICATION IN HIGH-RESOLUTION IMAGES USING CONTINUOUS ANT COLONY ALGORITHM WITH EMPHASIS ON BUILDING DETECTION

    Directory of Open Access Journals (Sweden)

    E. Tamimi

    2017-09-01

Full Text Available Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features will lead to improved accuracy. By adding these features, the presence probability of dependent features will be increased, which leads to accuracy reduction. In addition, some parameters should be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine classification parameters and select independent features according to image type. An optimization algorithm is an efficient method to solve this problem. On the other hand, pixel-based classification faces several challenges such as producing salt-and-pepper results and high computational time in high dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, post-processing reduction for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, in the proposed method, the Kappa coefficient was improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because of the unit of image analysis (image object). These results showed the superiority of the proposed method in terms of time and accuracy.

  9. Crown-Level Tree Species Classification Using Integrated Airborne Hyperspectral and LIDAR Remote Sensing Data

    Science.gov (United States)

    Wang, Z.; Wu, J.; Wang, Y.; Kong, X.; Bao, H.; Ni, Y.; Ma, L.; Jin, J.

    2018-05-01

Mapping tree species is essential for sustainable planning as well as to improve our understanding of the roles of different trees in providing different ecological services. However, crown-level tree species automatic classification is a challenging task due to the spectral similarity among diversified tree species, fine-scale spatial variation, shadow, and underlying objects within a crown. Advanced remote sensing data such as airborne Light Detection and Ranging (LiDAR) and hyperspectral imagery offer a great potential opportunity to derive crown spectral, structure and canopy physiological information at the individual crown scale, which can be useful for mapping tree species. In this paper, an innovative approach was developed for tree species classification at the crown level. The method utilized LiDAR data for individual tree crown delineation and morphological structure extraction, and Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery for pure crown-scale spectral extraction. Specifically, four steps were included: 1) A weighted mean filtering method was developed to improve the accuracy of the smoothed Canopy Height Model (CHM) derived from LiDAR data; 2) The marker-controlled watershed segmentation algorithm was then employed to delineate the tree-level canopy from the CHM image, and individual tree height and tree crown were calculated according to the delineated crown; 3) Spectral features within 3 × 3 neighborhood regions centered on the treetops detected by the treetop detection algorithm were derived from the spectrally normalized CASI imagery; 4) The shape characteristics related to their crown diameters and heights were established, and different crown-level tree species were classified using the combination of spectral and shape characteristics. Analysis of results suggests that the developed classification strategy in this paper (OA = 85.12 %, Kc = 0.90) performed better than the LiDAR-metrics method (OA = 79
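Steps 1 and 3 above (CHM smoothing followed by treetop detection) can be sketched on a synthetic Canopy Height Model. The 3x3 weights, the center weight, and the height threshold are assumptions for illustration, not the paper's calibrated values:

```python
import numpy as np

def weighted_mean_filter(chm, w_center=4.0):
    """3x3 weighted mean smoothing of a Canopy Height Model
    (illustrative weights; the paper derives its own weighting)."""
    out = chm.copy()
    for i in range(1, chm.shape[0] - 1):
        for j in range(1, chm.shape[1] - 1):
            win = chm[i-1:i+2, j-1:j+2]
            w = np.ones((3, 3))
            w[1, 1] = w_center
            out[i, j] = (win * w).sum() / w.sum()
    return out

def treetops(chm, min_height=2.0):
    """Pixels that are strict 3x3 local maxima above a height threshold."""
    tops = []
    for i in range(1, chm.shape[0] - 1):
        for j in range(1, chm.shape[1] - 1):
            win = chm[i-1:i+2, j-1:j+2]
            if chm[i, j] >= min_height and chm[i, j] == win.max() \
                    and (win == win.max()).sum() == 1:
                tops.append((i, j))
    return tops

chm = np.zeros((7, 7))
chm[2, 2] = 10.0   # one tall synthetic tree
chm[4, 5] = 6.0    # a second, shorter tree
smooth = weighted_mean_filter(chm)
print(treetops(smooth))  # [(2, 2), (4, 5)]
```

A full pipeline would pass these treetops as markers to a watershed segmentation to delineate each crown.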

  10. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    OpenAIRE

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of women’s death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors’ and physicians’ time. Despite the various publications on breast image classification, very few review papers are available w...

  11. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  12. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
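The margin defined in these two records can be sketched with the smallest pairwise sample distance standing in for the full affine-hull model (the 2-D features, class layout, and helper names below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def set_distance(A, B):
    """Smallest pairwise Euclidean distance between two image sets
    (rows are samples); a simple stand-in for the affine-hull model."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min()

def margin(test_set, gallery, labels, query_label):
    """diff-class distance minus same-class distance: positive means the
    probe set is nearer to its own class than to any other class."""
    same = min(set_distance(test_set, g)
               for g, l in zip(gallery, labels) if l == query_label)
    diff = min(set_distance(test_set, g)
               for g, l in zip(gallery, labels) if l != query_label)
    return diff - same

# Toy gallery: two classes of 2-D "image" feature sets
gallery = [rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))]
labels = [0, 1]
probe = rng.normal(0, 0.1, (4, 2))   # drawn from class 0
print(margin(probe, gallery, labels, 0) > 0)  # True
```

Classification then simply assigns the probe to whichever class label yields the largest margin.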

  13. Real-time SPARSE-SENSE cardiac cine MR imaging: optimization of image reconstruction and sequence validation.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-12-01

Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to the current standard SSFP sequence. Due to its intrinsically low image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.
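Iterative compressed-sensing reconstruction of the kind tuned in this record can be illustrated with generic iterative soft-thresholding (ISTA). This is a hedged stand-in, not the scanner's SPARSE-SENSE algorithm; the 80 iterations echo the recommendation above, while the regularization weight and the toy linear system are assumptions:

```python
import numpy as np

def ista(y, A, lam=0.05, iters=80):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
    a generic stand-in for iterative compressed-sensing reconstruction."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L guarantees monotone descent
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))   # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 60)) / np.sqrt(30)   # under-determined measurements
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -0.8, 0.6]        # sparse ground truth
y = A @ x_true
x_hat = ista(y, A)

obj = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + 0.05 * np.abs(x).sum()
print(obj(x_hat) < obj(np.zeros(60)))  # True: 80 iterations lowered the objective
```

The iteration-count trade-off studied in the record is visible here too: each extra iteration refines the sparse estimate at the cost of reconstruction time.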

  14. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  15. Electronic structure classifications using scanning tunneling microscopy conductance imaging

    International Nuclear Information System (INIS)

    Horn, K.M.; Swartzentruber, B.S.; Osbourn, G.C.; Bouchard, A.; Bartholomew, J.W.

    1998-01-01

The electronic structure of atomic surfaces is imaged by applying multivariate image classification techniques to multibias conductance data measured using scanning tunneling microscopy. Image pixels are grouped into classes according to shared conductance characteristics. The image pixels, when color coded by class, produce an image that chemically distinguishes surface electronic features over the entire area of a multibias conductance image. Such "classed" images reveal surface features not always evident in a topograph. This article describes the experimental technique used to record multibias conductance images, how image pixels are grouped in a mathematical classification space, how a computed grouping algorithm can be employed to group pixels with similar conductance characteristics in any number of dimensions, and finally how the quality of the resulting classed images can be evaluated using a computed, combinatorial analysis of the full dimensional space in which the classification is performed. copyright 1998 American Institute of Physics
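Grouping pixels by shared conductance characteristics is, at its simplest, a clustering problem in classification space. A minimal k-means sketch over synthetic multibias conductance vectors (all values invented; the article's actual grouping algorithm and combinatorial evaluation are more involved):

```python
import numpy as np

def kmeans(X, init_idx, iters=10):
    """Plain k-means with explicit initial centers, grouping pixels that
    share conductance characteristics across bias voltages."""
    centers = X[list(init_idx)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(len(centers)):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Each pixel: conductance sampled at 3 bias voltages; two surface phases
rng = np.random.default_rng(5)
pixels = np.vstack([rng.normal(0.2, 0.02, size=(30, 3)),
                    rng.normal(0.8, 0.02, size=(30, 3))])
labels = kmeans(pixels, init_idx=(0, 59)).tolist()
print(sorted(set(labels[:30])), sorted(set(labels[30:])))  # [0] [1]
```

Color-coding each pixel by its cluster label yields the kind of "classed" image the record describes.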

  16. Global hierarchical classification of deepwater and wetland environments from remote sensing products

    Science.gov (United States)

    Fluet-Chouinard, E.; Lehner, B.; Aires, F.; Prigent, C.; McIntyre, P. B.

    2017-12-01

Global surface water maps have improved in spatial and temporal resolutions through various remote sensing methods: open water extents with compiled Landsat archives and inundation with topographically downscaled multi-sensor retrievals. These time-series capture variations through time of open water and inundation without discriminating between hydrographic features (e.g. lakes, reservoirs, river channels and wetland types) as other databases have done as static representations. Available data sources present the opportunity to generate a comprehensive map and typology of aquatic environments (deepwater and wetlands) that improves on earlier digitized inventories and maps. The challenge of classifying surface waters globally is distinguishing wetland types with meaningful characteristics or proxies (hydrology, water chemistry, soils, vegetation) while accommodating limitations of remote sensing data. We present a new wetland classification scheme designed for global application and produce a map of aquatic ecosystem types globally using state-of-the-art remote sensing products. Our classification scheme combines open water extent and expands it with downscaled multi-sensor inundation data to capture the maximal vegetated wetland extent. The hierarchical structure of the classification is modified from the Cowardin system (1979) developed for the USA. The first level classification is based on a combination of landscape positions and water source (e.g. lacustrine, riverine, palustrine, coastal and artificial) while the second level represents the hydrologic regime (e.g. perennial, seasonal, intermittent and waterlogged). Class-specific descriptors can further detail the wetland types with soils and vegetation cover. Our globally consistent nomenclature and top-down mapping allows for direct comparison across biogeographic regions, to upscale biogeochemical fluxes as well as other landscape level functions.

  17. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Shunping Ji

    2018-01-01

    Full Text Available This study describes a novel three-dimensional (3D) convolutional neural network (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral multi-temporal remote sensing data. Second, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy into the CNN model to improve labelling accuracy up to a required threshold with the greatest efficiency. Finally, experiments are carried out to test the advantage of the 3D CNN in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperformed the other mainstream methods.
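
    The record above hinges on a 3D kernel that spans both the spatial and temporal axes of a multi-temporal stack. As a minimal illustration (not the paper's network; the data and kernel below are purely illustrative), a valid-mode 3D convolution over a (time, height, width) cube can be sketched in NumPy:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in CNNs)
    over a (time, height, width) data cube."""
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# A 10-date, 8x8-pixel stack convolved with a 3x3x3 kernel
stack = np.random.rand(10, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0  # simple averaging kernel
features = conv3d_valid(stack, kernel)
print(features.shape)  # (8, 6, 6)
```

    Because the kernel also slides along the temporal axis, the learned features respond to growth dynamics rather than to single-date spectra, which is the core idea the abstract describes.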

  18. Artificial neural net system for interactive tissue classification with MR imaging and image segmentation

    International Nuclear Information System (INIS)

    Clarke, L.P.; Silbiger, M.; Naylor, C.; Brown, K.

    1990-01-01

    This paper reports on the development of interactive methods for MR tissue classification that permit mathematically rigorous methods for three-dimensional image segmentation and automatic organ/tumor contouring, as required for surgical and RTP planning. The authors investigate a number of image-intensity-based tissue-classification methods that make no implicit assumptions on the MR parameters and hence are not limited by the image data set. Similarly, they have trained artificial neural net (ANN) systems for both supervised and unsupervised tissue classification.

  19. Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Zhi He

    2017-10-01

    Full Text Available Classification of hyperspectral images (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by generative adversarial networks (GANs). Unlike supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the abundant unlabeled samples. The core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by the 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., a generator and a discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.

  20. Retrieval and classification of food images.

    Science.gov (United States)

    Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano

    2016-10-01

    Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of the studies in the context of food image processing, from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the representation of food images because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task since food presents high variability and an intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled considering 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, Fruit. We have performed tests employing different state-of-the-art representations to assess their performance on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons which is able to encode spatial information between Textons, outperforming other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Image Fusion Technologies In Commercial Remote Sensing Packages

    OpenAIRE

    Al-Wassai, Firouz Abdullah; Kalyankar, N. V.

    2013-01-01

    Several remote sensing software packages are used for the explicit purpose of analyzing and visualizing remotely sensed data, following the development of remote sensing sensor technologies over the last ten years. According to the literature, remote sensing still lacks software tools for effective information extraction from remote sensing data. So, this paper provides a state-of-the-art review of multi-sensor image fusion technologies as well as a review of the quality evaluation of the single image or f...

  2. Combining Spectral Data and a DSM from UAS-Images for Improved Classification of Non-Submerged Aquatic Vegetation

    Directory of Open Access Journals (Sweden)

    Eva Husson

    2017-03-01

    Full Text Available Monitoring of aquatic vegetation is an important component in the assessment of freshwater ecosystems. Remote sensing with unmanned aircraft systems (UASs) can provide sub-decimetre-resolution aerial images and is a useful tool for detailed vegetation mapping. In a previous study, non-submerged aquatic vegetation was successfully mapped using automated classification of spectral and textural features from a true-colour UAS-orthoimage with 5-cm pixels. In the present study, height data from a digital surface model (DSM) created from overlapping UAS-images has been incorporated together with the spectral and textural features from the UAS-orthoimage to test if classification accuracy can be improved further. We studied two levels of thematic detail: (a) growth forms, including the classes of water, nymphaeid, and helophyte; and (b) dominant taxa, including seven vegetation classes. We hypothesized that the incorporation of height data together with spectral and textural features would increase classification accuracy as compared to using spectral and textural features alone, at both levels of thematic detail. We tested our hypothesis at five test sites (100 m × 100 m each) with varying vegetation complexity and image quality, using automated object-based image analysis in combination with Random Forest classification. Overall accuracy at each of the five test sites ranged from 78% to 87% at the growth-form level and from 66% to 85% at the dominant-taxon level. In comparison to using spectral and textural features alone, the inclusion of height data increased the overall accuracy significantly, by 4%–21% for growth forms and 3%–30% for dominant taxa. The biggest improvement gained by adding height data was observed at the test site with the most complex vegetation. Height data derived from UAS-images has a large potential to efficiently increase the accuracy of automated classification of non-submerged aquatic vegetation, indicating good possibilities

  3. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    Science.gov (United States)

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three well-known image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method on the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher than those of the other four methods in most cases. PMID:28316614

  4. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid fatal results. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results show promising results for the neural classifier in terms of the performance measures.
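
    An RBF bi-level classifier of the kind this record describes can be sketched with a Gaussian hidden layer and least-squares output weights. The two-cluster toy data, centers, and sigma below are illustrative stand-ins, not the retinal features or trained parameters of the paper:

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian radial basis activations of samples X w.r.t. centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    """Fit output weights by least squares against one-hot targets."""
    Phi = rbf_features(X, centers, sigma)
    T = np.eye(y.max() + 1)[y]  # one-hot labels
    return np.linalg.lstsq(Phi, T, rcond=None)[0]

def predict_rbf(X, centers, sigma, W):
    return rbf_features(X, centers, sigma).dot(W).argmax(axis=1)

# Toy bi-level problem: a "normal" cluster vs an "abnormal" cluster
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
W = train_rbf(X, y, centers, sigma=1.0)
acc = (predict_rbf(X, centers, sigma=1.0, W=W) == y).mean()
```

    In practice the centers would come from clustering the training features rather than being placed by hand.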

  5. Attribute Learning for SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-04-01

    Full Text Available This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean-to-variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are sorted out to construct an attribute dictionary. By resorting to the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classifying task. The experiments conducted on TerraSAR-X images indicate that those learned attributes have strong visual semantics, which are characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results.

  6. Remote-sensing image encryption in hybrid domains

    Science.gov (United States)

    Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong

    2012-04-01

    Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing image is the main means of acquiring information from satellites, which always contain some confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both spatial domain and transform domain. First, the low-pass subband coefficients of image DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with 2D (two-dimensional) Logistic map and XOR operation in spatial domain. The experiment results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
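
    The spatial-domain half of the scheme (a chaotic keystream combined with XOR diffusion) is easy to illustrate. The sketch below uses only the Logistic map and omits the DWT/PWLCM transform-domain sorting step; the key parameters are made up for the example:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Byte keystream from the Logistic map x -> r * x * (1 - x)."""
    x = x0
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 255) & 0xFF
    return ks

def xor_diffuse(img, x0=0.3141, r=3.9999):
    """XOR every pixel with the chaotic keystream; the same call decrypts."""
    flat = img.ravel()
    ks = logistic_keystream(x0, r, flat.size)
    return (flat ^ ks).reshape(img.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = xor_diffuse(img)
plain = xor_diffuse(cipher)  # XOR is its own inverse under the same key
```

    Because XOR is an involution, encryption and decryption are the same operation given the same (x0, r) key; the full algorithm adds the transform-domain permutation on top of this diffusion.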

  7. Probabilistic segmentation of remotely sensed images

    NARCIS (Netherlands)

    Gorte, B.

    1998-01-01

    For information extraction from image data to create or update geographic information systems, objects are identified and labeled using an integration of segmentation and classification. This yields geometric and thematic information, respectively.

    Bayesian image

  8. Standard land-cover classification scheme for remote-sensing applications in South Africa

    CSIR Research Space (South Africa)

    Thompson, M

    1996-01-01

    Full Text Available For large areas, satellite remote-sensing techniques have now become the single most effective method for land-cover and land-use data acquisition. However, the majority of land-cover (and land-use) classification schemes used have been developed...

  9. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results from our basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  10. Fast Image Texture Classification Using Decision Trees

    Science.gov (United States)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
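
    The "integral image" transform this method relies on can be computed with two cumulative sums, after which the sum over any rectangle costs four table lookups regardless of its size; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# Any box filter response now needs four lookups and integer adds only,
# which is what makes the features cheap enough for FPGA implementation.
print(box_sum(ii, 1, 1, 3, 3))  # 30, the sum of img[1:3, 1:3]
```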

  11. Multi Angle Imaging With Spectral Remote Sensing for Scene Classification

    National Research Council Canada - National Science Library

    Prasert, Sunyaruk

    2005-01-01

    .... This study analyses the BRDF (Bidirectional Reflectance Distribution Function) impact and effectiveness of texture analysis on terrain classification within Fresno County area in state of California...

  12. Three-dimensional passive sensing photon counting for object classification

    Science.gov (United States)

    Yeom, Seokwon; Javidi, Bahram; Watson, Edward

    2007-04-01

    In this keynote address, we discuss three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is discussed for classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results have shown that a low level of photons is sufficient to classify out-of-plane rotated objects.
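
    Photon-limited images of the kind classified here are commonly modeled by Poisson draws whose rates follow the normalized scene irradiance. A small simulation (illustrative scene and photon budget, not the paper's experimental setup):

```python
import numpy as np

def photon_limited(irradiance, n_photons, rng):
    """Simulate a photon-counting image: expected counts per pixel are
    proportional to normalized scene irradiance, realized as Poisson draws."""
    p = irradiance / irradiance.sum()
    return rng.poisson(n_photons * p)

rng = np.random.default_rng(1)
scene = np.ones((16, 16))
scene[4:12, 4:12] = 10.0                       # bright square "object"
counts = photon_limited(scene, n_photons=200, rng=rng)
# Even with ~200 photons over 256 pixels, the object region is
# statistically distinguishable, which is the premise of photon-counting LDA.
```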

  13. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under quality variance (e.g., various scales and noise). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification under different scale and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using "visual attention" mechanisms to guide pre-trained CNNs in producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of the RS scene by directly propagating the label information to the fully-connected layers. A joint loss is used to minimize the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. The experiments conducted on three different-scale RS scene datasets show that the DSFATN method has achieved excellent performance and great robustness under different scale and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.

  14. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. First, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge in a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with center-bias prior, and l2 vector normalization) can be excluded from our proposed approach: they jointly improve image representation and classification performance.
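
    The SPP component can be illustrated on its own: max-pooling a feature map over a pyramid of grids and concatenating the results yields a fixed-length vector regardless of input size. The 1/2/4 grid levels below are the common choice, not necessarily the paper's exact configuration:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a 2D feature map on a pyramid of n x n grids and
    concatenate, giving a fixed-length descriptor for any input size."""
    h, w = fmap.shape
    feats = []
    for n in levels:
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                feats.append(fmap[hs[i]:hs[i+1], ws[j]:ws[j+1]].max())
    return np.array(feats)

v1 = spatial_pyramid_pool(np.random.rand(12, 12))
v2 = spatial_pyramid_pool(np.random.rand(30, 20))
# Both descriptors have length 1 + 4 + 16 = 21 despite different map sizes.
```

    The center-bias fusion in the paper would additionally weight the pooled cells toward the image center; that prior is omitted here.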

  15. SALIENCY-GUIDED CHANGE DETECTION OF REMOTELY SENSED IMAGES USING RANDOM FOREST

    Directory of Open Access Journals (Sweden)

    W. Feng

    2018-04-01

    Full Text Available Studies based on object-based image analysis (OBIA) representing the paradigm shift in change detection (CD) have achieved remarkable progress in the last decade. Their aim has been developing more intelligent interpretation analysis methods in the future. The prediction effect and performance stability of random forest (RF), as a new kind of machine learning algorithm, are better than many single predictors and integrated forecasting method. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subject to the fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.
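
    The change-vector-analysis step behind the difference image can be sketched as follows. This is plain CVA on synthetic data with a simple global threshold; the neighborhood-robust (RCVA) refinement, saliency guidance, FCM clustering, and RF stages of the actual method are not reproduced:

```python
import numpy as np

def cva_magnitude(img1, img2):
    """Change vector analysis: per-pixel Euclidean norm of the spectral
    difference between two co-registered multi-band images (bands last)."""
    diff = img2.astype(float) - img1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def binarize(mag, k=1.0):
    """Simple global threshold at mean + k*std of the magnitude image."""
    return mag > (mag.mean() + k * mag.std())

rng = np.random.default_rng(0)
t1 = rng.normal(100, 2, (32, 32, 4))       # 4-band image at date 1
t2 = t1 + rng.normal(0, 2, (32, 32, 4))    # date 2: noise only...
t2[8:16, 8:16] += 40                       # ...plus a simulated change patch
change_map = binarize(cva_magnitude(t1, t2))
```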

  16. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    Science.gov (United States)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA) representing the paradigm shift in change detection (CD) have achieved remarkable progress in the last decade. Their aim has been developing more intelligent interpretation analysis methods in the future. The prediction effect and performance stability of random forest (RF), as a new kind of machine learning algorithm, are better than many single predictors and integrated forecasting method. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subject to the fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.

  17. CROWN-LEVEL TREE SPECIES CLASSIFICATION USING INTEGRATED AIRBORNE HYPERSPECTRAL AND LIDAR REMOTE SENSING DATA

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2018-05-01

    Full Text Available Mapping tree species is essential for sustainable planning as well as to improve our understanding of the role of different trees in providing different ecological services. However, automatic crown-level tree species classification is a challenging task due to the spectral similarity among diversified tree species, fine-scale spatial variation, shadow, and underlying objects within a crown. Advanced remote sensing data such as airborne Light Detection and Ranging (LiDAR) and hyperspectral imagery offer a great opportunity to derive crown spectral, structural and canopy physiological information at the individual crown scale, which can be useful for mapping tree species. In this paper, an innovative approach was developed for tree species classification at the crown level. The method utilized LiDAR data for individual tree crown delineation and morphological structure extraction, and Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery for pure crown-scale spectral extraction. Specifically, four steps were included: (1) a weighted mean filtering method was developed to improve the accuracy of the smoothed Canopy Height Model (CHM) derived from the LiDAR data; (2) the marker-controlled watershed segmentation algorithm was employed to delineate the tree-level canopy from the CHM image, and then individual tree height and tree crown were calculated from the delineated crown; (3) spectral features within 3 × 3 neighborhood regions centered on the treetops detected by the treetop detection algorithm were derived from the spectrally normalized CASI imagery; (4) the shape characteristics related to crown diameters and heights were established, and different crown-level tree species were classified using the combination of spectral and shape characteristics. Analysis of the results suggests that the developed classification strategy (OA = 85.12 %, Kc = 0.90) performed better than Li
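
    Step (1), weighted mean filtering of the CHM, can be sketched as a normalized 3x3 kernel whose centre weight controls how strongly treetop peaks survive the smoothing. The weights below are illustrative, not the paper's:

```python
import numpy as np

def weighted_mean_filter(chm, weights):
    """Smooth a canopy height model with a normalized 3x3 weighted mean;
    a larger centre weight preserves local maxima (treetops) better."""
    w = weights / weights.sum()
    h, wd = chm.shape
    padded = np.pad(chm, 1, mode="edge")
    out = np.zeros_like(chm, dtype=float)
    for i in range(h):
        for j in range(wd):
            out[i, j] = (padded[i:i+3, j:j+3] * w).sum()
    return out

chm = np.zeros((5, 5))
chm[2, 2] = 10.0                                   # a single treetop spike
weights = np.array([[1, 1, 1], [1, 4, 1], [1, 1, 1]], dtype=float)
smoothed = weighted_mean_filter(chm, weights)
# With a uniform kernel the spike would drop to 10/9; the centre-weighted
# kernel keeps it at 10 * 4/12, so the treetop remains the local maximum.
```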

  18. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures of accuracy, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are less misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed in order to determine whether there is an improvement in the UA of the various categories on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using the IRS LISS III images of two different years, i.e., 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results of the investigation demonstrated that there is a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of the spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in the UA as compared to the original images.
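
    PA and UA as defined here fall directly out of the confusion matrix: PA normalizes the diagonal by reference (row) totals, UA by classified (column) totals. A small sketch with a made-up matrix:

```python
import numpy as np

def accuracy_measures(cm):
    """cm[i, j] = number of pixels of reference class i labelled as class j.
    Producer's accuracy: correct / reference total   (per row).
    User's accuracy:     correct / classified total  (per column)."""
    diag = np.diag(cm).astype(float)
    pa = diag / cm.sum(axis=1)
    ua = diag / cm.sum(axis=0)
    overall = diag.sum() / cm.sum()
    return pa, ua, overall

cm = np.array([[50,  5,  5],
               [10, 40, 10],
               [ 0,  5, 45]])
pa, ua, oa = accuracy_measures(cm)
# UA of class 0 = 50 / (50 + 10 + 0), i.e. how trustworthy a class-0 label
# on the map is; PA of class 0 = 50 / (50 + 5 + 5).
```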

  19. Classifications of objects on hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    In the present work a classification method that combines the classic image classification approach and MIA is proposed. The basic idea is to group all pixels and calculate spectral properties of the pixel group to be used further as a vector of predictors for calibration and class prediction. The grouping can be done with mathematical morphology methods applied to a score image where objects are well separated. In the case of small overlaps a watershed transformation can be applied to disjoin the objects. The method has been tested on several simulated and real cases and showed good results and significant improvements in comparison with a standard MIA approach. The results as well as method details will be reported.

  20. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved by NBCC, which is better than the earlier methods.
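
    A Naïve-Bayesian classifier for continuous variables (the model class behind NBCC) reduces to per-class Gaussian likelihoods with independent features. The standalone NumPy sketch below shows only that statistical model; the OLAP-SQL coupling that is the paper's actual contribution is omitted, and the toy data are made up:

```python
import numpy as np

class GaussianNB:
    """Minimal Naive-Bayes classifier for continuous features:
    independent per-class Gaussian likelihoods plus class priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum_f log N(x_f | mu_cf, var_cf), maximized over c
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(axis=2)
        return self.classes[(ll + np.log(self.prior)).argmax(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)          # "normal" vs "abnormal"
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
```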

  1. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    Science.gov (United States)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscope that enables us to perform both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. This method consists of two classification steps. At the first step, we classify an input image by support vector machine. We forward the image to the second step if the confidence of the first classification is low. At the second step, we classify the forwarded image by convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluated the classification performance of the proposed method. In this experiment, we used about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
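
    The two-step cascade with confidence-based forwarding and rejection can be sketched directly. The stand-in classifiers and thresholds below are placeholders for illustration, not the trained SVM/CNN or operating points of the paper:

```python
def cascade_classify(image, svm, cnn, t1=0.8, t2=0.8):
    """Two-stage cascade: accept the SVM's label when it is confident,
    otherwise fall back to the CNN; reject if both are unsure.
    `svm` and `cnn` are stand-ins returning (label, confidence)."""
    label, conf = svm(image)
    if conf >= t1:
        return label
    label, conf = cnn(image)
    if conf >= t2:
        return label
    return "reject"

# Toy stand-in classifiers: "image" is just a scalar score here
svm = lambda x: ("neoplastic", 0.95) if x > 0.5 else ("non-neoplastic", 0.4)
cnn = lambda x: ("non-neoplastic", 0.9)
print(cascade_classify(0.9, svm, cnn))  # confident SVM: neoplastic
print(cascade_classify(0.1, svm, cnn))  # forwarded to CNN: non-neoplastic
```

    The design choice is that the cheap first stage handles easy cases, while the expensive second stage and the reject option absorb ambiguous ones, which is how the reported 9.3% rejection rate arises.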

  2. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm dramatically enhances the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) is also discussed for the three reconstruction algorithms. This noise-suppressing imaging technique will have wide application in remote sensing and security.
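A minimal sketch conveys the idea of discarding noise-dominated measurements before reconstruction; it uses plain correlation ghost imaging rather than the paper's compressive sensing recovery, and the 4-pixel object and random patterns are synthetic:

```python
import random

def ghost_reconstruct(patterns, buckets, threshold=0.0):
    """Correlation ghost imaging, G = <B*I> - <B><I>, keeping only the
    measurements whose bucket value exceeds a noise threshold."""
    kept = [(p, b) for p, b in zip(patterns, buckets) if b > threshold]
    n = len(kept)
    npix = len(patterns[0])
    mean_b = sum(b for _, b in kept) / n
    mean_i = [sum(p[j] for p, _ in kept) / n for j in range(npix)]
    return [sum(p[j] * b for p, b in kept) / n - mean_b * mean_i[j]
            for j in range(npix)]

random.seed(0)
obj = [0, 1, 1, 0]                      # 4-pixel "object" transmission
patterns = [[random.random() for _ in range(4)] for _ in range(4000)]
buckets = [sum(p[j] * obj[j] for j in range(4)) for p in patterns]
g = ghost_reconstruct(patterns, buckets)
# Bright object pixels accumulate a larger correlation than dark ones.
```

Raising `threshold` above the detector noise floor is what suppresses the background in the low-light regime the abstract describes.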

  3. Object-Based Crop Species Classification Based on the Combination of Airborne Hyperspectral Images and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Xiaolong Liu

    2015-01-01

Full Text Available Identification of crop species is an important issue in agricultural management. In recent years, many studies have explored this topic using multi-spectral and hyperspectral remote sensing data. In this study, we propose a framework for mapping crop species by combining hyperspectral and Light Detection and Ranging (LiDAR) data in an object-based image analysis (OBIA) paradigm. The aims of this work were the following: (i) to understand the performance in image segmentation of different spectral dimension-reduced features from hyperspectral data and their combination with LiDAR-derived height information; (ii) to understand what classification accuracies for crop species can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm, especially in regions with a fragmented agricultural landscape and a complicated crop planting structure; and (iii) to understand the contributions of the crop height derived from LiDAR data, as well as the geometric and textural features of image objects, to the separability of crop species. The study region was an irrigated agricultural area in the central Heihe river basin, characterized by many crop species, complicated crop planting structures, and a fragmented landscape. The airborne hyperspectral data acquired by the Compact Airborne Spectrographic Imager (CASI) at 1 m spatial resolution and the Canopy Height Model (CHM) derived from data acquired by the airborne Leica ALS70 LiDAR system were used for this study. The image segmentation accuracies of different feature combination schemes (very high-resolution imagery (VHR), VHR/CHM, and minimum noise fraction transformed data (MNF)/CHM) were evaluated and analyzed. The results showed that VHR/CHM outperformed the other two combination schemes with a segmentation accuracy of 84.8%. The object-based crop species classification results of different feature integrations indicated that

  4. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies remote sensing image segmentation on the Hadoop platform. After analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, the paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. First, the MapReduce image processing model of the Hadoop cloud platform is designed: image input and output are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment on a remote sensing image is compared with the same experiment using a MATLAB implementation of the Mean Shift algorithm. The experimental results show that, while maintaining good segmentation quality, the segmentation rate of remote sensing image segmentation on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation, and the effectiveness of the segmentation also improves.
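The map/reduce split for tile-wise segmentation can be sketched in plain Python; a simple binary threshold stands in for the Mean Shift step, and the tiling scheme is invented for the example:

```python
def map_segment(tile_id, tile, threshold=128):
    """Map task: segment one image tile (here a simple binary threshold
    stands in for Mean Shift)."""
    return tile_id, [[1 if px >= threshold else 0 for px in row] for row in tile]

def reduce_stitch(mapped):
    """Reduce task: reassemble the segmented tiles in tile-id order."""
    result = []
    for _, tile in sorted(mapped):
        result.extend(tile)
    return result

image = [[10, 200], [220, 30], [5, 250], [240, 15]]
tiles = [(0, image[:2]), (1, image[2:])]            # split into two tiles
segmented = reduce_stitch([map_segment(i, t) for i, t in tiles])
print(segmented)   # -> [[0, 1], [1, 0], [0, 1], [1, 0]]
```

In an actual Hadoop job, `map_segment` and `reduce_stitch` would be Mapper and Reducer implementations and the tile split would be handled by the rewritten input format the abstract mentions.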

  5. Deep learning for tumor classification in imaging mass spectrometry.

    Science.gov (United States)

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  6. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    Science.gov (United States)

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

Semi-supervised image classification aims to classify a large quantity of unlabeled images by harnessing typically scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classification in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of feature with a teacher and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from the viewpoint of its own modality. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process, leveraging multiple teachers and one learner, enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
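The core curriculum idea, committing only the currently simplest (most confidently classified) unlabeled samples each round, can be sketched with a single 1-NN "teacher"; the confidence measure and toy 1-D data are assumptions, not the paper's multi-modal formulation:

```python
def curriculum_propagate(labeled, unlabeled, classify, batch=1):
    """Classify unlabeled samples from simple to difficult: each round,
    commit only the samples the current model is most confident about
    (the 'curriculum'), then repeat with the enlarged labeled set."""
    labeled = dict(labeled)
    unlabeled = list(unlabeled)
    order = []
    while unlabeled:
        scored = [(classify(labeled, x), x) for x in unlabeled]
        scored.sort(key=lambda s: -s[0][1])          # most confident first
        for (label, _), x in scored[:batch]:
            labeled[x] = label
            order.append(x)
        unlabeled = [x for x in unlabeled if x not in labeled]
    return labeled, order

# 1-NN stand-in: confidence decays with distance to the nearest labeled point.
def nn_classify(labeled, x):
    nearest = min(labeled, key=lambda p: abs(p - x))
    return labeled[nearest], 1.0 / (1.0 + abs(nearest - x))

labels, order = curriculum_propagate({0.0: "a", 10.0: "b"},
                                     [4.0, 1.0, 9.0], nn_classify)
print(order)   # easy points near the seeds are committed before the midpoint
```

In MMCL the confidence would instead be a consensus over one teacher per feature modality, but the simple-to-difficult ordering is the same.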

  7. Hyperspectral band selection and classification of Hyperion image of Bhitarkanika mangrove ecosystem, eastern India

    Science.gov (United States)

    Ashokkumar, L.; Shanmugam, S.

    2014-10-01

Tropical mangrove forests along the coast evolve dynamically due to constant changes in the natural ecosystem and the ecological cycle. Remote sensing has paved the way for periodic monitoring and conservation of such floristic resources, compared to labour-intensive in-situ observations. With the laboratory-quality image spectra obtained from hyperspectral image data, species-level discrimination of habitats and ecosystems is attainable. One of the essential steps before classification of hyperspectral image data is band selection: it is important to eliminate redundant bands to mitigate the Hughes effect, which is likely to affect further image analysis and classification accuracy. This paper presents a methodology for selecting appropriate hyperspectral bands from the EO-1 Hyperion image for the identification and mapping of mangrove species and coastal landcover types in the Bhitarkanika coastal forest region, eastern India. The band selection procedure follows class-based elimination, and the separability of the classes is tested during band selection. Individual bands are de-correlated and redundant bands are removed using the band-wise correlation matrix. The percentage of class variance contributed by each band is analysed from the factors of PCA component ranking. Spectral bands are selected from the wavelength groups and statistically tested. Further, the band selection procedure is compared with similar techniques (band index and mutual information) for validation. The number of bands in the Hyperion image was reduced from 196 to 88 by the factor-based ranking approach. Classification was performed with a Support Vector Machine. The proposed factor-based ranking approach was observed to discriminate the mangrove species and other landcover units better than the other statistical approaches.
The predominant mangrove species Heritiera fomes, Excoecaria agallocha and Cynometra ramiflora are spectral
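The correlation-based elimination of redundant bands described above can be sketched as a greedy filter over the band-wise correlation matrix; the correlation threshold and the three synthetic bands are assumptions for the example:

```python
def select_bands(bands, max_corr=0.95):
    """Greedy de-correlation: keep a band only if its correlation with
    every already-kept band stays below max_corr."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb)

    kept = []
    for i, band in enumerate(bands):
        if all(abs(corr(band, bands[j])) < max_corr for j in kept):
            kept.append(i)
    return kept

# Three synthetic bands: band 1 is nearly a copy of band 0, band 2 is not.
b0 = [1.0, 2.0, 3.0, 4.0, 5.0]
b1 = [1.1, 2.0, 3.1, 4.0, 5.1]          # redundant with b0
b2 = [5.0, 1.0, 4.0, 2.0, 3.0]          # distinct
print(select_bands([b0, b1, b2]))        # -> [0, 2]
```

The paper's full procedure additionally ranks bands by their PCA-derived class-variance contribution before this elimination step.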

  8. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  9. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

Hyperspectral image classification is well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Spectral and texture features are then stacked into a high-dimensional vector. Next, the extracted features at a given position are transformed into a 2-D image, and the resulting images for all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
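The basic 8-neighbour LBP code used for texture features can be computed as follows; this is a minimal sketch of the descriptor only, and the paper's band selection and PCANet stages are not reproduced:

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern: threshold the neighbours of
    pixel (r, c) at the centre value and read them as an 8-bit code."""
    center = img[r][c]
    # Clockwise from the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))   # top-row neighbours set bits 0..2 -> 7
```

A histogram of these codes over a pixel's neighbourhood gives the texture feature vector that is stacked with the spectral values.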

  10. Classification of quantitative light-induced fluorescence images using convolutional neural network

    NARCIS (Netherlands)

    Imangaliyev, S.; van der Veen, M.H.; Volgenant, C.M.C.; Loos, B.G.; Keijser, B.J.F.; Crielaard, W.; Levin, E.; Lintas, A.; Rovetta, S.; Verschure, P.F.M.J.; Villa, A.E.P.

    2017-01-01

    Images are an important data source for diagnosis of oral diseases. The manual classification of images may lead to suboptimal treatment procedures due to subjective errors. In this paper an image classification algorithm based on Deep Learning framework is applied to Quantitative Light-induced

  11. Supervised remote sensing image classification: An example of a ...

    African Journals Online (AJOL)

    These conventional multi-class classifiers/algorithms are usually written in programming languages such as C, C++, and python. The objective of this research is to experiment the use of a binary classifier/algorithm for multi-class remote sensing task, implemented in MATLAB. MATLAB is a programming language just like C ...

  12. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

To solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by alternating minimization. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm recovers high-quality image signals under under-sampling conditions.

  13. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Na Li

    2018-03-01

Full Text Available The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of the ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively.

  14. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

Compound image segmentation plays a vital role in the compression of computer screen images, which mix textual, graphical, and pictorial content. In this paper, we compare two transform-based block classification methods for compound images using metrics such as classification speed, precision, and recall. Block-based classification approaches normally divide a compound image into fixed-size, non-overlapping blocks and then apply a frequency transform such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) to each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify blocks as text/graphics or picture/background. The classification accuracy of the block-classification-based segmentation techniques is measured by precision and recall. Compound images with smooth and complex backgrounds, containing text of varying size, colour, and orientation, are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time for both smooth- and complex-background images.
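The block feature step, per-block mean and standard deviation thresholded to separate text/graphics from picture/background, can be sketched as follows; the 2 × 2 blocks and the threshold value are assumptions for the example, standing in for the paper's 8 × 8 blocks:

```python
import statistics

def classify_block(block, std_threshold=40.0):
    """Label a block by the spread of its pixel values: text/graphics
    blocks have few, widely separated levels (high std), while smooth
    picture/background blocks vary little."""
    pixels = [px for row in block for px in row]
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)
    label = "text/graphics" if std > std_threshold else "picture/background"
    return mean, std, label

text_block = [[0, 255], [255, 0]]        # hard black/white edges
smooth_block = [[120, 124], [122, 126]]  # gently varying background
print(classify_block(text_block)[2])     # -> text/graphics
print(classify_block(smooth_block)[2])   # -> picture/background
```

In the compared methods these statistics are computed on DCT or DWT coefficients of each block rather than on raw pixels.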

  15. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

  16. Statistical methods for segmentation and classification of images

    DEFF Research Database (Denmark)

    Rosholm, Anders

    1997-01-01

    The central matter of the present thesis is Bayesian statistical inference applied to classification of images. An initial review of Markov Random Fields relates to the modeling aspect of the indicated main subject. In that connection, emphasis is put on the relatively unknown sub-class of Pickard...... with a Pickard Random Field modeling of a considered (categorical) image phenomemon. An extension of the fast PRF based classification technique is presented. The modification introduces auto-correlation into the model of an involved noise process, which previously has been assumed independent. The suitability...... of the extended model is documented by tests on controlled image data containing auto-correlated noise....

  17. CLASSIFICATION BY USING MULTISPECTRAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    C. T. Liao

    2012-07-01

Full Text Available Remote sensing images are generally recorded in a two-dimensional format containing multispectral information, and their semantic information is clearly visualized, so ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images depend heavily on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.
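The final height/color thresholding step can be sketched as follows; the class names, threshold values, and the NDVI-style ratio of near-infrared to red reflectance are assumptions for the example:

```python
def classify_points(points, height_thr=2.0, ndvi_thr=0.3):
    """Classify multispectral 3-D points by height above ground and a
    simple NDVI-style vegetation ratio."""
    labels = []
    for x, y, z, red, nir in points:
        ndvi = (nir - red) / (nir + red)
        if ndvi > ndvi_thr:
            labels.append("tree" if z > height_thr else "grass")
        else:
            labels.append("building" if z > height_thr else "ground")
    return labels

cloud = [
    (0.0, 0.0, 5.0, 0.2, 0.8),   # high, vegetated  -> tree
    (1.0, 0.0, 0.2, 0.2, 0.8),   # low, vegetated   -> grass
    (2.0, 0.0, 6.0, 0.5, 0.4),   # high, bare       -> building
    (3.0, 0.0, 0.1, 0.5, 0.4),   # low, bare        -> ground
]
print(classify_points(cloud))
```

Combining the two thresholds is what the multispectral point cloud enables: height alone cannot separate trees from buildings, and spectra alone cannot separate trees from grass.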

  18. Classification by Using Multispectral Point Cloud Data

    Science.gov (United States)

    Liao, C. T.; Huang, H. H.

    2012-07-01

Remote sensing images are generally recorded in a two-dimensional format containing multispectral information, and their semantic information is clearly visualized, so ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images depend heavily on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.

  19. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

Full Text Available In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple-view data is to concatenate these feature vectors into a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also ends up with the “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.

  20. Study on edge-extraction of remote sensing image

    International Nuclear Information System (INIS)

    Wen Jianguang; Xiao Qing; Xu Huiping

    2005-01-01

Image edge-extraction is an important step in image processing and recognition, and an active research topic. In this paper, based on the primary methods of remote sensing image edge-extraction, the authors for the first time propose several elements that should be considered before processing. The qualities of several remote sensing image edge-extraction methods are then systematically summarized. Finally, taking the area near Nasca (Peru) as an example, the edge-extraction of a magmatic range is analysed. (authors)

  1. Remote Sensing of Landscapes with Spectral Images

    Science.gov (United States)

    Adams, John B.; Gillespie, Alan R.

    2006-05-01

Remote Sensing of Landscapes with Spectral Images describes how to process and interpret spectral images using physical models, bridging the gap between the engineering and theoretical sides of remote sensing and the world that we encounter when we venture outdoors. The emphasis is on the practical use of images rather than on theory and mathematical derivations. Examples are drawn from a variety of landscapes, and interpretations are tested against the reality seen on the ground. The reader is led through analysis of real images (using figures and explanations), with examples chosen to illustrate important aspects of the analytic framework. The book presents a coherent view of practical remote sensing, leading from imaging and field work to the generation of useful thematic maps, and explains how to apply physical models to help interpret spectral images. This textbook will form a valuable reference for graduate students and professionals in a variety of disciplines including ecology, forestry, geology, geography, urban planning, archeology and civil engineering. It is supplemented by a website hosting digital colour versions of the figures in the book as well as ancillary images (www.cambridge.org/9780521662214).

  2. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  3. Portable remote sensing image processing system; Kahangata remote sensing gazo shori system

    Energy Technology Data Exchange (ETDEWEB)

    Fujikawa, S; Uchida, K; Tanaka, S; Jingo, H [Dowa Engineering Co. Ltd., Tokyo (Japan); Hato, M [Earth Remote Sensing Data Analysis Center, Tokyo (Japan)

    1997-10-22

Recently, geological analysis using remote sensing data has been put into practice thanks to data with high spectral and spatial resolution. Personal computers have improved remarkably in both software and hardware, and with Windows, software has become independent of specific hardware, making software development easier. Against this background, a portable remote sensing image processing system for Windows 95 has been developed. Using this system, basic image processing can be carried out, and the current location can be displayed on the image in real time by linking with GPS. Accordingly, printed images need not be brought along for image processing field work, and the system can be used instead of topographic maps for overseas surveys. The software is written in Microsoft Visual C++ ver. 2.0. 1 fig.

  4. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales, and the size of each scale, are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions containing different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects in remote sensing images.
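The NDVI computation and the similarity-threshold grouping can be sketched on a single image row; the threshold value and the reflectance values are assumptions for the example:

```python
def ndvi(red, nir):
    """Normalised difference vegetation index for one pixel."""
    return (nir - red) / (nir + red)

def segment_row(red_row, nir_row, sim_thr=0.15):
    """Group consecutive pixels whose NDVI stays within sim_thr of the
    running segment mean; a new segment starts where similarity breaks."""
    values = [ndvi(r, n) for r, n in zip(red_row, nir_row)]
    segments, current = [], [values[0]]
    for v in values[1:]:
        mean = sum(current) / len(current)
        if abs(v - mean) <= sim_thr:
            current.append(v)
        else:
            segments.append(len(current))
            current = [v]
    segments.append(len(current))
    return segments  # lengths of consecutive homogeneous runs

red = [0.2, 0.22, 0.21, 0.6, 0.62]
nir = [0.8, 0.80, 0.79, 0.40, 0.41]
print(segment_row(red, nir))   # -> [3, 2]: vegetation run, then bare run
```

The method in the abstract applies this similarity test in two dimensions and uses it to pick a segmentation scale per region rather than per row.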

  5. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

Full Text Available In this paper, we present a fully automatic classification of VHR satellite images. Unlike the most widespread approaches, supervised classification, which requires prior definition of class signatures, and unsupervised classification, which must be followed by an interpretation of its results, the proposed method requires no human intervention except for setting the initial parameters. The presented approach is based on both spectral and textural analysis of the image and consists of three steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes such as water, vegetation and non-vegetation, which all differ significantly in their spectra and thus can easily be extracted by spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and carry information on the texture of each pixel's neighbourhood, depending on the texture grain. The purpose of the texture analysis is to distinguish between classes that are spectrally similar but differ in texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Because the granulometric analysis is based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying the borders of objects in an image as areas of high texture), which affects other methods of texture analysis such as GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on the values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index based on the near-infrared and blue bands. Its purpose is to correct partially misclassified pixels.
All the indices used in the classification model developed relate to reflectance values, so the
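
    In outline, the spectral first step reduces to thresholding an NDVI value per pixel. A minimal sketch in Python (the threshold values here are illustrative assumptions, not the paper's calibrated parameters):

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def basic_class(nir, red, water_thr=0.0, veg_thr=0.4):
    """Step 1 of the pipeline: split pixels into three broad classes.
    Thresholds are illustrative, not the paper's calibrated values."""
    v = ndvi(nir, red)
    if v < water_thr:
        return "water"       # water absorbs NIR strongly -> negative NDVI
    elif v > veg_thr:
        return "vegetation"  # healthy vegetation reflects NIR strongly
    return "non-vegetation"

print(basic_class(nir=0.05, red=0.10))  # water-like spectrum
print(basic_class(nir=0.60, red=0.10))  # vegetation-like spectrum
```

    The spectrally ambiguous "non-vegetation" remainder is what the subsequent granulometric (textural) step then subdivides.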

  6. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the growing use of high-spatial-resolution remote sensing imagery, more and more studies have focused on object-oriented classification, covering both image segmentation and automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the values of the attributes of the image objects most suitable for classification are calculated. Then a number of image objects are selected as training data for an improved AdaTree algorithm, which produces classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain classification rules. In an experiment with a WorldView-2 image, the method based on the improved AdaTree showed clear improvements in accuracy and efficiency over a method based on SVM, with the kappa coefficient reaching 0.9242.
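
    The kappa coefficient reported here is the standard chance-corrected agreement measure between a classification and reference data. A minimal sketch of its computation from a confusion matrix (the matrix below is a toy example, not this paper's data):

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement from the row and column marginals
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

cm = [[48, 2],   # toy two-class result: 95 % overall accuracy
      [3, 47]]
print(round(kappa(cm), 3))  # → 0.9
```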

  7. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification.

    Science.gov (United States)

    Song, Yang; Li, Qing; Huang, Heng; Feng, Dagan; Chen, Mei; Cai, Weidong

    2017-08-01

    Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and the commonly used dimension reduction techniques.
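
    A simplified sketch of FV encoding may clarify the idea: with a diagonal-covariance GMM fitted to local descriptors, the image is represented by the gradient of the average log-likelihood with respect to the component means. The full FV also includes variance terms, and the GMM parameters below are invented for illustration:

```python
import math

def fisher_vector(X, weights, means, sigmas):
    """Fisher-vector sketch for a diagonal-covariance GMM: gradient of the
    average log-likelihood w.r.t. the component means (variance part
    omitted for brevity). Output dimension is K * D."""
    K, D = len(means), len(means[0])
    fv = [0.0] * (K * D)
    for x in X:
        # soft assignment gamma_k(x) of descriptor x to each component
        lik = [weights[k] * math.exp(-0.5 * sum(
                  ((x[d] - means[k][d]) / sigmas[k][d]) ** 2 for d in range(D)))
               for k in range(K)]
        z = sum(lik)
        for k in range(K):
            g = lik[k] / z
            for d in range(D):
                fv[k * D + d] += g * (x[d] - means[k][d]) / sigmas[k][d]
    # per-component normalisation by 1 / (N * sqrt(w_k))
    return [fv[k * D + d] / (len(X) * math.sqrt(weights[k]))
            for k in range(K) for d in range(D)]

X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]          # toy local descriptors
w, mu, sg = [0.5, 0.5], [[0.0, 0.0], [1.0, 1.0]], [[0.5, 0.5], [0.5, 0.5]]
print(len(fisher_vector(X, w, mu, sg)))           # 2 components x 2 dims = 4
```

    With real descriptors (e.g. SIFT, D = 128) and tens of components, this dimension grows quickly, which is what motivates the paper's separation-guided dimension reduction.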

  8. Radiomic features analysis in computed tomography images of lung nodule classification.

    Directory of Open Access Journals (Sweden)

    Chia-Hung Chen

    Full Text Available Radiomics, which extracts large numbers of quantitative image features from diagnostic medical images, has been widely used for prognostication, treatment response prediction and cancer detection. The treatment options for lung nodules depend on their diagnosis, benign or malignant. Conventionally, lung nodule diagnosis is based on invasive biopsy. Recently, radiomics features, a non-invasive method based on clinical images, have shown high potential in lesion classification and treatment outcome prediction. Lung nodule classification using radiomics based on Computed Tomography (CT) image data was investigated, and a 4-feature signature was introduced for lung nodule classification. Retrospectively, 72 patients with 75 pulmonary nodules were collected. Radiomics feature extraction was performed on non-enhanced CT images with contours delineated by an experienced radiation oncologist. Among the 750 image features in each case, 76 features were found to differ significantly between benign and malignant lesions. A radiomics signature was composed of the best 4 features, which included Laws_LSL_min, Laws_SLL_energy, Laws_SSL_skewness and Laws_EEL_uniformity. The accuracy of the signature in benign/malignant classification was 84%, with a sensitivity of 92.85% and a specificity of 72.73%. The classification signature based on radiomics features demonstrated very good accuracy and high potential for clinical application.
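
    The reported accuracy, sensitivity and specificity follow directly from a 2x2 outcome table. A sketch with illustrative counts (the abstract does not give the exact benign/malignant split, so the numbers below are assumptions chosen only to total 75 nodules):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from a 2x2 outcome table.
    Counts are illustrative, not the paper's actual case split."""
    sensitivity = tp / (tp + fn)      # malignant nodules correctly flagged
    specificity = tn / (tn + fp)      # benign nodules correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# e.g. 52 malignant (48 detected) and 23 benign (15 cleared) nodules
acc, sens, spec = diagnostic_metrics(tp=48, fn=4, tn=15, fp=8)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```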

  9. Time Series of Images to Improve Tree Species Classification

    Science.gov (United States)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species makes their classification challenging, because vegetation characteristics change with the seasons. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors attached to unmanned aerial vehicles (UAV). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: the image spectra of 2015 only; the image spectra of 2016 only; and the layer stacking of the images from 2015 and 2016. Four tree species were classified using the spectral angle mapper (SAM), spectral information divergence (SID) and random forest (RF). The results showed that SAM and SID caused an overfitting of the data, whereas RF showed better results, and the use of the layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
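
    The spectral angle mapper used here can be sketched in a few lines: each pixel is assigned to the reference spectrum subtending the smallest angle, which makes the measure insensitive to brightness scaling (e.g. illumination differences between dates). The reference spectra below are invented for illustration:

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper distance: the angle (radians) between two
    spectra, insensitive to overall brightness differences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references):
    """Assign the pixel to the reference spectrum with the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

refs = {"species_A": [0.1, 0.4, 0.6], "species_B": [0.5, 0.5, 0.2]}
print(sam_classify([0.2, 0.8, 1.2], refs))   # a scaled copy of species_A
```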

  10. Woodland Mapping at Single-Tree Levels Using Object-Oriented Classification of Unmanned Aerial Vehicle (uav) Images

    Science.gov (United States)

    Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.

    2017-09-01

    Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for the delineation of tree crowns and the recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using an object-oriented classification method to map part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from the observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data on vegetation stands at single-tree levels, and is therefore suitable for the assessment and monitoring of open woodlands.

  11. MULTI-TEMPORAL CLASSIFICATION AND CHANGE DETECTION USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Makuti

    2018-05-01

    Full Text Available In this paper, different methodologies for the classification and change detection of UAV image blocks are explored. The UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area such as a building construction site. Two change detection techniques have been evaluated in this study: pre-classification and post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes a Conditional Random Field (CRF) has been used: the unary potential was determined using the random forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, post-classification reached up to 62.6 % while pre-classification change detection reached 46.5 %. These results represent a first useful indication for future work and developments.
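
    The GLCM textural features mentioned here derive from a grey-level co-occurrence matrix. A toy sketch for a single displacement and the common "contrast" statistic (the displacement, level count and test patches are illustrative choices, not the paper's configuration):

```python
def glcm_contrast(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one displacement, plus the
    'contrast' statistic used as a textural feature."""
    glcm = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r][c]][img[r2][c2]] += 1  # count the level pair
    total = sum(map(sum, glcm)) or 1
    return sum(((i - j) ** 2) * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

flat = [[1, 1], [1, 1]]          # uniform patch -> zero contrast
busy = [[0, 3], [3, 0]]          # checkerboard -> high contrast
print(glcm_contrast(flat), glcm_contrast(busy))
```

    In practice several displacements and statistics (contrast, homogeneity, entropy, …) are stacked per pixel neighbourhood to form the textural feature vector.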

  12. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be useful for monitoring geographical areas through time. How to efficiently analyze such information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., convolutional neural networks, CNNs, on single images), while only very few studies involve temporal deep learning approaches (i.e., recurrent neural networks, RNNs) for remote sensing time series. In this letter we evaluate the ability of recurrent neural networks, in particular the long short-term memory (LSTM) model, to perform land cover classification on multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that recurrent neural networks are competitive with state-of-the-art classifiers, and may outperform classical approaches in the presence of under-represented and/or highly mixed classes. We also show that the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.
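
    The LSTM evaluated here maintains a memory cell that carries information across acquisition dates. A sketch of a single LSTM step for a scalar input, with untrained illustrative weights (a real model uses vector states and learned parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for a 1-D input and 1-D hidden state. W holds the
    (input, hidden, bias) weights of each gate; values are illustrative."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2]) # candidate
    c = f * c_prev + i * g    # memory cell carries information across dates
    h = o * math.tanh(c)      # hidden state summarises the series so far
    return h, c

W = {k: (0.5, 0.5, 0.0) for k in "ifog"}   # untrained toy weights
h = c = 0.0
for ndvi_t in [0.2, 0.5, 0.7]:             # a pixel's multi-temporal profile
    h, c = lstm_step(ndvi_t, h, c, W)
print(round(h, 4))
```

    For land cover classification, the final hidden state (or its vector analogue) is fed to a softmax layer, or reused as a feature vector for a standard classifier, as the letter suggests.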

  13. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  14. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism underlying SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients besides the sparsity-promoting ℓ1-norm. In this paper, we show that in the kernel space the nonnegativity constraint can play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show how to incorporate spatial-spectral information efficiently in the regularization framework.
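
    The nonnegativity constraint at the heart of (K)FCLS can be illustrated with a plain projected-gradient solver for nonnegative least squares. This is a simplified stand-in that omits the kernel mapping and the sum-to-one constraint of the full method:

```python
def nnls_pg(A, b, steps=500, lr=0.01):
    """Nonnegatively constrained least squares by projected gradient:
    minimise ||Ax - b||^2 subject to x >= 0.  Simplified stand-in for a
    fully constrained solver (kernels and sum-to-one omitted)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * grad[j]) for j in range(n)]  # project onto x >= 0
    return x

# b is a pure multiple of the first column: second coefficient should vanish
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 0.0, 2.0]
print([round(v, 3) for v in nnls_pg(A, b)])  # → [2.0, 0.0]
```

    In the classification setting, each column of A holds the (kernelized) spectrum of a training pixel, and the test pixel's label is read off from which columns receive large nonnegative coefficients.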

  15. Biomedical imaging modality classification using combined visual features and textual terms.

    Science.gov (United States)

    Han, Xian-Hua; Chen, Yen-Wei

    2011-01-01

    We describe an approach to automatic modality classification for the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper focuses on the process of feature extraction from medical images and fuses the different extracted visual features with a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of the image representation, a binary histogram of predefined vocabulary words from the image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT vs. MR or PET vs. NM, a local classifier is used to distinguish samples within the pair to improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.

  16. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 1: Introduction

    Directory of Open Access Journals (Sweden)

    Andrea Baraldi

    2012-09-01

    Full Text Available According to the existing literature, and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To surpass the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. The present first paper provides a multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches that augments similar analyses proposed in recent years. In line with constraints stemming from human vision, this SWOT analysis promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification. Hence, a symbolic deductive pre-attentive vision first stage accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the second part of this work a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design); (b) information/knowledge representation; (c) algorithm design; and (d) implementation. As proof of concept of the symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a

  17. An unsupervised technique for optimal feature selection in attribute profiles for spectral-spatial classification of hyperspectral images

    Science.gov (United States)

    Bhardwaj, Kaushal; Patra, Swarnajyoti

    2018-04-01

    The inclusion of spatial information along with spectral features plays a significant role in the classification of remote sensing images. Attribute profiles have already proved their ability to represent spatial information. In order to incorporate proper spatial information, multiple attributes are required, and for each attribute large profiles need to be constructed by varying the filter parameter values within a wide range. Thus, the constructed profiles that represent the spectral-spatial information of a hyperspectral image have a huge dimension, which leads to the Hughes phenomenon and increases the computational burden. To mitigate these problems, this work presents an unsupervised feature selection technique that selects, from the constructed high-dimensional multi-attribute profile, a subset of filtered images that is sufficiently informative to discriminate well among classes. To this end the proposed technique exploits genetic algorithms (GAs). The fitness function of the GAs is defined in an unsupervised way with the help of mutual information. The effectiveness of the proposed technique is assessed using a one-against-all support vector machine classifier. The experiments conducted on three hyperspectral data sets show the robustness of the proposed method in terms of computation time and classification accuracy.
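
    The GA-based selection can be sketched as evolving bit strings over the stack of filtered images. The fitness function below is a toy stand-in for the paper's unsupervised mutual-information criterion, and the GA operators are generic textbook choices, not the paper's exact configuration:

```python
import random

def ga_select(n_features, fitness, pop=20, gens=40, seed=0):
    """Genetic-algorithm feature selection sketch: individuals are bit
    strings marking which filtered images of the profile are kept."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        next_gen = popn[:2]                       # elitism: keep the best two
        while len(next_gen) < pop:
            a, b = rng.sample(popn[:10], 2)       # select among the fittest
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:                # occasional bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            next_gen.append(child)
        popn = next_gen
    return max(popn, key=fitness)

# toy criterion: features 0 and 3 are informative, extra ones are penalised
informative = {0, 3}
fit = lambda bits: sum(1 for i, b in enumerate(bits) if b and i in informative) \
                   - 0.2 * sum(bits)
print(ga_select(8, fit))
```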

  18. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available Aiming at parts of remote sensing images with low brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled shearlet transform and the parameterized logarithmic image processing model is proposed in this paper to improve the visual effect and interpretability of remote sensing images. First, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. A large number of experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effect and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details with better visual effect.
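
    The PLIP model replaces ordinary arithmetic with operations that respect the bounded dynamic range of images. A sketch of one common formulation (PLIP conventions vary between papers; gamma here is the range bound, and the specific parameterization used in this paper may differ):

```python
def plip_add(g1, g2, gamma=256.0):
    """PLIP addition of two grey tones: g1 (+) g2 = g1 + g2 - g1*g2/gamma.
    Unlike ordinary addition, the result never exceeds gamma."""
    return g1 + g2 - g1 * g2 / gamma

def plip_scale(c, g, gamma=256.0):
    """PLIP scalar multiplication: c (*) g = gamma - gamma*(1 - g/gamma)**c."""
    return gamma - gamma * (1.0 - g / gamma) ** c

# boosting a bright coefficient stays inside the dynamic range,
# whereas ordinary 2*200 = 400 would overflow the 256 bound
print(plip_scale(2.0, 200.0))   # → 243.75
print(plip_add(200.0, 200.0))   # → 243.75, still below 256
```

    This bounded behaviour is what makes PLIP-based contrast stretching of the low-frequency component safe against saturation artifacts.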

  19. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expression, and classifying phenotypes. To provide effective and efficient image classification and annotation for ever-increasing volumes of microscopy images, it is desirable to have tools that can combine and compare various algorithms and build customizable solutions for different biological problems. However, current tools often offer limited support for generating user-friendly and extensible solutions for annotating higher-dimensional images that correspond to multiple complicated categories. We developed the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets, as well as regions of interest (ROIs) in individual images, for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of about 20 built-in algorithms for feature extraction, selection and classification in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form adaptive solutions for various problems, and the plugin-based extensibility gives the tool an open architecture to incorporate future algorithms. We have applied BIOCAT to the classification and annotation of images and ROIs of different properties, with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern-recognition-based biological image classification of two- and three-dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological

  20. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR in comparison with state-of-the-art algorithms.

  1. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method that iteratively reconstructs both the coil sensitivities and the MR image simultaneously based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly, and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLIP, for BLind Iterative Parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint on the image as done in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLIP reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLIP algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.
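
    Compressed-sensing reconstructions of this kind typically enforce the sparseness constraint through soft-thresholding, the proximal operator of the ℓ1 penalty, applied to transform coefficients at each iteration. A minimal sketch of that operator:

```python
def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of the l1 sparsity penalty
    used in compressed-sensing reconstruction: shrink toward zero by t,
    zeroing anything smaller than the threshold."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# small coefficients (likely noise) vanish; large ones are kept, shrunk
print([soft_threshold(v, 0.5) for v in [-1.2, -0.3, 0.1, 2.0]])
# → [-0.7, 0.0, 0.0, 1.5]
```

    In a blind method like the one above, an iteration of this kind alternates with an update of the unknown coil sensitivities, which is what distinguishes it from standard compressed sensing with a known sensing matrix.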

  2. Automated otolith image classification with multiple views: an evaluation on Sciaenidae.

    Science.gov (United States)

    Wong, J Y; Chu, C; Chong, V C; Dhillon, S K; Loh, K H

    2016-08-01

    Combined multiple 2D views (proximal, anterior and ventral aspects) of the sagittal otolith are proposed here as a method to capture shape information for fish classification. Comparison of single-view with combined 2D views shows improved classification accuracy for the latter, for nine species of Sciaenidae. The effects of shape description methods (shape indices, Procrustes analysis and elliptical Fourier analysis) on classification performance were evaluated. Procrustes analysis and elliptical Fourier analysis perform better than shape indices when a single view is considered, but all perform equally well with combined views. A generic content-based image retrieval (CBIR) system that ranks dissimilarity (Procrustes distance) of otolith images was built to search query images without the need for detailed information on the side (left or right), aspect (proximal or distal) and direction (positive or negative) of the otolith. Methods for the development of this automated classification system are discussed. © 2016 The Fisheries Society of the British Isles.

  3. Research on active imaging information transmission technology of satellite borne quantum remote sensing

    Science.gov (United States)

    Bi, Siwen; Zhen, Ming; Yang, Song; Lin, Xuling; Wu, Zhiqiang

    2017-08-01

    In line with the development and application needs of remote sensing science and technology, Prof. Siwen Bi proposed quantum remote sensing. First, the paper gives a brief introduction to the background of quantum remote sensing, the research status at home and abroad regarding its theory, information mechanism and imaging experiments, and the production of a principle prototype. Then, the quantization of the pure remote sensing radiation field and the state function and squeezing effect of the quantum remote sensing radiation field are emphasized. It also describes the squeezing optical operator of the quantum light field in active imaging information transmission and imaging experiments, which achieved 2-3 times higher resolution than coherent light detection imaging and completed the production of a quantum remote sensing imaging prototype. The application of quantum remote sensing technology can significantly improve both the signal-to-noise ratio of information transmission imaging and the spatial resolution of quantum remote sensing. On this basis, Prof. Bi proposed a technical solution for active imaging information transmission by satellite-borne quantum remote sensing, and launched research on its system composition and operating principle and on quantum noiseless amplifying devices, providing solutions and a technical basis for implementing active imaging information technology for satellite-borne quantum remote sensing.

  4. Classification of Dust Days by Satellite Remotely Sensed Aerosol Products

    Science.gov (United States)

    Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.

    2013-01-01

    Considerable progress in satellite remote sensing (SRS) of dust particles has been made in the last decade. From an environmental health perspective, the detection of such events, once linked to ground particulate matter (PM) concentrations, can serve as a proxy for acute exposure to respirable particles of certain properties (i.e. size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter smaller than 10 μm in diameter) measurements has been suggested, which has been validated by compositional analysis. This scheme requires information on ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model applied to a database containing ground monitoring (the dependent variable) and SRS aerosol product (the independent variables) records revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression that uses these variables, coded as binary

  5. Classification of radiolarian images with hand-crafted and deep features

    Science.gov (United States)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to date deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies. Automated image classification allows these analyses to be made promptly. In this study, a method for automatic radiolarian image classification is proposed on scanning electron microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features and basic morphological features, and deep features obtained from a pre-trained convolutional neural network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.

  6. Integrating remote sensing and terrain data in forest fire modeling

    Science.gov (United States)

    Medler, Michael Johns

    Forest fire policies are changing. Managers now face conflicting imperatives to re-establish pre-suppression fire regimes while simultaneously preventing resource destruction. They must, therefore, understand the spatial patterns of fires. Geographers can facilitate this understanding by developing new techniques for mapping fire behavior. This dissertation develops such techniques for mapping recent fires and using these maps to calibrate models of potential fire hazards. In so doing, it features techniques that strive to address the inherent complexity of modeling the combinations of variables found in most ecological systems. Image processing techniques were used to stratify the elements of terrain: slope, elevation, and aspect. These stratification images were used to ensure that sample placement considered the role of terrain in fire behavior. Examination of multiple stratification images indicated samples were placed representatively across a controlled range of scales. The incorporation of terrain data also improved preliminary fire hazard classification accuracy by 40% compared with remotely sensed data alone. A Kauth-Thomas (KT) transformation of pre-fire and post-fire Thematic Mapper (TM) remotely sensed data produced brightness, greenness, and wetness images. Image subtraction indicated fire-induced change in brightness, greenness, and wetness. Field data guided a fuzzy classification of these change images. Because fuzzy classification can characterize a continuum of phenomena where discrete classification may produce artificial borders, it was found to offer a range of fire severity information unavailable with discrete classification. These mapped fire patterns were used to calibrate a model of fire hazards for the entire mountain range. Pre-fire TM data and a digital elevation model produced a set of co-registered images. Training statistics were developed from 30 polygons associated with the previously mapped fire severity. Fuzzy

  7. Remote Sensing Image in the Application of Agricultural Tourism Planning

    Directory of Open Access Journals (Sweden)

    Guojing Fan

    2013-06-01

    Full Text Available This paper introduces the processing technology for high-resolution remote sensing imagery, the specific process of producing tourism maps, and the key applications of different remote sensing data in tourism planning. Remote sensing extracts agricultural tourism planning information, improving the scientific rigor and operability of agricultural tourism planning. The application of remote sensing imagery in agricultural tourism planning will therefore be an inevitable trend in tourism development.

  8. An improved optimum-path forest clustering algorithm for remote sensing image segmentation

    Science.gov (United States)

    Chen, Siya; Sun, Tieli; Yang, Fengqin; Sun, Hongguang; Guan, Yu

    2018-03-01

    Remote sensing image segmentation is a key technology for processing remote sensing images. The segmentation results can be used for feature extraction, target identification, and object description; segmentation thus directly affects subsequent processing results. This paper proposes a novel Optimum-Path Forest (OPF) clustering algorithm for remote sensing image segmentation. The method uses the principle that cluster centres are characterized by high local densities and by large distances between the centres and samples with higher densities. A new probability density function for OPF clustering is defined based on this principle and applied to remote sensing image segmentation. Experiments are conducted on five remote sensing land cover images. The experimental results illustrate that the proposed method can outperform the original OPF approach.
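    The centre-selection principle cited above (centres have high local density and lie far from any point of higher density) can be sketched in a few lines of NumPy on synthetic 2-D data; the kernel width, blob layout, and nearest-centre assignment are illustrative assumptions, not the paper's OPF procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
blobs = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in blobs])

# Pairwise distances
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Local density via a Gaussian kernel (continuous, so no ties)
rho = np.exp(-(D / 1.0) ** 2).sum(1)

# Delta: distance to the nearest point of higher density
delta = np.empty(len(X))
for i in range(len(X)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = D[i, higher].min() if len(higher) else D[i].max()

# Cluster centres: points maximising density * distance
centres = np.argsort(rho * delta)[::-1][:3]

# Assign every point to its nearest centre
labels = D[:, centres].argmin(1)
sizes = np.bincount(labels, minlength=3)
```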

  9. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    Science.gov (United States)

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues showed similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to help translate the data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly with CNNs and CNNs-SVM, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.

  10. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks

    Science.gov (United States)

    Xu, Xin; Gui, Rong; Pu, Fangling

    2018-01-01

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of this excellent performance, more and more CNN-based methods are being applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can classify only one pixel at a time; because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously, so when classifying a whole PolSAR image it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use this interrelation to improve the classification results. FFS-CNN is evaluated on a Chinese Gaofen-3 PolSAR image and two other real PolSAR images. Experimental results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499

  11. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    Science.gov (United States)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks built-in remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to implement remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  12. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks mainly to ESA and USGS, a large number of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability, and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study, five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full), and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the
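    The benchmarking protocol (several classifiers compared by k-fold cross-validated accuracy) can be sketched with scikit-learn; the synthetic data stands in for the Sentinel-2 pixel spectra, and the classifier subset and parameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for pixel spectra with 5 land-cover classes
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

models = {
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(kernel="rbf", gamma="scale"),
    "logreg": LogisticRegression(max_iter=1000),
}

# Ten-fold cross-validated accuracy per model (the 'kfold' validation)
scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
```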

  13. Classification Of Cluster Area For Satellite Image

    Directory of Open Access Journals (Sweden)

    Thwe Zin Phyo

    2015-06-01

    Full Text Available Abstract This paper describes area classification for a Landsat 7 satellite image. The main purpose of this system is to classify the area of each cluster contained in a satellite image. To classify this image, the satellite image first needs to be clustered into different land cover types. Clustering is an unsupervised learning method that aims to classify an image into homogeneous regions. This system is implemented based on color features with the K-means unsupervised clustering algorithm. This method does not need to train images before clustering. The clusters of the satellite image are grouped into a set of three clusters for the Landsat 7 satellite image. For this work, the combined band 432 from Landsat 7 is used as input. A satellite image of the Mandalay area in 2001 is chosen to test the segmentation method. After clustering, a specific range for the three clustered images must be defined in order to obtain the green land, water, and urban balance. This system is implemented using the MATLAB programming language.
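    The K-means clustering step can be sketched in plain NumPy (the paper uses MATLAB) on a synthetic three-band image with three colour regions; the farthest-point initialisation is an added assumption to keep this toy example stable:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 60x60 RGB image: three vertical bands of distinct colour + noise
img = np.zeros((60, 60, 3))
img[:, :20] = (200, 30, 30)
img[:, 20:40] = (30, 200, 30)
img[:, 40:] = (30, 30, 200)
img += rng.normal(0, 10, img.shape)

X = img.reshape(-1, 3)
k = 3

# Farthest-point initialisation, then standard Lloyd iterations
centres = [X[rng.integers(len(X))]]
for _ in range(k - 1):
    d = np.min([((X - c) ** 2).sum(1) for c in centres], axis=0)
    centres.append(X[d.argmax()])
centres = np.array(centres)

for _ in range(20):
    labels = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1).argmin(1)
    centres = np.array([X[labels == j].mean(0) for j in range(k)])

label_img = labels.reshape(60, 60)

# Majority cluster label inside each true colour band
band_majority = [np.bincount(label_img[:, s].ravel()).argmax()
                 for s in (slice(0, 20), slice(20, 40), slice(40, 60))]
```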

  14. WOODLAND MAPPING AT SINGLE-TREE LEVELS USING OBJECT-ORIENTED CLASSIFICATION OF UNMANNED AERIAL VEHICLE (UAV) IMAGES

    Directory of Open Access Journals (Sweden)

    A. Chenari

    2017-09-01

    Full Text Available Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for delineation of tree crowns and recognition of species in various types of ecosystems. However, it is still unclear whether this widely-used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using an object-oriented classification method for mapping part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09 ± 24.67 m2) and wild almonds (3.97 ± 1.69 m2), with no significant difference from their observed values (α = 0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we concluded that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at single-tree levels, which is therefore suitable for assessment and monitoring of open woodlands.

  15. Classification of objects on hyperspectral images — further developments

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey V.; Williams, Paul

    Classification of objects (such as tablets, cereals, fruits, etc.) is one of the very important applications of hyperspectral imaging and image analysis. Quite often, a hyperspectral image is represented and analyzed just as a set of spectra, without taking into account spatial information about...... the pixels, which makes classification of objects inefficient. Recently, several methods which combine spectral and spatial information have also been developed, and this approach is becoming more and more widespread. The methods use local rank, topology, spectral features calculated for separate objects and other...... spatial characteristics. In this work we show several improvements to the classification method which utilizes spectral features calculated for individual objects [1]. The features are based (in general) on descriptors of spatial patterns of individual object’s pixels in a common principal...

  16. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    Science.gov (United States)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We proposed a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV), and normal; it then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using an Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, mapping every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
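    The retrieval stage (same predicted label, then top-K nearest neighbours in the bag-feature space) reduces to a label-filtered nearest-neighbour search. A sketch with random vectors standing in for the multi-class bag features (the dimensions and labels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Archival database: feature vectors with class labels 0 (MA), 1 (NV), 2 (normal)
feats = rng.normal(0, 1, (300, 16))
labels = rng.integers(0, 3, 300)

def retrieve(query_feat, query_label, K=5):
    """Return indices and distances of the top-K nearest same-label images."""
    idx = np.where(labels == query_label)[0]         # label filter
    d = np.linalg.norm(feats[idx] - query_feat, axis=1)
    order = np.argsort(d)[:K]                        # K nearest by distance
    return idx[order], d[order]

query = rng.normal(0, 1, 16)
hits, dists = retrieve(query, query_label=1, K=5)
```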

  17. A minimum spanning forest based classification method for dedicated breast CT images

    International Nuclear Information System (INIS)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging
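    The DICE overlap ratio used for evaluation is simple to compute; a sketch with toy binary masks (the masks are assumptions for checking the formula):

```python
import numpy as np

def dice(a, b):
    """DICE overlap: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), dtype=bool)
manual = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:3] = True        # 4 pixels from the automatic segmentation
manual[1:3, 0:2] = True      # 4 pixels from the manual segmentation
score = dice(auto, manual)   # overlap is 2 pixels -> 2*2/(4+4) = 0.5
```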

  18. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

    Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
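    A minimal sketch of the extreme learning machine idea underlying the paper (a fixed random hidden layer with output weights solved in closed form); the single hidden layer, its size, and the toy two-ring dataset are assumptions for illustration, not the paper's deep multi-layer variant:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear problem: inner ring (class -1) vs outer ring (class +1)
def make_rings(n):
    angles = rng.uniform(0, 2 * np.pi, n)
    radii = np.where(np.arange(n) % 2 == 0, 1.0, 3.0)
    y = np.where(radii > 2.0, 1.0, -1.0)
    X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    return X + rng.normal(0, 0.1, X.shape), y

Xtr, ytr = make_rings(400)
Xte, yte = make_rings(400)

# ELM: random input weights stay fixed; only output weights are learned
hidden = 100
W = rng.normal(0, 1, (2, hidden))
b = rng.normal(0, 1, hidden)

def features(X):
    return np.tanh(X @ W + b)

# Output weights via ridge regression (closed form, no backprop)
H = features(Xtr)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(hidden), H.T @ ytr)

acc = (np.sign(features(Xte) @ beta) == yte).mean()
```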

  20. Texture Retrieval from VHR Optical Remote Sensed Images Using the Local Extrema Descriptor with Application to Vineyard Parcel Detection

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2016-04-01

    Full Text Available In this article, we develop a novel method for the detection of vineyard parcels in agricultural landscapes based on very high resolution (VHR) optical remote sensing images. Our objective is to apply texture-based image retrieval and supervised classification algorithms. To do so, the local textural and structural features inside each image are taken into account to measure its similarity to other images. In fact, VHR images usually involve a variety of local textures and structures that may verify only a weak stationarity hypothesis. Hence, an approach based only on characteristic points, rather than on all pixels of the image, is expected to be relevant. This work proposes the local extrema-based descriptor (LED), constructed from the local maximum and local minimum pixels extracted from the image. The LED descriptor is formed from the radiometric, geometric, and gradient features of these local extrema. We first exploit the proposed LED descriptor for the retrieval task to evaluate its performance on texture discrimination. Then, it is embedded into a supervised classification framework to detect vine parcels using VHR satellite images. Experiments performed on VHR panchromatic PLEIADES image data demonstrate the effectiveness of the proposed strategy. Compared to state-of-the-art methods, an enhancement of about 7% in retrieval rate is achieved. For the detection task, about 90% of vineyards are correctly detected.
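    The first step of LED, extracting local maximum and local minimum pixels, can be sketched as a strict 3x3 neighbourhood comparison in NumPy; the window size and the synthetic test image are assumptions, not the paper's exact settings:

```python
import numpy as np

def local_extrema(img):
    """Coordinates of pixels strictly greater (maxima) or strictly
    smaller (minima) than all 8 of their neighbours."""
    pmax = np.pad(img, 1, constant_values=-np.inf)
    pmin = np.pad(img, 1, constant_values=np.inf)
    h, w = img.shape
    is_max = np.ones(img.shape, bool)
    is_min = np.ones(img.shape, bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            nb_max = pmax[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            nb_min = pmin[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            is_max &= img > nb_max
            is_min &= img < nb_min
    return np.argwhere(is_max), np.argwhere(is_min)

img = np.zeros((50, 50))
img[10, 10] = img[30, 40] = 5.0   # two bright peaks
img[20, 25] = -4.0                # one dark pit
maxima, minima = local_extrema(img)
```

    In the full method, radiometric, geometric, and gradient features would then be computed at these keypoints to form the descriptor.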

  1. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm is based on the classification of vectors of topological features of the original image. We use an image processing scheme which includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of the descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. We give a detailed algorithm for the construction of the topological descriptors and for the classifier construction procedure, which combines the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  2. Urban Shanty Town Recognition Based on High-Resolution Remote Sensing Images and National Geographical Monitoring Features - a Case Study of Nanning City

    Science.gov (United States)

    He, Y.; He, Y.

    2018-04-01

    Urban shanty towns are communities of contiguous old and dilapidated houses covering more than 2000 square meters of built-up area or comprising more than 50 households. This study attempts to extract shanty towns in Nanning City using census products and TripleSat satellite images. With 0.8-meter high-resolution remote sensing images, texture characteristics of shanty towns, including energy, contrast, maximum probability, and inverse difference moment, are trained and analyzed through the gray-level co-occurrence matrix (GLCM). In this study, samples of shanty towns are well classified, with 98.2 % producer accuracy for unsupervised classification and 73.2 % correctness for supervised classification. Low-rise and mid-rise residential blocks in Nanning City are classified into 4 different types by using k-means clustering and nearest-neighbour classification, respectively. This study initially establishes texture feature descriptions of different types of residential areas, especially low-rise and mid-rise buildings, which would help city administrators evaluate residential blocks and reconstruct shanty towns.
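    The GLCM texture features named above can be computed directly; this NumPy sketch builds a horizontally-offset, symmetric co-occurrence matrix and derives energy, contrast, maximum probability, and inverse difference moment. The offset choice and the two toy images are assumptions used only to check the formulas:

```python
import numpy as np

def glcm_features(img, levels):
    """GLCM for offset (0, 1), made symmetric and normalised, plus features."""
    i, j = img[:, :-1].ravel(), img[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (i, j), 1.0)
    glcm += glcm.T                 # count both (i, j) and (j, i)
    glcm /= glcm.sum()
    r, c = np.indices(glcm.shape)
    return {
        "energy": (glcm ** 2).sum(),
        "contrast": (glcm * (r - c) ** 2).sum(),
        "max_probability": glcm.max(),
        "inverse_difference_moment": (glcm / (1.0 + np.abs(r - c))).sum(),
    }

flat = np.ones((8, 8), dtype=int)           # perfectly uniform texture
checker = np.indices((8, 8)).sum(0) % 2     # maximally contrasting texture
f_flat = glcm_features(flat, levels=2)
f_checker = glcm_features(checker, levels=2)
```

    A uniform patch gives zero contrast and maximum probability 1, while the checkerboard concentrates all mass off-diagonal, giving contrast 1.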

  3. A Space-Time Periodic Task Model for Recommendation of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Xiuhong Zhang

    2018-01-01

    Full Text Available With the rapid development of remote sensing technology, the quantity and variety of remote sensing images are growing so quickly that proactive and personalized access to data has become an inevitable trend. One of the active approaches is remote sensing image recommendation, which can offer related image products to users according to their preference. Although multiple studies on remote sensing retrieval and recommendation have been performed, most of these studies model the user profiles only from the perspective of spatial area or image features. In this paper, we propose a spatiotemporal recommendation method for remote sensing data based on the probabilistic latent topic model, which is named the Space-Time Periodic Task model (STPT. User retrieval behaviors of remote sensing images are represented as mixtures of latent tasks, which act as links between users and images. Each task is associated with the joint probability distribution of space, time and image characteristics. Meanwhile, the von Mises distribution is introduced to fit the distribution of tasks over time. Then, we adopt Gibbs sampling to learn the random variables and parameters and present the inference algorithm for our model. Experiments show that the proposed STPT model can improve the capability and efficiency of remote sensing image data services.

  4. Remote Sensing of Irrigated Agriculture: Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Chelsea Cervantes

    2010-09-01

    Full Text Available Over the last several decades, remote sensing has emerged as an effective tool to monitor irrigated lands over a variety of climatic conditions and locations. The objective of this review, which summarizes the methods and results of existing remote sensing studies, is to synthesize principal findings and assess the state of the art. We take a taxonomic approach to group studies based on location, scale, inputs, and methods, in an effort to categorize different approaches within a logical framework. We seek to evaluate the ability of remote sensing to provide synoptic and timely coverage of irrigated lands in several spectral regions. We also investigate the value of archived data that enable comparison of images through time. This overview of the studies to date indicates that remote sensing-based monitoring of irrigation is at an intermediate stage of development at local scales. For instance, there is overwhelming consensus on the efficacy of vegetation indices in identifying irrigated fields. Also, single-date imagery, acquired at peak growing season, may suffice to identify irrigated lands, although multi-date image data are necessary for improved classification and to distinguish different crop types. At local scales, the mapping of irrigated lands with remote sensing is also strongly affected by the timing of image acquisition and the number of images used. At the regional and global scales, on the other hand, remote sensing has not been fully operational, as methods that work in one place and time are not necessarily transferable to other locations and periods. Thus, at larger scales, more work is required to identify the best spectral indices, best time periods, and best classification methods under different climatological and cultural environments.
Existing studies at regional scales also establish the fact that both remote sensing and national statistical approaches require further refinement with a substantial investment of

  5. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and ground objects is one of the most common causes of remote sensing image degradation, and it seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and accurately identifying the motion blur direction and length is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used to calculate the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
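    Once the blur parameters are known, the Lucy-Richardson restoration step can be sketched with FFT-based circular convolution in NumPy; the symmetric horizontal box PSF, test image, and iteration count are illustrative assumptions, not the paper's estimated values:

```python
import numpy as np

N = 64
truth = np.zeros((N, N))
truth[24:40, 24:40] = 1.0          # a bright square on a dark background

# Symmetric horizontal motion-blur PSF of length 5, centred then shifted
psf = np.zeros((N, N))
psf[N // 2, N // 2 - 2:N // 2 + 3] = 1.0 / 5.0
otf = np.fft.fft2(np.fft.ifftshift(psf))

def conv(x):
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

blurred = conv(truth)

# Lucy-Richardson iterations (this PSF is symmetric, so it equals its mirror)
eps = 1e-12
estimate = blurred.copy()
for _ in range(30):
    ratio = blurred / (conv(estimate) + eps)
    estimate *= conv(ratio)

mse_blurred = ((blurred - truth) ** 2).mean()
mse_restored = ((estimate - truth) ** 2).mean()
```

    With the correct PSF and no noise, the multiplicative updates progressively sharpen the smeared edges, so the restored image is closer to the truth than the blurred input.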

  6. Classification of Large-Scale Remote Sensing Images for Automatic Identification of Health Hazards: Smoke Detection Using an Autologistic Regression Classifier.

    Science.gov (United States)

    Wolters, Mark A; Dean, C B

    2017-01-01

    Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.

  7. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  8. Remote classification from an airborne camera using image super-resolution.

    Science.gov (United States)

    Woods, Matthew; Katsaggelos, Aggelos

    2017-02-01

    The image processing technique known as super-resolution (SR), which attempts to increase the effective pixel sampling density of a digital imager, has gained rapid popularity over the last decade. The majority of literature focuses on its ability to provide results that are visually pleasing to a human observer. In this paper, we instead examine the ability of SR to improve the resolution-critical capability of an imaging system to perform a classification task from a remote location, specifically from an airborne camera. In order to focus the scope of the study, we address and quantify results for the narrow case of text classification. However, we expect the results generalize to a large set of related, remote classification tasks. We generate theoretical results through simulation, which are corroborated by experiments with a camera mounted on a DJI Phantom 3 quadcopter.

  9. Integrating image processing and classification technology into automated polarizing film defect inspection

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high-precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high-pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by the Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, and the contrast and homogeneity of the gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifiers. Then, 96 defect images are used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of a single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
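The GLCM contrast and homogeneity features mentioned above are standard texture descriptors and can be computed directly. The sketch below is a minimal hand-rolled version for one horizontal offset, not the authors' implementation; the toy image values are illustrative.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, normalised to sum to 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast_homogeneity(p):
    """Haralick contrast and homogeneity of a normalised co-occurrence matrix p."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
c, h = contrast_homogeneity(p)   # contrast = 7/12, homogeneity = 9.7/12
```

In practice a library routine (e.g. scikit-image's `graycomatrix`/`graycoprops`) would be used instead, typically averaging over several offsets and directions.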

  10. Classification of Hyperspectral Images by SVM Using a Composite Kernel by Employing Spectral, Spatial and Hierarchical Structure Information

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2018-03-01

Full Text Available In this paper, we introduce a novel classification framework for hyperspectral images (HSIs) by jointly employing spectral, spatial, and hierarchical structure information. In this framework, the three types of information are integrated into the SVM classifier by way of multiple kernels. Specifically, the spectral kernel is constructed through each pixel’s vector value in the original HSI, and the spatial kernel is modeled by using the extended morphological profile method due to its simplicity and effectiveness. To accurately characterize hierarchical structure features, the techniques of the Fisher-Markov selector (FMS), marker-based hierarchical segmentation (MHSEG) and algebraic multigrid (AMG) are combined. First, the FMS algorithm is used on the original HSI for feature selection to produce its spectral subset. Then, the multigrid structure of this subset is constructed using the AMG method. Subsequently, the MHSEG algorithm is exploited to obtain a hierarchy consisting of a series of segmentation maps. Finally, the hierarchical structure information is represented by using these segmentation maps. The main contribution of this work is to present an effective composite kernel for HSI classification by utilizing spatial structure information at multiple scales. Experiments were conducted on two hyperspectral remote sensing images to validate that the proposed framework can achieve better classification results than several popular kernel-based classification methods in terms of both qualitative and quantitative analysis. Specifically, the proposed classification framework achieves an overall accuracy that is on average 13.46–15.61% higher than that of the standard SVM classifier under different training sets.
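A composite kernel of the kind described above is just a convex combination of per-view kernels. The sketch below builds one from toy spectral and spatial feature vectors; the weight `mu`, the RBF widths, and the data are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Pairwise RBF (Gaussian) kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy per-pixel features: spectral vectors and spatial (e.g. morphological-profile) vectors
rng = np.random.default_rng(1)
X_spec = rng.normal(size=(5, 10))
X_spat = rng.normal(size=(5, 3))

mu = 0.6  # trade-off between the spectral and spatial kernels (a tuning parameter)
K = mu * rbf_kernel(X_spec, X_spec, 0.1) + (1 - mu) * rbf_kernel(X_spat, X_spat, 0.5)
# A convex combination of valid kernels is itself a valid kernel, so K can be
# passed to an SVM that accepts precomputed kernel matrices.
```

Additional views (here, the hierarchical-structure kernel) extend the sum with further weighted terms.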

  11. Multi-Functional Sensing for Swarm Robots Using Time Sequence Classification: HoverBot, an Example

    Directory of Open Access Journals (Sweden)

    Markus P. Nemitz

    2018-05-01

Full Text Available Scaling up robot swarms to collectives of hundreds or even thousands without sacrificing sensing, processing, and locomotion capabilities is a challenging problem. Low-cost robots are potentially scalable, but the majority of existing systems have limited capabilities, and these limitations substantially constrain the type of experiments that could be performed by robotics researchers. Instead of adding functionality by adding more components and therefore increasing the cost, we demonstrate how low-cost hardware can be used beyond its standard functionality. We systematically review 15 swarm robotic systems and analyse their sensing capabilities by applying a general sensor model from the sensing and measurement community. This work is based on the HoverBot system. A HoverBot is a levitating circuit board that manoeuvres by pulling itself towards magnetic anchors that are embedded into the robot arena. We show that HoverBot’s magnetic field readouts from its Hall-effect sensor can be associated with successful-movement, robot-rotation and collision measurands. We build a time series classifier based on these magnetic field readouts. We modify and apply signal processing techniques to enable the online classification of the time-variant magnetic field measurements on HoverBot’s low-cost microcontroller. We enabled HoverBot with successful-movement, rotation, and collision sensing capabilities by utilising its single Hall-effect sensor. We discuss how our classification method could be applied to other sensors to increase a robot’s functionality while retaining its cost.

  12. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery.

    Science.gov (United States)

    Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott

    2011-01-01

This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms - maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC) - were explored. The results indicated that incorporating vegetation indices as extra bands alongside the Landsat TM multispectral bands did not improve the overall classification performance, but incorporating textural images was valuable for improving vegetation classification accuracy. In particular, incorporating both vegetation indices and textural images with the TM multispectral bands improved overall classification accuracy by 5.6% and the kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN had poor classification performance in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that the use of textural images or of OBC is especially valuable for improving vegetation classes such as upland and liana forest, which have complex stand structures and relatively large patch sizes.
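The overall accuracy and kappa coefficient cited throughout these records are computed from a confusion matrix. A minimal sketch (with an invented 2-class matrix for illustration):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement (overall accuracy)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # agreement expected by chance
    return po, (po - pe) / (1 - pe)

cm = [[50, 5],    # reference class 0: 50 correct, 5 confused
      [10, 35]]   # reference class 1: 10 confused, 35 correct
oa, kappa = accuracy_and_kappa(cm)   # oa = 0.85, kappa ≈ 0.694
```

Kappa discounts chance agreement, which is why a 5.6% accuracy gain can correspond to a different (here 6.25%) kappa gain.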

  13. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    International Nuclear Information System (INIS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-01-01

This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction mainly faced two difficulties. 1) It is difficult to obtain an accurate position of the water boundary when using low resolution images. 2) Like all other image-based object classification problems, the phenomena of ''different objects, same image'' or ''different images, same object'' affect water extraction. Shadow from elevated objects (e.g. buildings, bridges, towers and trees) scattered in the remote sensing image is a typical noise object for water extraction. In many cases, it is difficult to discriminate between water and shadow in a remote sensing image, especially in the urban region. We propose a water extraction method with two hierarchies: the statistical feature of spectral characteristics based on image segmentation, and the shape feature based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. The SRM includes two key steps: one is sorting adjacent regions according to a pre-ascertained sort function, and the other is merging adjacent regions based on a pre-ascertained merging predicate. The sort step is done only once during the whole processing, without considering changes caused by merging, which may cause imprecise results. Therefore, we modify the SRM with dynamic sort processing, which repeats the sorting step when there are large adjacent-region changes after merging. To achieve robust segmentation, we perform the region merging with six features (the four remote sensing image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-value Difference Index (NSVDI)). All these features contribute to segmenting the image into object regions. NDWI and NSVDI discriminate between water and
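The NDWI used as one of the six features above has a standard closed form, (G - NIR) / (G + NIR). A minimal sketch with invented reflectance values:

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters' Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + eps)

# Toy 2x2 reflectance patches (illustrative values, left column "water-like")
green = np.array([[0.30, 0.10], [0.25, 0.05]])
nir   = np.array([[0.05, 0.40], [0.06, 0.35]])
index = ndwi(green, nir)
water_mask = index > 0.0   # positive NDWI is commonly taken to indicate water
```

Water reflects strongly in green and absorbs NIR, so its NDWI is positive, while vegetation and bare soil come out negative; shadow, as the abstract notes, can mimic the water signature, which is why the shape-based second hierarchy is needed.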

  14. Convolutional deep belief network with feature encoding for classification of neuroblastoma histological images

    Directory of Open Access Journals (Sweden)

    Soheila Gheisari

    2018-01-01

Full Text Available Background: Neuroblastoma is the most common extracranial solid tumor in children younger than 5 years old. Optimal management of neuroblastic tumors depends on many factors, including histopathological classification. The gold standard for classification of neuroblastoma histological images is visual microscopic assessment. In this study, we propose and evaluate a deep learning approach to classify high-resolution digital images of neuroblastoma histology into five different classes determined by the Shimada classification. Subjects and Methods: We apply a combination of a convolutional deep belief network (CDBN) with a feature encoding algorithm that automatically classifies digital images of neuroblastoma histology into five different classes. We design a three-layer CDBN to extract high-level features from neuroblastoma histological images and combine it with a feature encoding model to extract features that are highly discriminative in the classification task. The extracted features are classified into five different classes using a support vector machine classifier. Data: We constructed a dataset of 1043 neuroblastoma histological images, acquired with an Aperio scanner, from 125 patients representing different classes of neuroblastoma tumors. Results: A weighted average F-measure of 86.01% was obtained from the selected high-level features, outperforming state-of-the-art methods. Conclusion: The proposed computer-aided classification system, which uses the combination of a deep architecture and feature encoding to learn high-level features, is highly effective in the classification of neuroblastoma histological images.

  15. Classification of ADHD children through multimodal Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

Dai Dai

    2012-09-01

Full Text Available Attention deficit/hyperactivity disorder (ADHD) is one of the most common diseases in school-age children. To date, the diagnosis of ADHD is mainly subjective, and studies of objective diagnostic methods are of great importance. Although many efforts have been made recently to investigate the use of structural and functional brain images for diagnostic purposes, few of them are related to ADHD. In this paper, we introduce an automatic classification framework based on brain imaging features of ADHD patients, and present in detail the feature extraction, feature selection and classifier training methods. The effects of using different features are compared against each other. In addition, we integrate multimodal image features using multi-kernel learning (MKL). The performance of our framework has been validated in the ADHD-200 Global Competition, a world-wide classification contest on the ADHD-200 datasets. In this competition, our classification framework using features of resting-state functional connectivity was ranked 6th out of 21 participants under the competition scoring policy, and performed the best in terms of sensitivity and J-statistic.

  16. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    Science.gov (United States)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins to add new capabilities/functionalities integrated in Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated in a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching classes on remote sensing and image processing were lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.

  17. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving accuracy in diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  18. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving accuracy in diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
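The instance-to-bag pipeline described in the two records above (sparse-code each instance over a dictionary, then pool the codes into one bag vector) can be sketched with a tiny greedy sparse coder. This is an illustration under assumptions, not the authors' method: the dictionary and bag are random toy data, orthogonal matching pursuit stands in for whichever sparse solver the paper used, and max-pooling is one common pooling choice.

```python
import numpy as np

def omp(D, x, n_nonzero=2):
    """Greedy orthogonal matching pursuit: sparse code of x over the columns of D."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # refit on support
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(2)
D = rng.normal(size=(8, 6))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
bag = rng.normal(size=(3, 8))             # a "bag" of 3 instance feature vectors
codes = np.array([omp(D, inst, 2) for inst in bag])
bag_feature = np.abs(codes).max(axis=0)   # max-pool instance codes into one bag vector
```

The pooled `bag_feature` is what a bag-level classifier (the RVM in the paper) would then consume.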

  19. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws’ masks to characterize emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, as well as one self-recorded database (KHUSC-EmoDB), to evaluate the cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification performance for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond what is conveyed by pitch and formant tracks. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
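The Laws' masks mentioned above are fixed 5x5 filters formed as outer products of four standard 1-D vectors (Level, Edge, Spot, Ripple). A minimal sketch that generates all 16 masks; applying them to the spectrogram image and averaging the absolute responses yields the texture-energy features:

```python
import numpy as np

# Standard 1-D Laws vectors: Level, Edge, Spot, Ripple
L5 = np.array([1, 4, 6, 4, 1])
E5 = np.array([-1, -2, 0, 2, 1])
S5 = np.array([-1, 0, 2, 0, -1])
R5 = np.array([1, -4, 6, -4, 1])

def laws_masks():
    """All 16 5x5 Laws texture masks, built as outer products of the 1-D vectors."""
    vs = {'L5': L5, 'E5': E5, 'S5': S5, 'R5': R5}
    return {a + b: np.outer(va, vb) for a, va in vs.items() for b, vb in vs.items()}

masks = laws_masks()
# e.g. masks['E5L5'] responds to horizontal edge structure; the 'L5L5' mask is a
# smoothing kernel usually used only for normalisation.
```

How the paper combines the 16 responses into its TII feature is not specified here; this shows only the mask construction.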

  20. INTEGRATION OF SPATIAL INFORMATION WITH COLOR FOR CONTENT RETRIEVAL OF REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Bikesh Kumar Singh

    2010-08-01

Full Text Available There has been a rapid increase in databases of remote sensing images in the last few years due to high-resolution imaging satellites, commercial applications of remote sensing, and greater available bandwidth. The problem of content-based image retrieval (CBIR) of remotely sensed images presents a major challenge not only because of the rapidly increasing volume of images acquired from a wide range of sensors but also because of the complexity of the images themselves. In this paper, a software system for content-based retrieval of remote sensing images using the RGB and HSV color spaces is presented. Further, we also compare our results with spatiogram-based content retrieval, which integrates spatial information along with the color histogram. Experimental results show that the integration of spatial information with color improves the image analysis of remote sensing data. In general, retrieval in the HSV color space showed better performance than in the RGB color space.
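The RGB-to-HSV conversion underlying the better-performing color space can be done per pixel with Python's standard library; a retrieval system would then histogram the HSV values instead of raw RGB. A minimal sketch:

```python
import colorsys

def rgb_to_hsv_pixel(r, g, b):
    """Convert one 8-bit RGB pixel to HSV, with H in degrees and S, V in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

h, s, v = rgb_to_hsv_pixel(255, 0, 0)   # pure red -> (0.0, 1.0, 1.0)
```

HSV separates chromaticity (H, S) from intensity (V), which is one common explanation for why HSV histograms retrieve scenes more robustly than RGB histograms under illumination changes.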

  1. Supervised Self-Organizing Classification of Superresolution ISAR Images: An Anechoic Chamber Experiment

    Directory of Open Access Journals (Sweden)

    Radoi Emanuel

    2006-01-01

Full Text Available The problem of the automatic classification of superresolution ISAR images is addressed in the paper. We describe an anechoic chamber experiment involving ten scale-reduced aircraft models. The radar images of these targets are reconstructed using the MUSIC-2D (multiple signal classification) method coupled with two additional processing steps: phase unwrapping and symmetry enhancement. A feature vector is then proposed, including Fourier descriptors and moment invariants, which are calculated from the target shape and the scattering center distribution extracted from each reconstructed image. The classification is finally performed by a new self-organizing neural network called SART (supervised ART), which is compared to two standard classifiers, MLP (multilayer perceptron) and fuzzy KNN (k nearest neighbors). While the classification accuracy is similar, SART is shown to outperform the two other classifiers in terms of training speed and classification speed, especially for large databases. It is also easier to use since it does not require any input parameter related to its structure.

  2. Automatic Segmentation of Dermoscopic Images by Iterative Classification

    Directory of Open Access Journals (Sweden)

    Maciel Zortea

    2011-01-01

Full Text Available Accurate detection of the borders of skin lesions is a vital first step for computer-aided diagnostic systems. This paper presents a novel automatic approach to segmentation of skin lesions that is particularly suitable for analysis of dermoscopic images. Assumptions about the image acquisition, in particular the approximate location and color, are used to derive an automatic rule to select small seed regions likely to correspond to samples of skin and of the lesion of interest. The seed regions are used as initial training samples, and the lesion segmentation problem is treated as a binary classification problem. An iterative hybrid classification strategy, based on a weighted combination of estimated posteriors of a linear and a quadratic classifier, is used to update both the automatically selected training samples and the segmentation, increasing reliability and final accuracy, especially for those challenging images where the contrast between the background skin and the lesion is low.

  3. Integrated ancillary and remote sensing data for land use ...

    African Journals Online (AJOL)

    Full Name

The application of GMM to remote sensing image classification ... A . The boundary that has a Mahalanobis distance to the centre ... yields Bayes' theorem: ..... bands were extracted using the layer properties tool and visualised in MATLAB ...

  4. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer which contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and slightly improves the classification accuracy. In addition, deep learning techniques such as the ReLU activation are utilized in this paper. We conducted experiments on the University of Pavia and Salinas datasets, and obtained better classification accuracy compared with other methods.

  5. Experimental study on multi-sub-classifier for land cover classification: a case study in Shangri-La, China

    Science.gov (United States)

    Wang, Yan-ying; Wang, Jin-liang; Wang, Ping; Hu, Wen-yin; Su, Shao-hua

    2015-12-01

High-accuracy remote sensing image classification is a long-term and continuously pursued goal of remote sensing applications. In order to evaluate the accuracy of single classification algorithms, a Landsat TM image was taken as the data source and Northwest Yunnan as the study area, and seven land cover classification algorithms, such as Maximum Likelihood Classification, were tested. The results show that: (1) the overall classification accuracies of Maximum Likelihood Classification (MLC), Artificial Neural Network Classification (ANN), and Minimum Distance Classification (MinDC) are higher, at 82.81%, 82.26% and 66.41%, respectively; the overall classification accuracies of Parallel Hexahedron Classification (Para), Spectral Information Divergence Classification (SID), and Spectral Angle Classification (SAM) are low, at 37.29%, 38.37% and 53.73%, respectively. (2) In terms of per-category accuracy: although the overall accuracy of Para is the lowest, it is much higher on grasslands, wetlands, forests and airport land, at 89.59%, 94.14%, and 89.04%, respectively; SAM and SID are good at forest classification, with higher accuracies of 89.8% and 87.98%, respectively. Although the overall classification accuracy of ANN is very high, its classification accuracy for road, rural residential land and airport land is very low, at 10.59%, 11% and 11.59%, respectively. The other classification methods have their own advantages and disadvantages. These results show that, under the same conditions, when the same image is classified with different methods, one classifier will have higher accuracy for some features and another classifier for other objects; therefore, we may select multi-sub-classifier integration to improve the classification accuracy.
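Among the algorithms compared above, minimum distance classification is the simplest to state: assign each pixel to the class whose mean spectral vector is nearest. A minimal sketch with invented 2-band class means:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel vector to the class with the nearest (Euclidean) mean."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy 2-band means for "water", "forest", "urban" (illustrative values only)
means = np.array([[0.05, 0.02],
                  [0.04, 0.45],
                  [0.25, 0.30]])
pixels = np.array([[0.06, 0.03],
                   [0.05, 0.40],
                   [0.24, 0.28]])
print(minimum_distance_classify(pixels, means))   # [0 1 2]
```

MLC differs by weighting distances with per-class covariances (Mahalanobis distance plus a log-determinant term), which is why it usually outperforms the plain Euclidean rule, as the accuracies above reflect.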

  6. Gaze Embeddings for Zero-Shot Image Classification

    NARCIS (Netherlands)

    Karessli, N.; Akata, Z.; Schiele, B.; Bulling, A.

    2017-01-01

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have

  7. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop an extensively applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to only one specific imaging device and a relatively narrow taxonomic scope. A truly practical system for automatic plankton classification does not yet exist, and this study partly fills this gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). For one thing, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, especially by adding features like the Inner-Distance Shape Context for morphological representation. For another, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by optimally combining the different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we also applied a feature selection method to choose the optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets across more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system

  8. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

Xia JING; Yan BAO

    2015-01-01

Different fusion algorithms have their own advantages and limitations, so it is very difficult to simply evaluate the strong and weak points of a fusion algorithm. Whether an algorithm is selected to fuse object images also depends upon the sensor types and the specific research purpose. Firstly, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Then visual judgment and quantitative statistical parameters are used to assess the five algorithms. Finally, in order to determine which is the most suitable fusion method for land cover classification of IKONOS images, maximum likelihood classification (MLC) was applied to the above five fusion images. The results showed that the fusion effects of the SFIM and Gram-Schmidt transforms were better than those of the other three image fusion methods in spatial detail improvement and spectral information fidelity, and the Gram-Schmidt technique was superior to the SFIM transform in expressing image details. The classification accuracy of the images fused using the Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods, and the overall accuracy was greater than 98%. The IHS-fused image classification accuracy was the lowest; the overall accuracy and kappa coefficient were 83.14% and 0.76, respectively. Thus the IKONOS fusion images obtained by Gram-Schmidt and SFIM were better for improving land cover classification accuracy.
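Of the five fusion methods compared above, the Brovey transform has the simplest closed form: each multispectral band is rescaled by the ratio of the panchromatic band to the mean of the multispectral bands. A minimal sketch with random toy data (the band count and values are illustrative; real use requires the MS bands upsampled to the pan resolution):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey pan-sharpening: scale each MS band by pan / mean(MS bands)."""
    ms = ms.astype(float)
    intensity = ms.mean(axis=0)                   # (H, W) average of the MS bands
    return ms * (pan / (intensity + eps))[None, :, :]

rng = np.random.default_rng(3)
ms = rng.uniform(0.1, 1.0, size=(3, 4, 4))        # 3 low-resolution bands (upsampled)
pan = rng.uniform(0.1, 1.0, size=(4, 4))          # high-resolution panchromatic band
fused = brovey_fuse(ms, pan)
# The band ratios are preserved: the fused bands keep the MS spectral proportions
# while adopting the pan band's spatial detail (mean of fused bands equals pan).
```

This ratio form also explains the spectral distortion that ratio-based methods like Brovey and IHS can introduce, consistent with IHS scoring lowest in the comparison above.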

  9. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  10. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery

    Directory of Open Access Journals (Sweden)

    Ming-Der Yang

    2017-06-01

Full Text Available Rice lodging identification relies on manual in situ assessment and often leads to a compensation dispute in agricultural disaster assessment. Therefore, this study proposes a comprehensive and efficient classification technique for agricultural lands that entails using unmanned aerial vehicle (UAV) imagery. In addition to spectral information, digital surface model (DSM) and texture information of the images was obtained through image-based modeling and texture analysis. Moreover, single feature probability (SFP) values were computed to evaluate the contribution of spectral and spatial hybrid image information to classification accuracy. The SFP results revealed that texture information was beneficial for the classification of rice and water, DSM information was valuable for lodging and tree classification, and the combination of texture and DSM information was helpful in distinguishing between artificial surface and bare land. Furthermore, a decision tree classification model incorporating SFP values yielded optimal results, with an accuracy of 96.17% and a Kappa value of 0.941, compared with that of a maximum likelihood classification model (90.76%). The rice lodging ratio in paddies at the study site was successfully identified, with three paddies being eligible for disaster relief. The study demonstrated that the proposed spatial and spectral hybrid image classification technology is a promising tool for rice lodging assessment.

  11. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Science.gov (United States)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher,2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al.,2008] and Dual Disperser (CASSI-DD) [Gehm et al.,2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
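The paper's Split Bregman TV reconstruction is involved, but the core CS pipeline (few random measurements, then sparse recovery) can be sketched with Orthogonal Matching Pursuit standing in for the authors' algorithm; this is a generic illustration, not their method:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, residual

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix (the "ideal" CS model)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                 # compressive measurements: only m < n numbers

x_rec, residual = omp(A, y, k)
print(np.linalg.norm(residual))
```

The hypothetical random sensing model in the abstract corresponds to the Gaussian `A` used here; the CASSI cameras impose more structured forward models.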

  12. Land-Use and Land-Cover Mapping Using a Gradable Classification Method

    Directory of Open Access Journals (Sweden)

    Keigo Kitada

    2012-05-01

Full Text Available Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that the land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs the original spectral data and lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as “gradable classification method” is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
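The lacunarity index the record relies on has a standard gliding-box definition, Λ(r) = E[m²]/E[m]² over the masses m of all r×r boxes; a minimal sketch for a binary image:

```python
import numpy as np

def lacunarity(binary, r):
    """Gliding-box lacunarity: Lambda(r) = E[m^2] / E[m]^2 = var(m)/mean(m)^2 + 1."""
    h, w = binary.shape
    masses = np.array([binary[i:i + r, j:j + r].sum()
                       for i in range(h - r + 1)
                       for j in range(w - r + 1)], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

uniform = np.ones((16, 16))                 # homogeneous texture
clustered = np.zeros((16, 16))
clustered[:4, :4] = 1                       # same-size mass, but clumped
print(lacunarity(uniform, 4), lacunarity(clustered, 4))  # 1.0 for uniform, > 1 for clumped
```

Higher lacunarity indicates gappier, more heterogeneous texture, which is what distinguishes urban land-cover classes here.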

  13. Classification of Maize in Complex Smallholder Farming Systems Using UAV Imagery

    Directory of Open Access Journals (Sweden)

    Ola Hall

    2018-06-01

Full Text Available Yield estimates and yield gap analysis are important for identifying poor agricultural productivity. Remote sensing holds great promise for measuring yield and thus determining yield gaps. Farming systems in sub-Saharan Africa (SSA) are commonly characterized by small field size, intercropping, different crop species with similar phenologies, and sometimes high cloud frequency during the growing season, all of which pose real challenges to remote sensing. Here, an unmanned aerial vehicle (UAV) system based on a quadcopter equipped with two consumer-grade cameras was used for the delineation and classification of maize plants on smallholder farms in Ghana. Object-oriented image classification methods were applied to the imagery, combined with measures of image texture and intensity, hue, and saturation (IHS), in order to achieve delineation. It was found that the inclusion of a near-infrared (NIR) channel and red–green–blue (RGB) spectra, in combination with texture or IHS, increased the classification accuracy for both single and mosaic images to above 94%. Thus, the system proved suitable for delineating and classifying maize using RGB and NIR imagery and calculating the vegetation fraction, an important parameter in producing yield estimates for heterogeneous smallholder farming systems.
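An IHS decomposition such as the one combined with texture above can be sketched as follows; this is one common RGB-to-IHS variant, since the exact transform used in the study is not specified:

```python
import numpy as np

def rgb_to_ihs(rgb):
    """Intensity-Hue-Saturation from RGB in [0, 1] (one common IHS variant)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                    # intensity
    s = np.where(i > 0,
                 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12),
                 0.0)                                        # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))             # hue angle in radians
    h = np.where(b > g, 2 * np.pi - h, h)
    return i, h, s

pixels = np.array([[0.2, 0.8, 0.1],    # vegetation-like green pixel
                   [0.5, 0.5, 0.5]])   # gray (achromatic) pixel
i, h, s = rgb_to_ihs(pixels)
print(i, s)   # green pixel is strongly saturated; gray pixel has zero saturation
```

Saturation and hue are what help separate green maize canopy from soil and shadow in the RGB imagery.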

  14. Object-oriented classification using quasi-synchronous multispectral images (optical and radar) over agricultural surface

    Science.gov (United States)

    Marais Sicre, Claire; Baup, Frederic; Fieuzal, Remy

    2015-04-01

In the context of climate change (with consequences for temperature and precipitation patterns), agricultural managers must combine sufficient productivity (in response to rising food demand) with durability of resources (limiting waste of water and fertilizer, and environmental damage). To this end, a detailed knowledge of land use will improve the management of food and water, while preserving the ecosystems. Among the wide range of available monitoring tools, numerous studies have demonstrated the interest of satellite images for agricultural mapping. Recently, the launch of several radar and optical sensors (Terrasar-X, Radarsat-2, Sentinel-1, Landsat-8…) has offered new perspectives for multi-wavelength crop monitoring, allowing surface surveys regardless of cloud conditions. Previous studies have demonstrated the interest of multi-temporal approaches for crop classification, which require several images for suitable classification results. Unfortunately, these approaches are limited by the satellite orbit cycle and require waiting several days, weeks, or months before an accurate land use map can be offered. The objective of this study is to compare the accuracy of object-oriented classification (a random forest algorithm combined with a vector layer coming from segmentation) for mapping winter crops (barley, rapeseed, grasslands and wheat) and soil states (bare soils with different surface roughness) using quasi-synchronous images. Satellite data are composed of multi-frequency and multi-polarization (HH, VV, HV and VH) images acquired near the 14th of April, 2010, over a study area (90 km²) located close to Toulouse in France. This is a region of alluvial plains and hills, mostly under mixed farming and governed by a temperate climate. Remote sensing images are provided by Formosat-2 (04/18), Radarsat-2 (C-band, 04/15), Terrasar-X (X-band, 04/14) and ALOS (L-band, 04/14). Ground data are collected
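The object-oriented classification step, a random forest over per-object features stacked from optical and radar images, can be sketched with scikit-learn; the feature names (NDVI, HH/VV backscatter) and class values below are hypothetical synthetic data, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Hypothetical per-object features stacked from optical and radar imagery:
# [NDVI, HH backscatter (dB), VV backscatter (dB)] for three crop/soil classes.
n_per = 40
scale = 0.05 * np.array([1, 10, 10])
wheat  = rng.normal([0.7, -12.0, -10.0], scale, (n_per, 3))
barley = rng.normal([0.5,  -9.0,  -7.0], scale, (n_per, 3))
bare   = rng.normal([0.1,  -5.0,  -4.0], scale, (n_per, 3))
X = np.vstack([wheat, barley, bare])
y = np.repeat([0, 1, 2], n_per)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))   # well-separated classes are fit essentially perfectly
```

In the study the "objects" come from image segmentation, so each row would hold the mean features of a segment rather than of a pixel.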

  15. Improving settlement type classification of aerial images

    CSIR Research Space (South Africa)

    Mdakane, L

    2014-10-01

    Full Text Available , an automated method can be used to help identify human settlements in a fixed, repeatable and timely manner. The main contribution of this work is to improve generalisation on settlement type classification of aerial imagery. Images acquired at different dates...

  16. Reliable clarity automatic-evaluation method for optical remote sensing images

    Science.gov (United States)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

Image clarity, which reflects the sharpness degree at the edges of objects in images, is an important quality evaluation index for optical remote sensing images. Researchers have done a great deal of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is an effective approach to clarity estimation, but it requires the edges to be picked out manually. Because of these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method, based on the principle of the edge acutance algorithm, is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images. Moreover, the calculation algorithm for edge sharpness has been improved. The new method has been tested with several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
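A gradient-energy clarity score in the spirit of the gradient-function family can be sketched as follows (a Tenengrad-style measure, not the authors' improved edge-acutance algorithm):

```python
import numpy as np

def tenengrad(img):
    """Gradient-energy clarity score: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):           # plain loops for clarity; use a conv in practice
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return float((gx ** 2 + gy ** 2).mean())

sharp = np.tile(np.repeat([0.0, 1.0], 8), (16, 1))       # crisp vertical edge
blurred = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))    # same edge, smeared out
print(tenengrad(sharp), tenengrad(blurred))              # the crisp edge scores higher
```

Squaring the gradients concentrates the score on abrupt transitions, which is why a sharp edge outscores the same intensity change spread over many pixels.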

  17. Acquisition of STEM Images by Adaptive Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash; Stevens, Andrew; Browning, Nigel D.

    2017-07-01

Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At 20% level, all the three active learning methods underperform the original CS without active learning. However
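The adaptive selection of the next "most informative" pixels can be sketched with the first of the three metrics, variance; this is a simplified stand-in that ranks unsampled pixels by the local variance of the current partial reconstruction:

```python
import numpy as np

def next_pixels(recon, sampled_mask, frac=0.01, win=2):
    """Rank unsampled pixels by local variance of the current reconstruction
    and return the indices of the top `frac` most informative ones."""
    h, w = recon.shape
    var_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = recon[max(0, i - win):i + win + 1, max(0, j - win):j + win + 1]
            var_map[i, j] = patch.var()
    var_map[sampled_mask] = -1.0                  # never re-select measured pixels
    k = max(1, int(frac * h * w))
    flat = np.argsort(var_map.ravel())[::-1][:k]  # top-k by variance
    return np.unravel_index(flat, (h, w))

rng = np.random.default_rng(3)
recon = np.zeros((20, 20))
recon[:, 10:] = rng.normal(0, 1, (20, 10))        # structured/busy right half
mask = np.zeros((20, 20), dtype=bool)
rows, cols = next_pixels(recon, mask, frac=0.05)
print(cols.min())   # all selected pixels fall near the high-variance half
```

After measuring the new pixels, the reconstruction is updated (BPFA in the paper) and the loop repeats, which is what makes the scheme online and sequential.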

  18. Classification of MR brain images by combination of multi-CNNs for AD diagnosis

    Science.gov (United States)

    Cheng, Danni; Liu, Manhua; Fu, Jianliang; Wang, Yaping

    2017-07-01

Alzheimer's disease (AD) is an irreversible neurodegenerative disorder with progressive impairment of memory and cognitive functions. Its early diagnosis is crucial for development of future treatment. Magnetic resonance images (MRI) play an important role in helping understand the brain anatomical changes related to AD. Conventional methods extract hand-crafted features such as gray matter volumes and cortical thickness and train a classifier to distinguish AD from other groups. Different from these methods, this paper proposes to construct multiple deep 3D convolutional neural networks (3D-CNNs) to learn the various features from local brain images which are combined to make the final classification for AD diagnosis. First, a number of local image patches are extracted from the whole brain image and a 3D-CNN is built upon each local patch to transform the local image into more compact high-level features. Then, the upper convolution and fully connected layers are fine-tuned to combine the multiple 3D-CNNs for image classification. The proposed method can automatically learn the generic features from imaging data for classification. Our method is evaluated using T1-weighted structural MR brain images on 428 subjects including 199 AD patients and 229 normal controls (NC) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 87.15% and an AUC (area under the ROC curve) of 92.26% for AD classification, demonstrating the promising classification performances.

  19. Remote Sensing Image Registration with Line Segments and Their Intersections

    Directory of Open Access Journals (Sweden)

    Chengjin Lyu

    2017-05-01

    Full Text Available Image registration is a basic but essential step for remote sensing image processing, and finding stable features in multitemporal images is one of the most considerable challenges in the field. The main shape contours of artificial objects (e.g., roads, buildings, farmlands, and airports can be generally described as a group of line segments, which are stable features, even in images with evident background changes (e.g., images taken before and after a disaster. In this study, a registration method that uses line segments and their intersections is proposed for multitemporal remote sensing images. First, line segments are extracted in image pyramids to unify the scales of the reference image and the test image. Then, a line descriptor based on the gradient distribution of local areas is constructed, and the segments are matched in image pyramids. Lastly, triplets of intersections of matching lines are selected to estimate affine transformation between two images. Additional corresponding intersections are provided based on the estimated transformation, and an iterative process is adopted to remove outliers. The performance of the proposed method is tested on a variety of optical remote sensing image pairs, including synthetic and real data. Compared with existing methods, our method can provide more accurate registration results, even in images with significant background changes.
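The affine estimation from triplets of matched line intersections can be sketched directly; three non-collinear correspondences determine the six affine parameters exactly (the paper's iterative outlier removal is omitted):

```python
import numpy as np

def affine_from_triplet(src, dst):
    """Solve dst = M @ src + t from three matched 2D intersections."""
    A = np.zeros((6, 6))
    b = dst.ravel()                      # [x0', y0', x1', y1', x2', y2']
    for i, (x, y) in enumerate(src):
        A[2 * i]     = [x, y, 1, 0, 0, 0]   # x' = a*x + b*y + tx
        A[2 * i + 1] = [0, 0, 0, x, y, 1]   # y' = c*x + d*y + ty
    p = np.linalg.solve(A, b)
    return p.reshape(2, 3)               # [[a, b, tx], [c, d, ty]]

M_true = np.array([[1.2, 0.1, 5.0],
                   [-0.2, 0.9, -3.0]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # intersections in the test image
dst = src @ M_true[:, :2].T + M_true[:, 2]               # their matches in the reference
M_est = affine_from_triplet(src, dst)
print(M_est)
```

In the full method, many triplets are tried, the transformation with the most consistent additional intersections is kept, and outliers are removed iteratively.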

  20. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

Full Text Available To solve the poor generalization and flexibility problems that single kernel SVM classifiers have while classifying combined spectral and spatial features, this paper proposed a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out, where the single radial basis function- (SRBF-) based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land uses of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.
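The combined radial basis kernel (CRBF) construction, a weighted sum of a spectral RBF kernel and a textural RBF kernel, can be sketched as follows; the weight and gamma values are illustrative, and a convex combination of positive semi-definite kernels is itself a valid SVM kernel:

```python
import numpy as np

def rbf(X, Y, gamma):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_rbf(X_spec, X_tex, w=0.6, g_spec=0.5, g_tex=2.0):
    """CRBF idea: weighted combination of spectral and textural RBF kernels."""
    return w * rbf(X_spec, X_spec, g_spec) + (1 - w) * rbf(X_tex, X_tex, g_tex)

rng = np.random.default_rng(4)
spec = rng.normal(0, 1, (5, 10))   # per-sample spectral features
tex = rng.normal(0, 1, (5, 4))     # per-sample texture features
K = combined_rbf(spec, tex)
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
```

The weight `w` and both gammas are exactly the kernel parameters that the paper tunes with a genetic algorithm.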

  1. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon

    2012-07-01

Full Text Available IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22-27 July 2012: A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images, B.P. Salmon (abstract only).

  2. Use of UAV-Borne Spectrometer for Land Cover Classification

    Directory of Open Access Journals (Sweden)

    Sowmya Natesan

    2018-04-01

Full Text Available Unmanned aerial vehicles (UAV) are being used for low altitude remote sensing for thematic land classification using visible light and multi-spectral sensors. The objective of this work was to investigate the use of a UAV equipped with a compact spectrometer for land cover classification. The UAV platform used was a DJI Flamewheel F550 hexacopter equipped with GPS and Inertial Measurement Unit (IMU) navigation sensors, and a Raspberry Pi processor and camera module. The spectrometer used was the FLAME-NIR, a near-infrared spectrometer for hyperspectral measurements. RGB images and spectrometer data were captured simultaneously. As spectrometer data do not provide continuous terrain coverage, the locations of their ground elliptical footprints were determined from the bundle adjustment solution of the captured images. For each of the spectrometer ground ellipses, the land cover signature at the footprint location was determined to enable the characterization, identification, and classification of land cover elements. To attain a continuous land cover classification map, spatial interpolation was carried out from the irregularly distributed labeled spectrometer points. The accuracy of the classification was assessed using spatial intersection with the object-based image classification performed using the RGB images. Results show that in homogeneous land cover, like water, the accuracy of classification is 78% and in mixed classes, like grass, trees and manmade features, the average accuracy is 50%, thus, indicating the contribution of hyperspectral measurements of low altitude UAV-borne spectrometers to improve land cover classification.
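The spatial interpolation from irregularly distributed labelled footprints to a continuous map can be sketched with nearest-neighbour assignment; the study does not name its interpolation method, so this, and the footprint positions and class labels below, are assumptions:

```python
import numpy as np

def nn_label_map(points, labels, shape):
    """Nearest-neighbour interpolation of sparse labelled spectrometer
    footprints into a continuous classification map."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # squared distances
    return labels[np.argmin(d2, axis=1)].reshape(shape)

pts = np.array([[2.0, 2.0], [2.0, 17.0], [17.0, 10.0]])  # footprint centres (row, col)
labs = np.array([0, 1, 2])                               # e.g. water, grass, trees
cmap = nn_label_map(pts, labs, (20, 20))
print(cmap[2, 2], cmap[2, 17], cmap[17, 10])             # → 0 1 2
```

Every map cell inherits the class of its closest labelled footprint, which is the simplest way to turn sparse point labels into full coverage.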

  3. Three-dimensional imaging of acetabular dysplasia: diagnostic value and impact on surgical type classification

    Energy Technology Data Exchange (ETDEWEB)

    Smet, Maria-Helena E-mail: marleen.smet@uz.kuleuven.ac.be; Marchal, Guy J.; Baert, Albert L.; Hoe, Lieven van; Cleynenbreugel, Johan van; Daniels, Hans; Molenaers, Guy; Moens, Pierre; Fabry, Guy

    2000-04-01

    Objective: To investigate the diagnostic value and the impact on surgical type classification of three-dimensional (3D) images for pre-surgical evaluation of dysplastic hips. Materials and methods: Three children with a different surgical type of hip dysplasia were investigated with helical computed tomography. For each patient, two-dimensional (2D) images, 3D, and a stereolithographic model of the dysplastic hip were generated. In two separate sessions, 40 medical observers independently analyzed the 2D images (session 1), the 2D and 3D images (session 2), and tried to identify the corresponding stereolithographic hip model. The influence of both image presentation (2D versus 3D images) and observer (degree of experience, radiologist versus orthopedic surgeon) were statistically analyzed. The SL model choice reflected the impact on surgical type classification. Results: Image presentation was a significant factor whereas the individual observer was not. Three-dimensional images scored significantly better than 2D images (P=0.0003). Three-dimensional imaging increased the correct surgical type classification by 35%. Conclusion: Three-dimensional images significantly improve the pre-surgical diagnostic assessment and surgical type classification of dysplastic hips.

  4. Three-dimensional imaging of acetabular dysplasia: diagnostic value and impact on surgical type classification

    International Nuclear Information System (INIS)

    Smet, Maria-Helena; Marchal, Guy J.; Baert, Albert L.; Hoe, Lieven van; Cleynenbreugel, Johan van; Daniels, Hans; Molenaers, Guy; Moens, Pierre; Fabry, Guy

    2000-01-01

    Objective: To investigate the diagnostic value and the impact on surgical type classification of three-dimensional (3D) images for pre-surgical evaluation of dysplastic hips. Materials and methods: Three children with a different surgical type of hip dysplasia were investigated with helical computed tomography. For each patient, two-dimensional (2D) images, 3D, and a stereolithographic model of the dysplastic hip were generated. In two separate sessions, 40 medical observers independently analyzed the 2D images (session 1), the 2D and 3D images (session 2), and tried to identify the corresponding stereolithographic hip model. The influence of both image presentation (2D versus 3D images) and observer (degree of experience, radiologist versus orthopedic surgeon) were statistically analyzed. The SL model choice reflected the impact on surgical type classification. Results: Image presentation was a significant factor whereas the individual observer was not. Three-dimensional images scored significantly better than 2D images (P=0.0003). Three-dimensional imaging increased the correct surgical type classification by 35%. Conclusion: Three-dimensional images significantly improve the pre-surgical diagnostic assessment and surgical type classification of dysplastic hips

  5. Development of a Regional Habitat Classification Scheme for the ...

    African Journals Online (AJOL)

    development, image processing techniques and field survey methods are outlined. Habitat classification, and regional-scale comparisons of relative habitat composition are described. The study demonstrates the use of remote sensing data to construct digital habitat maps for the comparison of regional habitat coverage, ...

  6. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images and benign/malignant clusters of microcalcifications (MCs) classification in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity was obtained (with p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
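The mutual information criterion for ranking visual words can be sketched for binary word occurrences and binary class labels; this is a simplification of the paper's task-driven dictionary learning, intended only to show why an informative word scores high:

```python
import numpy as np

def mutual_information(word, label):
    """MI (in bits) between a binary visual-word occurrence and a binary class label."""
    mi = 0.0
    for w in (0, 1):
        for c in (0, 1):
            p_wc = np.mean((word == w) & (label == c))
            p_w, p_c = np.mean(word == w), np.mean(label == c)
            if p_wc > 0:
                mi += p_wc * np.log2(p_wc / (p_w * p_c))
    return mi

label = np.array([0, 0, 0, 0, 1, 1, 1, 1])
informative = label.copy()                          # fires exactly on the positive class
uninformative = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # independent of the class
print(mutual_information(informative, label),
      mutual_information(uninformative, label))
```

Keeping only the top-MI words per task is what shrinks the dictionary to the task-relevant vocabulary.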

  7. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.
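A prematurity index for detecting GA convergence can be sketched as a normalized fitness spread; this is an assumed stand-in, since the abstract does not give the paper's exact definition:

```python
import numpy as np

def prematurity_index(fitness):
    """Low fitness spread signals premature convergence of the GA population
    (an assumed stand-in for the paper's prematurity index)."""
    f = np.asarray(fitness, dtype=float)
    return f.std() / (abs(f.mean()) + 1e-12)

diverse = [0.2, 0.9, 0.4, 0.7, 0.55]          # healthy, still-exploring population
converged = [0.80, 0.81, 0.79, 0.80, 0.80]    # population collapsed onto one solution
print(prematurity_index(diverse), prematurity_index(converged))
# When the index drops below a threshold, the GATS scheme runs tabu search on the
# fittest individuals and raises the mutation probability for the rest.
```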

  8. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Science.gov (United States)

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721

  9. Hyperspectral Image Classification Using Kernel Fukunaga-Koontz Transform

    Directory of Open Access Journals (Sweden)

    Semih Dinç

    2013-01-01

images. In the experiments section, the improved performance of the HSI classification technique, K-FKT, has been tested against other methods such as the classical FKT and three types of support vector machines (SVMs).

  10. Temporal Data Fusion Approaches to Remote Sensing-Based Wetland Classification

    Science.gov (United States)

    Montgomery, Joshua S. M.

This thesis investigates the ecology of wetlands and associated classification in prairie and boreal environments of Alberta, Canada, using remote sensing technology to enhance classification of wetlands in the province. Objectives of the thesis are divided into two case studies, 1) examining how satellite-borne Synthetic Aperture Radar (SAR) and optical (RapidEye & SPOT) imagery can be used to evaluate surface water trends in a prairie pothole environment (Shepard Slough); and 2) investigating a data fusion methodology combining SAR, optical and Lidar data to characterize wetland vegetation and surface water attributes in a boreal environment (Utikuma Regional Study Area (URSA)). Surface water extent and hydroperiod products were derived from SAR data, and validated using optical imagery with high accuracies (76-97% overall) for both case studies. High resolution Lidar Digital Elevation Model (DEM), Digital Surface Model (DSM), and Canopy Height Model (CHM) products provided the means for data fusion to extract riparian vegetation communities and surface water; producing model accuracies of (R² = 0.90) for URSA, and RMSE of 0.2 m to 0.7 m at Shepard Slough when compared to field and optical validation data. Integration of the Alberta and Canadian wetland classification systems used to classify and determine the economic value of wetlands into the methodology produced thematic maps relevant for policy and decision makers for potential wetland monitoring and policy development.

  11. Autonomy of image and use of single or multiple sense modalities in original verbal image production.

    Science.gov (United States)

    Khatena, J

    1978-06-01

    The use of a single or of multiple sense modalities in the production of original verbal images as related to autonomy of imagery was explored. 72 college adults were administered Onomatopoeia and Images and the Gordon Test of Visual Imagery Control. A modified scoring procedure for the Gordon scale differentiated imagers who were moderate or low in autonomy. The two groups produced original verbal images using multiple sense modalities more frequently than a single modality.

  12. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

With the increase of resolution, remote sensing images are characterized by a greater information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. This method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through the multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation, and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy, and completeness.

  13. An Orthogonal Learning Differential Evolution Algorithm for Remote Sensing Image Registration

    Directory of Open Access Journals (Sweden)

    Wenping Ma

    2014-01-01

Full Text Available We introduce an area-based method for remote sensing image registration. We use an orthogonal learning differential evolution algorithm to optimize the similarity metric between the reference image and the target image. Many local and global methods have been used to achieve the optimal similarity metric in the last few years. Because remote sensing images are usually influenced by large distortions and high noise, local methods will fail in some cases. For this reason, global methods are often required. The orthogonal learning (OL) strategy is efficient when searching in complex problem spaces. In addition, it can discover more useful information via orthogonal experimental design (OED). Differential evolution (DE) is a heuristic algorithm. It has been shown to be efficient in solving the remote sensing image registration problem. So the orthogonal learning differential evolution algorithm (OLDE) is efficient for many optimization problems. The OLDE method uses the OL strategy to guide the DE algorithm to discover more useful information. Experiments show that the OLDE method is more robust and efficient for registering remote sensing images.
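The core area-based idea, differential evolution searching for the transform that optimizes a similarity metric, can be sketched for a pure-translation registration with an SSD metric; this is a plain DE/rand/1/bin without the paper's orthogonal learning step, on a synthetic smooth image:

```python
import numpy as np

yy, xx = np.mgrid[0:16, 0:16]
ref = np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 10.0)   # smooth reference image
target = np.roll(ref, (3, 2), axis=(0, 1))              # same scene, shifted by (3, 2)

def ssd(p):
    """Dissimilarity after shifting `target` by the candidate (row, col) offset."""
    dy, dx = int(round(p[0])), int(round(p[1]))
    return float(((np.roll(target, (dy, dx), axis=(0, 1)) - ref) ** 2).sum())

# Minimal DE/rand/1/bin over the two shift parameters.
rng = np.random.default_rng(5)
lo, hi, npop, F, CR = -5.0, 5.0, 20, 0.7, 0.9
pop = rng.uniform(lo, hi, (npop, 2))
cost = np.array([ssd(p) for p in pop])
for _ in range(60):
    for i in range(npop):
        a, b, c = pop[rng.choice(npop, 3, replace=False)]
        trial = np.clip(a + F * (b - c), lo, hi)        # mutation
        cross = rng.random(2) < CR                      # binomial crossover
        trial = np.where(cross, trial, pop[i])
        t_cost = ssd(trial)
        if t_cost <= cost[i]:                           # greedy selection
            pop[i], cost[i] = trial, t_cost
best = pop[np.argmin(cost)]
print(np.round(best), cost.min())   # the true offset is undone by shifting back (-3, -2)
```

A real registration would optimize a richer transform (rotation, scale) and a metric such as mutual information, and OLDE would additionally apply orthogonal experimental design when generating trial vectors.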

  14. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
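
    The saliency-driven scheme itself is not reproduced here; as background, a minimal sketch of the classic Perona-Malik nonlinear diffusion it builds on, which smooths flat regions while preserving edges (kappa, dt and the iteration count are illustrative assumptions):

```python
import numpy as np

def perona_malik(image, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik diffusion: smooths flat regions, preserves edges."""
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic boundary).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g = exp(-(|grad|/kappa)^2): near 0 at edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion reduces noise while keeping the edge sharp.
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
smoothed = perona_malik(noisy)
print(round(float(np.abs(noisy - img).mean()), 4),
      round(float(np.abs(smoothed - img).mean()), 4))  # error before vs. after
```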

  15. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  16. Crop stress detection and classification using hyperspectral remote sensing

    Science.gov (United States)

    Irby, Jon Trenton

    Agricultural production has seen many changes in technology over the last 20 years. Producers are able to utilize technologies such as site-specific applicators and remotely sensed data to assist with decision making for best management practices, which can improve crop production and protect the environment. It is known that plant stress can interfere with photosynthetic reactions within the plant and/or the physical structure of the plant. Common types of stress associated with agricultural crops include herbicide-induced stress, nutrient stress, and drought stress from lack of water. Herbicide-induced crop stress is not a new problem; however, with increased acreage being planted in varieties/hybrids that contain herbicide-resistance traits, herbicide injury to non-target crops will continue to be problematic for producers. With the rapid adoption of herbicide-tolerant cropping systems, it is likely that herbicide-induced stress will remain a major concern. To date, commercially available herbicide-tolerant varieties/hybrids contain traits which allow herbicides like glyphosate and glufosinate-ammonium to be applied as a broadcast application during the growing season. Both glyphosate and glufosinate-ammonium are broad-spectrum herbicides with activity on a large number of plant species, including major crops like non-transgenic soybean, corn, and cotton. Therefore, it is possible for crop stress from herbicide applications to occur in neighboring fields that contain susceptible crop varieties/hybrids. Nutrient and moisture stress, as well as stress caused by herbicide applications, can interact to influence yields in agricultural fields. If remotely sensed data can be used to accurately identify specific levels of crop stress, producers could use this information to better assist them in crop management to maximize yields and protect their investments. 
This research was conducted to evaluate classification of specific

  17. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation

    Directory of Open Access Journals (Sweden)

    Luigi Boschetti

    2012-09-01

    Full Text Available According to the literature, and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification, capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As proof of concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in the pre-attentive vision first stage of an RS-IUS.

  18. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. The method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins, and tested it on a previously unseen data set of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection of cardiovascular disease and risk analysis.

  19. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    extraction from medical images and fuses the different extracted visual features with a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of the image representation, a binary histogram of predefined vocabulary words from image captions is used. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.

  20. The Royal College of Radiologists Breast Group breast imaging classification

    International Nuclear Information System (INIS)

    Maxwell, A.J.; Ridley, N.T.; Rubin, G.; Wallis, M.G.; Gilbert, F.J.; Michell, M.J.

    2009-01-01

    Standardisation of the classification of breast imaging reports will improve communication between the referrer and the radiologist and avoid ambiguity, which may otherwise lead to the mismanagement of patients. Following wide consultation, the Royal College of Radiologists Breast Group has produced a scoring system for the classification of breast imaging. This will facilitate audit and the development of nationally agreed standards for the investigation of women with breast disease. The five-point system is as follows: 1, normal; 2, benign findings; 3, indeterminate/probably benign findings; 4, findings suspicious of malignancy; 5, findings highly suspicious of malignancy. It is recommended that this system be used in the reporting of all breast imaging examinations in the UK.

  1. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists; hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting distinguishing features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods used at each step of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques used in building such systems for biodiversity studies.

  2. Utility of multispectral imaging for nuclear classification of routine clinical histopathology imagery

    Directory of Open Access Journals (Sweden)

    Harvey Neal R

    2007-07-01

    Full Text Available Abstract Background We present an analysis of the utility of multispectral versus standard RGB imagery for routine H&E stained histopathology images, in particular for pixel-level classification of nuclei. Our multispectral imagery has 29 spectral bands, spaced 10 nm within the visual range of 420–700 nm. It has been hypothesized that the additional spectral bands contain further information useful for classification compared to the 3 standard bands of RGB imagery. We present analyses of our data designed to test this hypothesis. Results For classification using all available image bands, we find the best performance (equal tradeoff between detection rate and false alarm rate) is obtained from either the multispectral or our "ccd" RGB imagery, with an overall increase in performance of 0.79% compared to the next best performing image type. For classification using single image bands, the single best multispectral band (in the red portion of the spectrum) gave a performance increase of 0.57% compared to the performance of the single best RGB band (red). Additionally, red bands had the highest coefficients/preference in our classifiers. Principal components analysis of the multispectral imagery indicates only two significant image bands, which is not surprising given the presence of two stains. Conclusion Our results indicate that, for a pixel-level nuclear classification task, multispectral imagery of routine H&E stained histopathology provides minimal additional spectral information beyond that of standard RGB imagery.
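
    The PCA observation in the conclusion, that two stains imply roughly two significant components, is easy to reproduce on synthetic data (the two-endmember mixing model below is an illustrative assumption, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 29-band pixels generated from two underlying "stain" spectra,
# mimicking H&E: every pixel is a mix of two endmembers plus small noise.
bands, n_pixels = 29, 2000
stains = rng.random((2, bands))            # two endmember spectra
mix = rng.random((n_pixels, 2))            # per-pixel stain concentrations
X = mix @ stains + 0.01 * rng.standard_normal((n_pixels, bands))

# PCA via SVD on the centered data.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = s ** 2 / np.sum(s ** 2)
print(np.round(var_ratio[:3], 3))  # the first two components dominate
```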

  3. Remote Sensing of Ecology, Biodiversity and Conservation: A Review from the Perspective of Remote Sensing Specialists

    Directory of Open Access Journals (Sweden)

    Marc Cattet

    2010-11-01

    Full Text Available Remote sensing, the science of obtaining information via noncontact recording, has swept the fields of ecology, biodiversity and conservation (EBC). Several quality review papers have contributed to this field. However, these papers often discuss the issues from the standpoint of an ecologist or a biodiversity specialist. This review focuses on the spaceborne remote sensing of EBC from the perspective of remote sensing specialists, i.e., it is organized in the context of state-of-the-art remote sensing technology, including instruments and techniques. Herein, the instruments to be discussed consist of high spatial resolution, hyperspectral, thermal infrared, small-satellite constellation, and LIDAR sensors; and the techniques refer to image classification, vegetation index (VI), inversion algorithm, data fusion, and the integration of remote sensing (RS) and geographic information system (GIS).

  4. Remote sensing of ecology, biodiversity and conservation: a review from the perspective of remote sensing specialists.

    Science.gov (United States)

    Wang, Kai; Franklin, Steven E; Guo, Xulin; Cattet, Marc

    2010-01-01

    Remote sensing, the science of obtaining information via noncontact recording, has swept the fields of ecology, biodiversity and conservation (EBC). Several quality review papers have contributed to this field. However, these papers often discuss the issues from the standpoint of an ecologist or a biodiversity specialist. This review focuses on the spaceborne remote sensing of EBC from the perspective of remote sensing specialists, i.e., it is organized in the context of state-of-the-art remote sensing technology, including instruments and techniques. Herein, the instruments to be discussed consist of high spatial resolution, hyperspectral, thermal infrared, small-satellite constellation, and LIDAR sensors; and the techniques refer to image classification, vegetation index (VI), inversion algorithm, data fusion, and the integration of remote sensing (RS) and geographic information system (GIS).

  5. Damage classification of pipelines under water flow operation using multi-mode actuated sensing technology

    International Nuclear Information System (INIS)

    Lee, Changgil; Park, Seunghee

    2011-01-01

    In a structure, several types of damage can occur, ranging from micro-cracking to corrosion or loose bolts. This makes identifying the damage difficult with a single mode of sensing. Therefore, a multi-mode actuated sensing system is proposed based on a self-sensing circuit using a piezoelectric sensor. In self-sensing-based multi-mode actuated sensing, one mode provides a wide frequency-band structural response from the self-sensed impedance measurement and the other mode provides a specific frequency-induced structural wavelet response from the self-sensed guided wave measurement. In this experimental study, a pipeline system under water flow operation was examined to verify the effectiveness and robustness of the proposed structural health monitoring approach. Different types of structural damage were inflicted artificially on the pipeline system. To classify the multiple types of structural damage, supervised learning-based statistical pattern recognition was implemented by composing a three-dimensional space using the damage indices extracted from the impedance and guided wave features as well as temperature variations. For a more systematic damage classification, several control parameters were optimized to determine an optimal decision boundary for the supervised learning-based pattern recognition. Further research issues are also discussed for real-world implementations of the proposed approach

  6. Differential laser-induced perturbation spectroscopy and fluorescence imaging for biological and materials sensing

    Science.gov (United States)

    Burton, Dallas Jonathan

    Laser-based diagnostics have been a topic of research in many fields, most notably for applications in environmental studies, military defense technologies, and medicine, among others. In this dissertation, a novel laser-based optical diagnostic method, differential laser-induced perturbation spectroscopy (DLIPS), has been implemented in a spectroscopy mode and expanded into an imaging mode in combination with fluorescence techniques. The DLIPS method takes advantage of deep ultraviolet (UV) laser perturbation at sub-ablative energy fluences to photochemically cleave bonds and alter fluorescence signal response before and after perturbation. The resulting difference spectrum or differential image adds more information about the target specimen, and can be used in combination with traditional fluorescence techniques for detection of certain materials, characterization of many materials and biological specimens, and diagnosis of various human skin conditions. The differential aspect allows for mitigation of patient or sample variation, and has the potential to develop into a powerful, noninvasive optical sensing tool. The studies in this dissertation encompass efforts to continue the fundamental research on DLIPS, including expansion of the method to an imaging mode. Five primary studies have been carried out and presented. These include the use of DLIPS in a spectroscopy mode for analysis of nitrogen-based explosives on various substrates, classification of Caribbean fruit flies versus Caribbean fruit flies that have been irradiated with gamma rays, and diagnosis of human skin cancer lesions. The nitrogen-based explosives and Caribbean fruit flies have been analyzed with the DLIPS scheme using the imaging modality, providing complementary information to the spectroscopic scheme. 
In each study, a comparison between absolute fluorescence signals and DLIPS responses showed that DLIPS statistically outperformed traditional fluorescence techniques

  7. Texture classification using autoregressive filtering

    Science.gov (United States)

    Lawton, W. M.; Lee, M.

    1984-01-01

    A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second-order statistics to discriminate between texture classes represented by arbitrary wide-sense stationary random fields, is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.
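
    A minimal version of the idea, characterizing texture by causal autoregressive coefficients estimated from second-order statistics, can be sketched as follows (the AR(1,1) model and least-squares fit are a simplification of the half-plane filtering in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def synth_texture(a_h, a_v, shape=(64, 64)):
    """Simple causal 2-D AR texture driven by white noise."""
    u = np.zeros(shape)
    e = rng.standard_normal(shape)
    for i in range(1, shape[0]):
        for j in range(1, shape[1]):
            u[i, j] = a_h * u[i, j - 1] + a_v * u[i - 1, j] + e[i, j]
    return u

def fit_ar(img):
    """Least-squares fit of the two causal AR coefficients; the fitted
    coefficients (or the prediction-error energy) serve as texture features."""
    y = img[1:, 1:].ravel()
    A = np.stack([img[1:, :-1].ravel(), img[:-1, 1:].ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

tex_a = synth_texture(0.7, 0.1)   # horizontally correlated texture
tex_b = synth_texture(0.1, 0.7)   # vertically correlated texture
print(np.round(fit_ar(tex_a), 2), np.round(fit_ar(tex_b), 2))
```

    Two textures with different orientations of correlation yield clearly different coefficient vectors, which a classifier can then separate.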

  8. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    Science.gov (United States)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    Large data volume is one of the most obvious features of satellite-based remote sensing systems, and it is also a burden for data processing and transmission. The theory of compressive sensing (CS) has been around for almost a decade, and extensive experiments show that CS performs well in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics, such as non-negativity and smoothness. Therefore, the goal of this paper is to present a novel measurement matrix that is not bound by the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach from measurements taken under the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes undersampled measurements as input and outputs an intermediate reconstructed image. Although the training procedure for the network takes a long time, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
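
    The hybrid measurement matrix can be illustrated schematically: rows that produce a Nyquist-sampled thumbnail are stacked on top of conventional random CS rows (block size, image size and row counts are arbitrary assumptions; the deep-learning reconstruction stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

n_side = 16                 # full image is 16x16 -> signal length 256
x = rng.random(n_side * n_side)

# Part 1: "Nyquist" rows that average each 4x4 block -> a 4x4 thumbnail.
block = 4
thumb_rows = []
for bi in range(n_side // block):
    for bj in range(n_side // block):
        row = np.zeros((n_side, n_side))
        row[bi*block:(bi+1)*block, bj*block:(bj+1)*block] = 1.0 / block**2
        thumb_rows.append(row.ravel())
A_thumb = np.array(thumb_rows)            # 16 x 256

# Part 2: conventional CS rows (random Gaussian).
A_cs = rng.standard_normal((48, n_side * n_side)) / np.sqrt(256)

Phi = np.vstack([A_thumb, A_cs])          # 64 measurements of a 256-pixel image
y = Phi @ x

# The first 16 measurements *are* the 4x4 thumbnail (exact block means).
thumb = y[:16].reshape(4, 4)
direct = x.reshape(16, 16).reshape(4, 4, 4, 4).mean(axis=(1, 3))
print(np.allclose(thumb, direct))
```

    A learned network would then take y (or the thumbnail plus the CS residual) as input and regress the full-resolution image.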

  9. Research on Remote Sensing Image Template Processing Based on Global Subdivision Theory

    OpenAIRE

    Xiong Delan; Du Genyuan

    2013-01-01

    Aiming at the problems of vast data volumes, complex operations, and time-consuming processing of remote sensing images, a subdivision template is proposed based on global subdivision theory, which can provide a high level of abstraction and generalization for remote sensing imagery. The paper discusses in detail the model and structure of the subdivision template, and puts forward some new ideas for remote sensing image template processing, its key technologies, and rapid application demonstration. The research has ...

  10. Restoration of color in a remote sensing image and its quality evaluation

    Science.gov (United States)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended, proposing that two main aspects be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Moreover, a practical approach to processing color remote sensing images is recommended: split the color image into three monochromatic images corresponding to the three visible light bands, process the three images separately, and re-synthesize them under psychological color vision constraints. Finally, three novel evaluation variables based on image restoration are introduced to evaluate restoration quality in both the spatial and the photometric sense. An evaluation is provided at last.
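
    The band-splitting step can be sketched as follows, with a simple per-channel linear stretch standing in for the paper's local maximum entropy restoration (which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy low-contrast RGB remote sensing patch (values bunched mid-range).
img = 0.4 + 0.2 * rng.random((32, 32, 3))

def stretch(channel):
    """Per-channel linear contrast stretch to [0, 1] (a stand-in for the
    paper's local maximum-entropy photometric restoration)."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo)

# Split -> process each band independently -> re-synthesize.
restored = np.dstack([stretch(img[..., k]) for k in range(3)])
print(restored.min(), restored.max())  # full dynamic range per band
```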

  11. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of using stereo information for remote sensing classification, this paper presents a stereo remote sensing feature selection method based on the artificial bee colony algorithm. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on three-dimensional structure and optical characteristics, respectively. Firstly, three-dimensional structural characteristics can be analyzed with 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features contains a great deal of redundant information, which may not improve the classification accuracy and can even have adverse effects. To reduce information redundancy while maintaining or improving classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method can effectively improve both computational efficiency and classification accuracy.
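
    A heavily simplified, binary-mask version of artificial bee colony feature selection can be sketched as follows (the nearest-centroid fitness, the subset-size penalty and the colony parameters are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: features 0 and 1 are informative, the other 8 are noise.
n = 200
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 10))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y

def fitness(mask):
    """Nearest-centroid accuracy on the selected subset, minus a size penalty."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1))
    return (pred.astype(int) == y).mean() - 0.01 * mask.sum()

# Simplified artificial-bee-colony-style search over binary feature masks.
n_bees, limit = 10, 5
food = rng.random((n_bees, 10)) < 0.5
fit = np.array([fitness(m) for m in food])
trials = np.zeros(n_bees, dtype=int)
best_mask, best_fit = food[np.argmax(fit)].copy(), float(fit.max())
for _ in range(80):
    for i in range(n_bees):
        cand = food[i].copy()
        cand[rng.integers(10)] ^= True        # neighbourhood move: flip one bit
        f = fitness(cand)
        if f > fit[i]:
            food[i], fit[i], trials[i] = cand, f, 0
            if f > best_fit:                  # remember the global best
                best_mask, best_fit = cand.copy(), f
        else:
            trials[i] += 1
        if trials[i] > limit:                 # scout: abandon a stagnant source
            food[i] = rng.random(10) < 0.5
            fit[i], trials[i] = fitness(food[i]), 0

print(np.flatnonzero(best_mask))  # selected feature indices
```

    The search reliably keeps both informative features while the size penalty sheds most of the redundant ones.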

  12. A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    Science.gov (United States)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and the joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed which correctly extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract the spectral-spatial features of labeled samples effectively. Although CNNs are robust to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results on a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
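
    The spatial pyramid pooling step, which produces a fixed-length feature vector from a feature map of any size by max-pooling over grids at several scales, can be sketched in two dimensions (the paper applies SPP inside a 3-D CNN; this standalone NumPy version illustrates only the pooling itself):

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid max-pooling: a fixed-length vector from any map size."""
    h, w = feature_map.shape
    out = []
    for n in levels:
        # Split the map into an n x n grid and max-pool each cell.
        rows = np.array_split(np.arange(h), n)
        cols = np.array_split(np.arange(w), n)
        for r in rows:
            for c in cols:
                out.append(feature_map[np.ix_(r, c)].max())
    return np.array(out)

rng = np.random.default_rng(7)
small, large = rng.random((6, 6)), rng.random((13, 17))
print(spp(small).shape, spp(large).shape)  # same length for either input size
```

    Concatenating such vectors across channels gives the fixed-length representation that the fully connected layers require, regardless of the input patch size.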

  13. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic Aperture Radar (SAR) plays a vital role in acquiring extremely high resolution radar images and is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate systems and ship navigation. Classification of ice-infested areas yields important features which are useful for various monitoring processes around ice regions. The main objective of this paper is to classify SAR ice images to help identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. The first is preprocessing, in which the speckled SAR ice images are denoised using various speckle removal filters; these filters are compared to find the best one for speckle removal. The second stage is segmentation, in which different regions are segmented using the K-means and watershed algorithms; the two are compared to find the better one for segmenting SAR ice images. The last stage is pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. These include back-propagation neural networks (BPN), a fuzzy classifier, an adaptive neuro-fuzzy inference system (ANFIS) classifier, and the proposed ANFIS with particle swarm optimization (PSO) classifier; all are compared to determine which classifier is best suited to classifying SAR ice images. Various evaluation metrics are computed separately at each of these three stages.

  14. Computational Ghost Imaging for Remote Sensing

    Science.gov (United States)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. 
The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In
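
    The cross-correlation reconstruction described above can be sketched directly: project known random patterns, record one bucket value per pattern, and correlate the bucket fluctuations with the patterns (the pattern count and the toy target are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

# Unknown target transmission mask (a bright square on dark background).
T = np.zeros((16, 16))
T[5:11, 5:11] = 1.0

# Transmitter projects known random patterns; a single-pixel "bucket"
# detector records one number per pattern (total transmitted light).
n_patterns = 4000
patterns = rng.random((n_patterns, 16, 16))
bucket = np.tensordot(patterns, T, axes=([1, 2], [0, 1]))

# Reconstruction: correlate bucket fluctuations with the known patterns.
G = np.tensordot(bucket - bucket.mean(), patterns - patterns.mean(axis=0),
                 axes=([0], [0])) / n_patterns

# Bright pixels of G should coincide with the target's bright square.
est = (G > G.mean()).astype(float)
print(round(float((est == T).mean()), 2))  # fraction of recovered pixels
```

    The reconstruction sharpens as the number of patterns grows, since the correlation estimate's noise falls roughly as one over the square root of the pattern count.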

  15. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of a kernel-based method, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms that has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. To improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learn a multiple kernel-based classifier. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to relevant state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
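
    The core MKL idea, a classifier built on a weighted combination of sub-kernels, can be sketched without the boosting and Kullback-Leibler kernel machinery of the paper; here a fixed equal-weight combination feeds a kernel nearest-class-mean rule that stands in for the SVM (all of this is an illustrative simplification):

```python
import numpy as np

rng = np.random.default_rng(10)

# Two-class toy data (e.g., spectra of two land-cover classes).
n = 100
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, 5)) + 2.0 * y[:, None]

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(A, B, _):
    return A @ B.T

# A fixed convex combination of sub-kernels (MKL would *learn* the weights,
# e.g. via boosting as in the paper; equal weights are used here).
subkernels = [(rbf, 0.1), (rbf, 1.0), (linear, None)]
w = np.ones(3) / 3

def K(A, B):
    return sum(wi * k(A, B, p) for wi, (k, p) in zip(w, subkernels))

def kernel_nearest_mean(Xtr, ytr, Xte):
    """Classify by distance to each class mean in the combined feature space."""
    dists = []
    Ktt = np.diag(K(Xte, Xte))
    for c in [0, 1]:
        Xc = Xtr[ytr == c]
        d = Ktt - 2 * K(Xte, Xc).mean(1) + K(Xc, Xc).mean()
        dists.append(d)
    return np.argmin(np.array(dists), axis=0)

Xte = rng.standard_normal((40, 5)) + 2.0 * np.repeat([0, 1], 20)[:, None]
acc = float((kernel_nearest_mean(X, y, Xte) == np.repeat([0, 1], 20)).mean())
print(acc)
```

    Any positive combination of valid kernels is itself a valid kernel, which is what lets MKL plug the combined Gram matrix into a standard SVM solver.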

  16. Dimension Reduction Aided Hyperspectral Image Classification with a Small-sized Training Dataset: Experimental Comparisons

    Directory of Open Access Journals (Sweden)

    Jinya Su

    2017-11-01

    Full Text Available Hyperspectral images (HSI) provide rich information which may not be captured by other sensing technologies and are therefore gradually finding a wide range of applications. However, they also generate a large amount of irrelevant or redundant data for a specific task. This causes a number of issues, including significantly increased computation time, increased complexity and scale of the prediction models mapping the data to semantics (e.g., classification), and the need for a large amount of labelled data for training. In particular, it is generally difficult and expensive for experts to acquire sufficient training samples in many applications. This paper addresses these issues by exploring a number of classical dimension reduction algorithms from the machine learning community for HSI classification. To reduce the size of the training dataset, feature selection (e.g., mutual information, minimal-redundancy maximal-relevance) and feature extraction (e.g., Principal Component Analysis (PCA), Kernel PCA) are adopted to augment a baseline classification method, the Support Vector Machine (SVM). The proposed algorithms are evaluated using a real HSI dataset. It is shown that PCA yields the most promising performance in reducing the number of features or spectral bands. It is observed that while significantly reducing the computational complexity, the proposed method can achieve better classification results than the classic SVM on a small training dataset, which makes it suitable for real-time applications or when only limited training data are available. Furthermore, it can also achieve performance similar to the classic SVM on large datasets, but with much less computing time.
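The PCA-then-classify pipeline described above can be sketched without any ML library. Here a NumPy PCA feeds a tiny hinge-loss linear classifier standing in for the paper's SVM; all hyperparameters and the toy training loop are illustrative:

```python
import numpy as np

def pca_fit(X, d):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]                  # mean and top-d principal axes

def pca_transform(X, mu, W):
    return (X - mu) @ W.T              # project spectra onto d components

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=100):
    # sub-gradient descent on the regularized hinge loss (labels +/-1)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:
                w, b = w + lr * (yi * xi - lam * w), b + lr * yi
            else:
                w = w - lr * lam * w
    return w, b
```

Reducing hundreds of spectral bands to a few components before the classifier is what keeps both training time and the required number of labelled samples small.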

  17. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and may even belong to the same material (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary and, combined with the majority voting scheme, generally achieves improved prediction performance.
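The contextual majority-voting step can be sketched directly: each pixel's predicted label is replaced by the most common label in its spatial neighbourhood, which suppresses isolated misclassifications. The window size below is illustrative, not taken from the paper:

```python
import numpy as np
from collections import Counter

def majority_vote(labels, radius=1):
    """Smooth a per-pixel label map with a (2*radius+1)^2 majority filter."""
    H, W = labels.shape
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            win = labels[max(0, i - radius):i + radius + 1,
                         max(0, j - radius):j + radius + 1]
            out[i, j] = Counter(win.ravel().tolist()).most_common(1)[0][0]
    return out
```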

  18. Learning scale-variant and scale-invariant features for deep image classification

    NARCIS (Netherlands)

    van Noord, Nanne; Postma, Eric

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial

  19. Evaluation of an Airborne Remote Sensing Platform Consisting of Two Consumer-Grade Cameras for Crop Identification

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2016-03-01

    Full Text Available Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related studies. The objective of this research was to apply three commonly-used classification methods (unsupervised, supervised, and object-based) to three-band imagery with RGB (red, green, and blue) bands and four-band imagery with RGB and near-infrared (NIR) bands to evaluate the performance of a dual-camera imaging system for crop identification. Airborne images were acquired from a cropping area in Texas and mosaicked and georeferenced. The mosaicked imagery was classified using the three classification methods to assess the usefulness of NIR imagery for crop identification and to evaluate performance differences between the object-based and pixel-based methods. Image classification and accuracy assessment showed that the additional NIR band improved crop classification accuracy over the RGB imagery alone, and that the object-based method achieved better results with additional non-spectral image features. The results from this study indicate that the airborne imaging system based on two consumer-grade cameras can be useful for crop identification and other agricultural applications.
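One way to see why an extra NIR band helps separate crops from soil is a vegetation index such as NDVI, which contrasts NIR and red reflectance. NDVI is a standard index, offered here only as an illustration of the NIR band's discriminative value, not as a step from the study itself:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: high for green vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in NIR while absorbing red light, so its NDVI approaches 1, whereas bare soil stays near 0; an RGB-only camera has no access to this contrast.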

  20. Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study

    Directory of Open Access Journals (Sweden)

    André Mora

    2017-11-01

    Full Text Available This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher-level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. Furthermore, an unsupervised approach based on k-means clustering has also been considered for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study including three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; that neural networks suffer from overfitting; and that k-means clustering constitutes a promising technique for identifying land cover types in unknown areas.
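The unsupervised k-means baseline can be sketched in plain NumPy: pixels are clustered in spectral space, and each cluster is afterwards assigned a land-cover type by an analyst. The cluster count and iteration budget below are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm on pixel spectra X of shape (n_pixels, n_bands)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):               # keep the old center if a cluster empties
                centers[c] = members.mean(axis=0)
    return assign, centers
```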

  1. Quality Evaluation of Land-Cover Classification Using Convolutional Neural Network

    Science.gov (United States)

    Dang, Y.; Zhang, J.; Zhao, Y.; Luo, F.; Ma, W.; Yu, F.

    2018-04-01

    Land-cover classification is one of the most important products of earth observation. It focuses mainly on profiling the physical characteristics of the land surface with temporal and distribution attributes, and contains information on both natural and man-made coverage elements, such as vegetation, soil, glaciers, rivers, lakes, marsh wetlands and various man-made structures. In recent years, the amount of high-resolution remote sensing data has increased sharply. Accordingly, the volume of land-cover classification products increases as well, and the need to evaluate such frequently updated products poses a big challenge. Conventionally, the automatic quality evaluation of land-cover classification is carried out through pixel-based classifying algorithms, which makes the task much trickier and consequently hard to keep pace with the required updating frequency. In this paper, we propose a novel quality evaluation approach that evaluates land-cover classification with a scene classification method, a Convolutional Neural Network (CNN) model. By learning from remote sensing data, the randomly initialized kernels that serve as filter matrices evolve into operators with functions similar to hand-crafted operators, like the Sobel or Canny operator, while other kernels learned by the CNN model are much more complex and cannot be understood as existing filters. A method using a CNN as the core algorithm serves quality-evaluation tasks well, since it calculates a set of outputs that directly represent the image's membership grade for certain classes. An automatic quality evaluation approach for the land-cover DLG-DOM coupling data (DLG for Digital Line Graphic, DOM for Digital Orthophoto Map) is introduced in this paper. Treating the CNN model as a robust method for image evaluation then brought out the idea of an automatic quality evaluation approach for land-cover classification.
Based on this experiment, new ideas of quality evaluation
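The observation above that learned kernels behave like Sobel operators can be made concrete with a plain 2-D convolution; a hand-crafted Sobel kernel responds exactly where a vertical edge sits. This toy example is not the paper's CNN, just the basic operation inside one of its layers:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, the basic op inside a CNN layer."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# a vertical step edge: dark left half, bright right half
img = np.zeros((5, 6))
img[:, 3:] = 1.0
resp = conv2d(img, SOBEL_X)  # peaks at the edge, zero in flat regions
```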

  2. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and afterwards use it to code the features from test samples. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral data sets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
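The patch-to-vector step with a sorting scheme can be sketched as follows. Sorting the per-pixel PC vectors by their norm is one illustrative way to make the flattened feature independent of in-plane patch rotation; the paper's exact sorting rule may differ:

```python
import numpy as np

def patch_feature(pc_img, i, j, r=1):
    """Spatial-spectral feature at pixel (i, j) of a first-d-PCs image.

    pc_img: (H, W, d) array holding the first d principal components.
    """
    patch = pc_img[i - r:i + r + 1, j - r:j + r + 1, :]
    rows = patch.reshape(-1, patch.shape[-1])         # one d-vector per pixel
    order = np.argsort(np.linalg.norm(rows, axis=1))  # rotation only permutes rows
    return rows[order].ravel()
```

Because a rotation of the window merely permutes the per-pixel vectors, sorting them into a canonical order removes the dependence on orientation (assuming the norms are distinct).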

  3. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving the computational efficiency and reducing the difficulty of analysis. With this in mind, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improves the detection efficiency of ship targets in remote sensing images.

  4. Color-Image Classification Using MRFs for an Outdoor Mobile Robot

    Directory of Open Access Journals (Sweden)

    Moises Alencastre-Miranda

    2005-02-01

    Full Text Available In this paper, we suggest using color-image classification (in several phases) with Markov Random Fields (MRFs) in order to understand natural images of outdoor environment scenes for a mobile robot. We skip the preprocessing phase, obtaining the same results with better performance. In the segmentation phase, we implement a color segmentation method based on the average of the I3 color-space measure in small image cells obtained from a single split step. In the classification phase, an MRF is used to identify regions as one of three selected classes; here, we consider at the same time the intrinsic color features of the image and the neighborhood system between image cells. Finally, we use region growing and contextual information to correct misclassification errors. We have implemented and tested these phases on several images taken in our campus gardens. We include some results in off-line processing mode and in on-line execution mode on an outdoor mobile robot. The vision system has been used for reactive exploration in an outdoor environment.

  5. AUTOMATIC GLOBAL REGISTRATION BETWEEN AIRBORNE LIDAR DATA AND REMOTE SENSING IMAGE BASED ON STRAIGHT LINE FEATURES

    Directory of Open Access Journals (Sweden)

    Z. Q. Liu

    2018-04-01

    Full Text Available An automatic global registration approach for point clouds and remote sensing images based on straight-line features is proposed which is insensitive to rotation and scale transformations. First, the building ridge lines and contour lines in the point clouds are automatically detected as registration primitives by integrating region growing and topology identification. Second, the collinearity condition equation is selected as the registration transformation function, based on a rotation matrix described by a unit quaternion. The similarity measure is established according to the distance between corresponding straight-line features from the point clouds and the image in the same reference coordinate system. Finally, an iterative Hough transform is adopted to simultaneously estimate the parameters and obtain correspondences between registration primitives. Experimental results show that the proposed method is valid and that the spectral information is useful for subsequent classification processing.

  6. Semiconductor Laser Multi-Spectral Sensing and Imaging

    Directory of Open Access Journals (Sweden)

    Han Q. Le

    2010-01-01

    Full Text Available Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers.

  7. Semiconductor laser multi-spectral sensing and imaging.

    Science.gov (United States)

    Le, Han Q; Wang, Yang

    2010-01-01

    Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers.

  8. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    Science.gov (United States)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique at different scales is employed to exploit the edges of the input image, where coarse superpixel patches provide global classification information while fine ones further provide detail. Unlike a common RGB image, a hyperspectral image has multiple bands, allowing the cluster centers to be adjusted with higher precision. After segmentation, each superpixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation at multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different superpixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
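The class-wise residual rule at the heart of SRC can be sketched as follows. For brevity, the sparse coding step is replaced by ordinary least squares per class dictionary (a collaborative-representation-style stand-in, not the l1 solver an actual SRC implementation would use):

```python
import numpy as np

def residual_classify(class_dicts, y):
    """Assign y to the class whose training-sample dictionary reconstructs it best.

    class_dicts: mapping class -> (n_features, n_atoms) matrix of training samples.
    """
    best, best_r = None, np.inf
    for c, D in class_dicts.items():
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)   # coding step (least squares here)
        r = np.linalg.norm(y - D @ coef)               # class-wise reconstruction residual
        if r < best_r:
            best, best_r = c, r
    return best
```

The coarse-to-fine scheme then applies this rule per superpixel at several scales and fuses the resulting label maps.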

  9. Land Cover Classification Using ALOS Imagery For Penang, Malaysia

    International Nuclear Information System (INIS)

    Sim, C K; Abdullah, K; MatJafri, M Z; Lim, H S

    2014-01-01

    This paper presents the potential of integrating optical and radar remote sensing data to improve automatic land cover mapping. The analysis involves standard image processing and consists of spectral signature extraction and the application of a statistical decision rule to identify land cover categories. A maximum likelihood classifier is utilized to determine the different land cover categories. Ground reference data from sites throughout the study area were collected for training and validation. The land cover information was extracted from the digital data using the PCI Geomatica 10.3.2 software package. The variations in classification accuracy due to a number of radar image processing techniques are studied. The relationship between the processing window and the land classification is also investigated. The classification accuracies from the optical and radar feature combinations are studied. Our research finds that the fusion of radar and optical data significantly improved classification accuracies. This study indicates that land cover/use can be mapped accurately by using this approach.

  10. STANDARDIZING QUALITY ASSESSMENT OF FUSED REMOTELY SENSED IMAGES

    Directory of Open Access Journals (Sweden)

    C. Pohl

    2017-09-01

    Full Text Available The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the chosen criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.

  11. Standardizing Quality Assessment of Fused Remotely Sensed Images

    Science.gov (United States)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the chosen criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.

  12. A New Graduation Algorithm for Color Balance of Remote Sensing Image

    Science.gov (United States)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view and obtain more data and information in remote sensing image research, workers often need to mosaic images together. However, the mosaicked image often has large color differences and visible gap lines. Based on a graduation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-rounds Curves (TQC). The paper uses a Gaussian filter to address image color noise and the gap line. The experiments used Greenland data acquired in 1963 by the ARGON KH-5 satellite from the Declassified Intelligence Photography Project (DISP), and Landsat photography of the North Gulf, China. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the transitions in the remote sensing image become smoother.

  13. Minimization of calibrated loss functions for image classification

    OpenAIRE

    Bel Haj Ali , Wafa

    2013-01-01

    Image classification has become a major challenge since it concerns, on the one hand, the millions or billions of images available on the web and, on the other hand, images used for critical real-time applications. This classification generally involves learning methods and classifiers that must provide both precision and speed. These learning problems concern a large number of application areas: namely, web applications (profiling, targeting, social networks, search engines),...

  14. A novel airport extraction model based on saliency region detection for high spatial resolution remote sensing images

    Science.gov (United States)

    Lv, Wen; Zhang, Libao; Zhu, Yongchun

    2017-06-01

    The airport is one of the most crucial traffic facilities in the military and civil fields. Automatic airport extraction in high spatial resolution remote sensing images has many applications, such as regional planning and military reconnaissance. Traditional airport extraction strategies are usually based on prior knowledge and locate the airport target by template matching and classification, which causes high computational complexity and large computing-resource costs for high spatial resolution remote sensing images. In this paper, we propose a novel automatic airport extraction model based on saliency region detection, airport runway extraction and adaptive threshold segmentation. For saliency region detection, we choose the frequency-tuned (FT) model, which computes airport saliency using low-level color and luminance features, is easy and fast to implement, and provides full-resolution saliency maps. For airport runway extraction, the Hough transform is adopted to count the number of parallel line segments. For adaptive threshold segmentation, the Otsu threshold segmentation algorithm is applied to obtain more accurate airport regions. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in airport extraction.
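The frequency-tuned model is simple enough to sketch: saliency is the per-pixel distance between a slightly blurred image and the image's global mean. The original FT model operates in Lab colour with a Gaussian blur; a grayscale box blur is used below purely for brevity:

```python
import numpy as np

def box_blur3(img):
    """3x3 mean filter with edge replication (stand-in for a Gaussian blur)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

def ft_saliency(img):
    img = np.asarray(img, dtype=float)
    # FT saliency: distance between the blurred image and the global mean
    return np.abs(box_blur3(img) - img.mean())
```

Regions that differ strongly from the average scene colour, such as a bright runway against terrain, receive high saliency and can be passed to the runway-extraction stage.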

  15. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2018-02-01

    Full Text Available In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes to evaluate their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  16. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-02-06

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes to evaluate their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  17. UNMANNED AERIAL VEHICLE (UAV) HYPERSPECTRAL REMOTE SENSING FOR DRYLAND VEGETATION MONITORING

    Energy Technology Data Exchange (ETDEWEB)

    Nancy F. Glenn; Jessica J. Mitchell; Matthew O. Anderson; Ryan C. Hruska

    2012-06-01

    UAV-based hyperspectral remote sensing capabilities developed by the Idaho National Lab and Idaho State University, Boise Center Aerospace Lab, were recently tested via demonstration flights that explored the influence of altitude on geometric error, image mosaicking, and dryland vegetation classification. The test flights successfully acquired usable flightline data capable of supporting classifiable composite images. Unsupervised classification results support vegetation management objectives that rely on mapping shrub cover and distribution patterns. Overall, supervised classifications performed poorly despite spectral separability in the image-derived endmember pixels. Future mapping efforts that leverage ground reference data, ultra-high spatial resolution photos and time series analysis should be able to effectively distinguish native grasses such as Sandberg bluegrass (Poa secunda), from invasives such as burr buttercup (Ranunculus testiculatus) and cheatgrass (Bromus tectorum).

  18. Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot

    Directory of Open Access Journals (Sweden)

    Yajing Shen

    2015-12-01

    Full Text Available Image sensing at a small scale is essentially important in many fields, including microsample observation, defect inspection, material characterization and so on. However, nowadays, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have a significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.

  19. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
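    The low-rank constraint at the heart of LRSC is conventionally enforced through the nuclear norm, whose proximal operator is singular-value soft-thresholding. As a hedged illustration of that building block only (not the authors' full coding objective), the operator can be sketched in NumPy:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of
    tau * ||X||_* (nuclear norm), the convex surrogate commonly used
    to encourage low-rank structure in a matrix of codes."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)       # shrink singular values toward zero
    return (U * s) @ Vt                # rebuild the (now lower-rank) matrix
```

Applied to a nearly low-rank matrix, a suitable threshold suppresses the small singular values contributed by noise while keeping the dominant structure.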

  20. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  1. Noise estimation for remote sensing image data analysis

    Science.gov (United States)

    Du, Qian

    2004-01-01

    Noise estimation has not received much attention in the remote sensing community, perhaps because noise is normally not large enough to impair image analysis results. Noise estimation is also very challenging due to the random nature of the noise (for random noise) and the difficulty of separating the noise component from the signal at each specific location. We review and propose seven different types of methods to estimate the noise variance and noise covariance matrix in a remotely sensed image. In the experiment, it is demonstrated that a good noise estimate can improve the performance of an algorithm via noise whitening if that algorithm assumes white noise.
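    The record above notes that a good noise estimate enables noise whitening. As a minimal sketch (a common residual-based estimator, not necessarily one of the paper's seven methods), estimating the band-to-band noise covariance from horizontal first differences and then whitening might look like:

```python
import numpy as np

def estimate_noise_covariance(cube):
    """Estimate the noise covariance of an image cube (rows, cols, bands)
    from horizontal first differences, assuming the signal is spatially
    smooth so differences are dominated by noise."""
    diff = (cube[:, 1:, :] - cube[:, :-1, :]) / np.sqrt(2.0)
    flat = diff.reshape(-1, cube.shape[2])
    return flat.T @ flat / flat.shape[0]

def whiten(pixels, noise_cov):
    """Transform pixel vectors (n, bands) so the noise has identity covariance."""
    evals, evecs = np.linalg.eigh(noise_cov)
    w = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    return pixels @ w.T
```

After whitening with a good estimate, per-band noise variances become approximately equal, which is the assumption many detectors and classifiers make.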

  2. Classification and Recognition of Tomb Information in Hyperspectral Image

    Science.gov (United States)

    Gu, M.; Lyu, S.; Hou, M.; Ma, S.; Gao, Z.; Bai, S.; Zhou, P.

    2018-04-01

    There are a large number of materials with important historical information in ancient tombs. However, in many cases, these substances are obscure and indistinguishable to the naked eye or a true-colour camera. In order to classify and identify materials in ancient tombs effectively, this paper applied hyperspectral imaging technology to archaeological research on an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the ancient tomb were selected by the Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification was performed using a Support Vector Machine (SVM) based on the feature bands. Finally, the material at the bottom of the ancient tomb was identified by spectral analysis and spectral matching. The results show that SVM based on feature bands can not only ensure classification accuracy, but also shorten the data processing time and improve classification efficiency. During material identification, it was found that what had been identified as the same matter under visible light is actually two different substances. This result provides a new reference and research idea for archaeological work.
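    A hedged sketch of the reduce-then-classify pipeline described above: PCA via SVD for dimensionality reduction, with a simple nearest-centroid classifier standing in for the paper's SVM (all function names here are illustrative):

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Project pixel spectra (n, bands) onto the top principal components."""
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T, vt[:n_components], mean

def nearest_centroid_fit(X, y):
    """Per-class mean vectors in the reduced feature space."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Assign each sample to the class with the closest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```

In practice the nearest-centroid step would be replaced by an SVM, but the dimensionality-reduction front end is the same.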

  3. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  4. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  5. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the image analysis of chromosomes of human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. The methodology is intended to be a tool to help cytogeneticists in the identification, characterization and classification of chromosomes in metaphase analysis. Its development included the creation of a software application based on artificial intelligence techniques, combining Fuzzy Logic with image processing techniques. The application was named CHRIMAN and is composed of modules containing the methodological steps required for an automated analysis. The first step is the standardization of the bi-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase analysis microscope. The second step covers image treatment through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for further use in pattern recognition algorithms. The third step consists of the characterization, counting and classification of the stored digital images and extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. The classification is based on the classical standards of Buckton [1973], and supports geneticists in the chromosomal analysis procedure, decreasing analysis time and creating conditions to include this method in a broader evaluation system of human cell damage due to ionizing radiation exposure. (author)

  6. Crop status sensing system by multi-spectral imaging sensor, 1: Image processing and paddy field sensing

    International Nuclear Information System (INIS)

    Ishii, K.; Sugiura, R.; Fukagawa, T.; Noguchi, N.; Shibata, Y.

    2006-01-01

    The objective of the study is to construct a sensing system for precision farming. A Multi-Spectral Imaging Sensor (MSIS), which can obtain three images (G, R and NIR) simultaneously, was used for detecting the growth status of plants. The sensor was mounted on an unmanned helicopter. An image processing method for acquiring crop status information with high accuracy was developed. The crop parameters measured include SPAD, leaf height, and stem number. Both a direct-seeded variety and a transplanted variety of paddy rice were adopted in the research. The result of a field test showed that the crop status of both varieties could be detected with sufficient accuracy to apply to precision farming
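    From R and NIR band images like those the MSIS acquires, a standard crop-status indicator is the Normalized Difference Vegetation Index (NDVI); a minimal implementation (offered as a generic example, not necessarily the processing used in this study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red band images.
    Values near +1 indicate dense green vegetation; near 0, bare soil."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero
```

The same per-pixel pattern extends to other band-ratio indices used for crop-status sensing.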

  7. Images of a Loving God and Sense of Meaning in Life

    Science.gov (United States)

    Stroope, Samuel; Draper, Scott; Whitehead, Andrew L.

    2013-01-01

    Although prior studies have documented a positive association between religiosity and sense of meaning in life, the role of specific religious beliefs is currently unclear. Past research on images of God suggests that loving images of God will positively correlate with a sense of meaning and purpose. Mechanisms for this hypothesized relationship…

  8. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary properties of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging but popular datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.
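    MV(3)MR itself is a multiview, vector-valued method; as a hedged single-view illustration of the underlying manifold-regularization idea only, graph-Laplacian label propagation can be sketched as follows (all names and parameter choices are illustrative):

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Dense RBF affinity matrix encoding the local geometry of the data."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def laplacian_regularized_labels(X, y, labeled_mask, gamma=1.0, sigma=1.0):
    """Solve (J + gamma * L) f = J y, where J selects labeled points and
    L = D - W is the unnormalized graph Laplacian: the label function f
    is forced to agree with known labels while varying smoothly over the
    data manifold."""
    W = rbf_affinity(X, sigma)
    L = np.diag(W.sum(1)) - W
    J = np.diag(labeled_mask.astype(float))
    f = np.linalg.solve(J + gamma * L + 1e-9 * np.eye(len(y)), J @ y)
    return f
```

The manifold-regularization term gamma * f.T @ L @ f is the single-view analogue of the smoothness penalty MV(3)MR applies across multiple feature views.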

  9. Exploitation of commercial remote sensing images: reality ignored?

    Science.gov (United States)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprised of exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  10. Proceedings of the Eleventh International Symposium on Remote Sensing of Environment, volume 2. [application and processing of remotely sensed data

    Science.gov (United States)

    1977-01-01

    Application and processing of remotely sensed data are discussed. Areas of application include: pollution monitoring, water quality, land use, marine resources, ocean surface properties, and agriculture. Image processing and scene analysis are described along with automated photointerpretation and classification techniques. Data from infrared and multispectral band scanners onboard LANDSAT satellites are emphasized.

  11. Segmentation and Classification of Burn Color Images

    Science.gov (United States)

    2001-10-25

    SEGMENTATION AND CLASSIFICATION OF BURN COLOR IMAGES. Begoña Acha, Carmen Serrano, Laura Roa (Área de Teoría de la Señal y Comunicaciones).

  12. Segmentation and Classification of Burn Color Images

    National Research Council Canada - National Science Library

    Acha, Begonya

    2001-01-01

    .... In the classification part, we take advantage of color information by clustering, with a vector quantization algorithm, the color centroids of small squares, taken from the burnt segmented part of the image, in the (V1, V2) plane into two possible groups, where V1 and V2 are the two chrominance components of the CIE Lab representation.

  13. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for the classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less suitable for large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  14. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

    Full Text Available One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the aerial image processed through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over all state-of-the-art references.

  15. Comparing and optimizing land use classification in a Himalayan area using parametric and non parametric approaches

    NARCIS (Netherlands)

    Sterk, G.; Sameer Saran,; Raju, P.L.N.; Amit, Bharti

    2007-01-01

    Supervised classification is one of the important tasks in remote sensing image interpretation, in which the image pixels are classified into various predefined land use/land cover classes based on the spectral reflectance values in different bands. In reality some classes may have very close spectral

  16. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.

    Science.gov (United States)

    Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark

    2017-08-01

    With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.

  17. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature on RS image hashing. In many real-world applications, RS images are produced in a streaming manner, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and kept fixed all the time. Therefore, the pre-trained hash functions might not fit the ever-growing set of new RS images. Moreover, the batch-based models have to load all the training images into memory for model learning, which consumes many computing and memory resources. To address these deficiencies, we propose a new online hashing method, which learns and adapts its hashing functions with respect to newly incoming RS images via a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is quite suitable for scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrated the efficacy and effectiveness of the proposed method.
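    The paper's online partial random learning scheme is not reproduced here, but the basic hashing-for-retrieval idea, binary codes compared by Hamming distance, can be sketched with simple sign random projections (a classical stand-in for learned hash functions; all names are illustrative):

```python
import numpy as np

class SignRandomProjectionHash:
    """Locality-sensitive hashing sketch: h(x) = sign(P x) as a bit vector.
    Similar feature vectors tend to receive codes with small Hamming
    distance, enabling fast approximate retrieval."""

    def __init__(self, dim, n_bits, seed=0):
        self.P = np.random.default_rng(seed).normal(size=(n_bits, dim))

    def encode(self, X):
        """Binary codes (n, n_bits) for feature vectors X of shape (n, dim)."""
        return (X @ self.P.T > 0).astype(np.uint8)

    @staticmethod
    def hamming(code, codes):
        """Hamming distance from one query code to a database of codes."""
        return (code[None, :] != codes).sum(axis=1)
```

A learned (or online-updated) projection would replace the fixed random `P`, but the encode/rank-by-Hamming retrieval loop stays the same.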

  18. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then takes advantage of a deep CNN classifier. Image representation of time-series introduces feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representation together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures but also to state-of-the-art TSC algorithms.
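    A recurrence plot thresholds pairwise distances between time-series samples into a binary 2D image; a minimal sketch (the 10%-of-range threshold is an assumption for illustration, not the paper's choice):

```python
import numpy as np

def recurrence_plot(series, eps=None):
    """2D recurrence plot of a 1D series: R[i, j] = 1 if |x_i - x_j| < eps.
    If eps is None, use 10% of the series range as the threshold."""
    x = np.asarray(series, dtype=float)
    if eps is None:
        eps = 0.1 * (x.max() - x.min())
    d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return (d < eps).astype(np.uint8)     # binary texture image for the CNN
```

The resulting symmetric binary image is what the CNN classifier consumes in place of the raw 1D signal; periodic signals produce characteristic diagonal band textures.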

  19. Study on Classification Accuracy Inspection of Land Cover Data Aided by Automatic Image Change Detection Technology

    Science.gov (United States)

    Xie, W.-J.; Zhang, L.; Chen, H.-P.; Zhou, J.; Mao, W.-J.

    2018-04-01

    The purpose of carrying out national geographic conditions monitoring is to obtain information on surface changes caused by human social and economic activities, so that the geographic information can be used to offer better services to government, enterprise and the public. Land cover data contains detailed geographic conditions information, and has thus been listed as one of the important achievements of the national geographic conditions monitoring project. At present, the main issue in the production of land cover data is how to improve classification accuracy. For land cover data quality inspection and acceptance, classification accuracy is also an important check point. So far, classification accuracy inspection in the project has mainly been based on human-computer interaction or manual inspection, which is time-consuming and laborious. By harnessing automatic high-resolution remote sensing image change detection technology based on the ERDAS IMAGINE platform, this paper carried out a classification accuracy inspection test of land cover data in the project, and presents a corresponding technical route, which includes data pre-processing, change detection, result output and information extraction. The result of the quality inspection test shows the effectiveness of the technical route, which can meet the inspection needs for the two typical error types, namely missing and incorrect updates, effectively reduces the work intensity of human-computer interaction inspection for quality inspectors, and provides a technical reference for the data production and quality control of land cover data.

  20. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities, so-called individuals, is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a promising improvement in classification speed: up to 80,000 times faster than the Protégé-based approach.

  1. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    Science.gov (United States)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

    Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using regularized gradient-descent optimization methods, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
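    Gradient descent with sparsity-based regularization, the best-performing variant reported above, is classically realized by iterative shrinkage-thresholding (ISTA); a generic sketch on a linear forward model (the operator `A` here is an illustrative stand-in, not the lens-free holographic imaging model):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding algorithm:
    minimize 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Each iteration is a gradient step on the data term followed by
    soft-thresholding, which enforces sparsity in the reconstruction."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

For a field of sparse small particles, the L1 penalty suppresses background noise while retaining the few strong scatterers, which is the intuition behind the sparsity regularizer's advantage.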

  2. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    Science.gov (United States)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.

  3. Tactile surface classification for limbed robots using a pressure sensitive robot skin

    International Nuclear Information System (INIS)

    Shill, Jacob J; Collins Jr, Emmanuel G; Coyle, Eric; Clark, Jonathan

    2015-01-01

    This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. Initial experimental results for statically obtained images show that the approach yields classification accuracies >98%. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies ≈99%. In addition, the accuracies are independent of changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to accommodate the failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by more than 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains. (paper)
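    Terrain signatures built from magnitude frequency responses can be sketched as normalized 2D FFT magnitudes compared against per-class prototypes (a simplified stand-in for the paper's classifier; the DC-removal and normalization choices are illustrative assumptions):

```python
import numpy as np

def signature(img):
    """Terrain signature: normalized magnitude of the 2D FFT of a
    pressure image, with the DC term (mean pressure) removed."""
    mag = np.abs(np.fft.fft2(img))
    mag[0, 0] = 0.0
    v = mag.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def classify_terrain(img, proto_sigs, labels):
    """Nearest-prototype classification in signature space."""
    s = signature(img)
    d = ((proto_sigs - s) ** 2).sum(axis=1)
    return labels[int(np.argmin(d))]
```

Because the FFT magnitude is insensitive to spatial shifts of the contact patch, textures with different dominant spatial frequencies separate cleanly in this feature space.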

  4. Feature Extraction and Classification on Esophageal X-Ray Images of Xinjiang Kazak Nationality

    Directory of Open Access Journals (Sweden)

    Fang Yang

    2017-01-01

    Full Text Available Esophageal cancer is one of the fastest rising types of cancers in China. The Kazak nationality is the highest-risk group in Xinjiang. In this work, an effective computer-aided diagnostic system is developed to assist physicians in interpreting digital X-ray image features and improving the quality of diagnosis. The modules of the proposed system include image preprocessing, feature extraction, feature selection, image classification, and performance evaluation. 300 original esophageal X-ray images were resized to a region of interest and then enhanced by the median filter and histogram equalization method. 37 features from textural, frequency, and complexity domains were extracted. Both sequential forward selection and principal component analysis methods were employed to select the discriminative features for classification. Then, support vector machine and K-nearest neighbors were applied to classify the esophageal cancer images with respect to their specific types. The classification performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, precision, and recall, respectively. Experimental results show that the classification performance of the proposed system outperforms the conventional visual inspection approaches in terms of diagnostic quality and processing time. Therefore, the proposed computer-aided diagnostic system is promising for the diagnostics of esophageal cancer.
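
    A minimal sketch of such a pipeline (preprocessing, feature extraction, PCA-based selection, SVM classification, AUROC evaluation) is shown below on synthetic images; the four hand-crafted statistics are stand-ins for the paper's 37 textural, frequency, and complexity features.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def features(img):
    """Stand-ins for the paper's textural/frequency/complexity features."""
    img = median_filter(img, size=3)          # denoise, as in the paper
    f = np.abs(np.fft.fft2(img))
    return np.array([img.mean(), img.std(),   # simple textural stats
                     f.mean(), f.std()])      # simple frequency stats

# Synthetic "normal" vs "cancer" ROIs differing in texture contrast.
imgs = [rng.normal(0, s, (32, 32)) for s in [1.0] * 40 + [2.0] * 40]
y = np.array([0] * 40 + [1] * 40)
X = np.array([features(i) for i in imgs])

clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    SVC(probability=True, random_state=0))
clf.fit(X[::2], y[::2])                       # even rows: training set
auc = roc_auc_score(y[1::2], clf.predict_proba(X[1::2])[:, 1])
```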

  5. A new web-based system for unsupervised classification of satellite images from the Google Maps engine

    Science.gov (United States)

    Ferrán, Ángel; Bernabé, Sergio; García-Rodríguez, Pablo; Plaza, Antonio

    2012-10-01

    In this paper, we develop a new web-based system for unsupervised classification of satellite images available from the Google Maps engine. The system has been developed using the Google Maps API and incorporates functionalities such as unsupervised classification of image portions selected by the user (at the desired zoom level). For this purpose, we use a processing chain made up of the well-known ISODATA and k-means algorithms, followed by spatial post-processing based on majority voting. The system is currently hosted on a high-performance server which executes the classification algorithms and returns the classification results efficiently. These functionalities are prerequisites for applying efficient image classification techniques and for incorporating content-based image retrieval (CBIR). The classification results of the proposed chain are validated by comparing their accuracy against techniques available in the well-known Environment for Visualizing Images (ENVI) software package. The server has access to a cluster of commodity graphics processing units (GPUs); hence, in future work we plan to perform the processing in parallel by taking advantage of the cluster.
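
    The processing chain, unsupervised clustering followed by majority-voting spatial post-processing, can be sketched roughly as follows (k-means only; ISODATA, which adds cluster splitting and merging, is omitted):

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic RGB image: two regions plus noise.
img = np.zeros((20, 20, 3))
img[:, 10:] = 1.0
img += 0.2 * rng.standard_normal(img.shape)

# Unsupervised classification: k-means on per-pixel colour vectors.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 3)).reshape(20, 20)

def majority_vote(lbl):
    """3x3 majority-voting post-processing: each pixel takes the most
    frequent label in its neighbourhood, removing isolated noise pixels."""
    def mode(window):
        vals, counts = np.unique(window, return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(lbl, mode, size=3)

smoothed = majority_vote(labels)
```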

  6. Polarimetric SAR Image Classification Using Multiple-feature Fusion and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Sun Xun

    2016-12-01

    Full Text Available In this paper, we propose a supervised classification algorithm for Polarimetric Synthetic Aperture Radar (PolSAR) images using multiple-feature fusion and ensemble learning. First, we extract different polarimetric features, including the extended polarimetric feature space, Hoekman, Huynen, H/alpha/A, and four-component scattering features of PolSAR images. Next, we randomly select two types of features each time from all feature sets to guarantee the reliability and diversity of later ensembles, and use a support vector machine as the basic classifier for predicting classification results. Finally, we concatenate all prediction probabilities of the basic classifiers as the final feature representation and employ the random forest method to obtain the final classification results. Experimental results at the pixel and region levels show the effectiveness of the proposed algorithm.
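
    The ensemble scheme, base SVMs trained on random pairs of feature sets whose class probabilities are concatenated and fed to a random forest, can be sketched as below. The data are synthetic, the four three-dimensional "feature sets" are placeholders for the polarimetric features, and for brevity the meta-features are computed without a separate stacking holdout:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
# Pretend the 12 features come from 4 polarimetric feature sets of 3 each.
feature_sets = [np.arange(i, i + 3) for i in range(0, 12, 3)]

# Base layer: SVMs trained on random pairs of feature sets.
probas = []
for _ in range(5):
    a, b = rng.choice(4, size=2, replace=False)
    cols = np.concatenate([feature_sets[a], feature_sets[b]])
    svm = SVC(probability=True, random_state=0).fit(X[:150, cols], y[:150])
    probas.append(svm.predict_proba(X[:, cols]))

# Meta layer: concatenated class probabilities fed to a random forest.
Z = np.hstack(probas)
rf = RandomForestClassifier(random_state=0).fit(Z[:150], y[:150])
acc = rf.score(Z[150:], y[150:])
```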

  7. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data and then prospectively applying it to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy assessed relative to pathology and the area under the receiver operating characteristic curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  8. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data and then prospectively applying it to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy assessed relative to pathology and the area under the receiver operating characteristic curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  9. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    Full Text Available We present a novel automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information with minimal manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types, and supports not only pixel-based classification but also classification based on object-based characteristics. Classification uses a Decision Tree (DT) approach, for which the well-known C5.0 code has been implemented; it builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier from a C5.0-derived ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in a preferably short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's ability to process geospatial raster or vector data via web resources (server, network) makes it usable independently of any commercial client or desktop software and allows large-scale data processing on servers. Furthermore, the components of TWOPAC were built from open-source components and are implemented as a plug-in for the Quantum GIS software for easy handling of the classification process from the user's perspective.

  10. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    Science.gov (United States)

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable for both types of datasets is also provided. The work points toward the development of a new unified multi-channel signal processing framework for biomedical image analysis that will exploit synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    Science.gov (United States)

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital-number histograms of image objects and to evaluate classification measures that exploit the characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used for assessing the utility of curve-matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
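
    The appeal of histogram curve matching over the nearest-neighbor-to-mean rule can be illustrated as follows: two synthetic classes share the same mean digital number but have different histogram shapes, so a histogram-intersection classifier separates them while the mean alone cannot. The sampling distributions below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def hist(obj_pixels, bins=16):
    """Normalized digital-number histogram of an image object."""
    h, _ = np.histogram(obj_pixels, bins=bins, range=(0, 1))
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical curves."""
    return np.minimum(h1, h2).sum()

# Two classes with equal means (~0.5) but different spreads: the mean
# rule cannot separate them, the histogram shape can.
def sample(cls, n=400):
    return (rng.uniform(0.3, 0.7, n) if cls == 0
            else np.concatenate([rng.uniform(0.0, 0.2, n // 2),
                                 rng.uniform(0.8, 1.0, n - n // 2)]))

ref = {c: hist(sample(c)) for c in (0, 1)}   # per-class reference curves

def classify(obj_pixels):
    h = hist(obj_pixels)
    return max(ref, key=lambda c: intersection(h, ref[c]))

preds = np.array([classify(sample(c)) for c in [0, 1] * 25])
acc = (preds == np.array([0, 1] * 25)).mean()
```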

  12. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters

    Directory of Open Access Journals (Sweden)

    Yongyang Xu

    2018-01-01

    Full Text Available Very high resolution (VHR) remote sensing imagery has been used for land cover classification, and the field is transitioning from land-use classification to pixel-level semantic segmentation. Inspired by the recent success of deep learning and filtering methods in computer vision, this work presents a segmentation model that designs an image segmentation neural network based on deep residual networks and uses a guided filter to extract buildings from remote sensing imagery. Our method includes the following steps: first, the VHR remote sensing imagery is preprocessed and some hand-crafted features are calculated. Second, the designed deep network architecture is trained on urban-district remote sensing images to extract buildings at the pixel level. Third, a guided filter is employed to optimize the classification map produced by deep learning, removing salt-and-pepper noise in the process. Experimental results based on the Vaihingen and Potsdam datasets demonstrate that our method, which benefits from neural networks and guided filtering, achieves a higher overall accuracy than other machine learning and deep learning methods. The proposed method shows outstanding performance in extracting buildings from the diverse objects of the urban district.
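
    The guided-filter post-processing step can be sketched as below: a plain NumPy implementation of the grey-scale guided filter (He et al.) smooths a noisy classification probability map while following the edges of a guidance image. The images are synthetic, and the deep-network stage is not reproduced:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Grey-scale guided filter: smooths p while preserving the edges of
    the guidance image I. Window radius r, regularizer eps."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # local linear model q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

# Guidance: clean edge image; input: noisy building-probability map.
rng = np.random.default_rng(0)
I = np.zeros((32, 32)); I[:, 16:] = 1.0
p = I + 0.3 * rng.standard_normal(I.shape)
q = guided_filter(I, p)
noise_before = np.abs(p - I).mean()
noise_after = np.abs(q - I).mean()
```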

  13. Accelerated Air-coupled Ultrasound Imaging of Wood Using Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yiming Fang

    2015-12-01

    Full Text Available Air-coupled ultrasound (ACU) has shown excellent sensitivity and specificity for the nondestructive imaging of wood-based materials. However, it is time-consuming, owing to the high scanning density dictated by the Nyquist sampling criterion. This study investigated the feasibility of applying compressed sensing techniques to ACU imaging, aiming to reduce the number of scanning lines and thereby accelerate imaging. First, an undersampled scanning strategy specified by a random binary matrix was proposed to fit the compressed sensing framework. The undersampled scanning can be implemented easily, requiring only minor modification of the existing imaging system. Then, the discrete cosine transform was selected experimentally as the representation basis. Finally, the orthogonal matching pursuit algorithm was used to reconstruct the wood images. Experiments on three real ACU images indicate the potential of the present method to accelerate ACU imaging of wood: images of the same quality can be obtained with the scanning time cut in half.
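
    The reconstruction chain, random binary undersampling, a DCT representation basis, and orthogonal matching pursuit, can be sketched in one dimension as follows. A synthetic DCT-sparse scan line stands in for real ACU data, and half of the positions are scanned, mirroring the halved scanning time:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5   # scan-line length, measurements (half), sparsity

# A scan line that is k-sparse in the DCT domain.
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = idct(s, norm="ortho")

# Undersampled "scan": keep m random positions (random binary selection).
idx = np.sort(rng.choice(n, m, replace=False))
y = x[idx]

# Sensing matrix = the selected rows of the inverse-DCT basis.
Psi = idct(np.eye(n), axis=0, norm="ortho")
A = Psi[idx]

# Recover the sparse DCT coefficients, then the full scan line.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = idct(omp.coef_, norm="ortho")
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```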

  14. Psychophysiological Sensing and State Classification for Attention Management in Commercial Aviation

    Science.gov (United States)

    Harrivel, Angela R.; Liles, Charles; Stephens, Chad L.; Ellis, Kyle K.; Prinzel, Lawrence J.; Pope, Alan T.

    2016-01-01

    Attention-related human performance limiting states (AHPLS) can cause pilots to lose airplane state awareness (ASA), and their detection is important to improving commercial aviation safety. The Commercial Aviation Safety Team found that the majority of recent international commercial aviation accidents attributable to loss of control in flight involved flight crew loss of airplane state awareness, and that distraction of various forms was involved in all of them. Research on AHPLS, including channelized attention, diverted attention, startle/surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors has been implemented to simultaneously measure their physiological markers during high-fidelity flight simulation human subject studies. Pilot participants were asked to perform benchmark tasks and experimental flight scenarios designed to induce AHPLS. Pattern classification was employed to distinguish the AHPLS induced by the benchmark tasks. Unimodal classification using pre-processed electroencephalography (EEG) signals as input features to extreme gradient boosting, random forest, and deep neural network multiclass classifiers was implemented. Multi-modal classification using galvanic skin response (GSR) in addition to the same EEG signals, and using the same types of classifiers, produced increased accuracy with respect to the unimodal case (90 percent vs. 86 percent), although only via the deep neural network classifier. These initial results are a first step toward the goal of demonstrating simultaneous real-time classification of multiple states using multiple sensing modalities in high-fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.

  15. Accelerated two-dimensional cine DENSE cardiovascular magnetic resonance using compressed sensing and parallel imaging.

    Science.gov (United States)

    Chen, Xiao; Yang, Yang; Cai, Xiaoying; Auger, Daniel A; Meyer, Craig H; Salerno, Michael; Epstein, Frederick H

    2016-06-14

    Cine Displacement Encoding with Stimulated Echoes (DENSE) provides accurate quantitative imaging of cardiac mechanics with rapid displacement and strain analysis; however, image acquisition times are relatively long. Compressed sensing (CS) with parallel imaging (PI) can generally provide high-quality images recovered from data sampled below the Nyquist rate. The purposes of the present study were to develop CS-PI-accelerated acquisition and reconstruction methods for cine DENSE, to assess their accuracy for cardiac imaging using retrospective undersampling, and to demonstrate their feasibility for prospectively-accelerated 2D cine DENSE imaging in a single breathhold. An accelerated cine DENSE sequence with variable-density spiral k-space sampling and golden angle rotations through time was implemented. A CS method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was combined with sensitivity encoding (SENSE) for the reconstruction of under-sampled multi-coil spiral data. Seven healthy volunteers and 7 patients underwent 2D cine DENSE imaging with fully-sampled acquisitions (14-26 heartbeats in duration) and with prospectively rate-2 and rate-4 accelerated acquisitions (14 and 8 heartbeats in duration). Retrospectively- and prospectively-accelerated data were reconstructed using BLOSM-SENSE and SENSE. Image quality of retrospectively-undersampled data was quantified using the relative root mean square error (rRMSE). Myocardial displacement and circumferential strain were computed for functional assessment, and linear correlation and Bland-Altman analyses were used to compare accelerated acquisitions to fully-sampled reference datasets. For retrospectively-undersampled data, BLOSM-SENSE provided similar or lower rRMSE at rate-2 and lower rRMSE at rate-4 acceleration compared to SENSE (p cine DENSE provided good image quality and expected values of displacement and strain. 
BLOSM-SENSE-accelerated spiral cine DENSE imaging with 2D displacement encoding can be

  16. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    Science.gov (United States)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on the Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence-theory-based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the sense of maximizing the a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of the difference image. Then it iteratively fuses class-conditional information and spatial contextual information, updating labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
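
    The mixture-model EM baseline that both proposed methods build on can be sketched as follows, fitting a two-component Gaussian mixture to a synthetic difference image and labeling the high-mean component as "change"; the evidential fusion of neighboring belief assignments is not reproduced here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic difference image: mostly unchanged pixels (small magnitudes)
# plus a changed 10x10 patch (large magnitudes).
diff = np.abs(rng.normal(0.0, 0.1, (40, 40)))
diff[10:20, 10:20] = np.abs(rng.normal(1.0, 0.1, (10, 10)))

# Two-component mixture fitted by EM: "no change" vs "change".
gmm = GaussianMixture(n_components=2, random_state=0).fit(diff.reshape(-1, 1))
labels = gmm.predict(diff.reshape(-1, 1)).reshape(40, 40)

# Relabel so that class 1 is "change" (the component with the larger mean).
if gmm.means_[0, 0] > gmm.means_[1, 0]:
    labels = 1 - labels

change_ratio = labels[10:20, 10:20].mean()  # fraction of the patch flagged
```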

  17. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. During the last years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study gives a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, originally proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundations, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in their spectral responses.

  18. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarser segmentation for higher scale parameters and finer segmentation for lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground-truth data for training.

  19. SVM Pixel Classification on Colour Image Segmentation

    Science.gov (United States)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image, with the help of clustered pixels, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image; more precisely, it labels every pixel in an image so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic of this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging, and object detection. The process is accomplished step by step. First, the colour and texture features used as input to the SVM classifier must be identified. These inputs are extracted via a local spatial similarity measure model and a steerable filter (Gabor filter). The classifier is then trained using FCM (Fuzzy C-Means) clustering. Both the pixel-level information of the image and the discriminative ability of the SVM classifier are combined through the algorithm to form the final image. The method produces a well-developed segmented image, with increased quality and faster processing compared with previously proposed segmentation methods. One recent application is the Light L16 camera.

  20. CLASSIFICATION AND RECOGNITION OF TOMB INFORMATION IN HYPERSPECTRAL IMAGE

    Directory of Open Access Journals (Sweden)

    M. Gu

    2018-04-01

    Full Text Available There are a large number of materials with important historical information in ancient tombs. However, in many cases, these substances become obscure and indistinguishable to the naked eye or to a true-colour camera. In order to classify and identify materials in ancient tombs effectively, this paper applied hyperspectral imaging technology to the archaeological study of an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the tomb are selected by the Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification was performed using a Support Vector Machine (SVM) on the feature bands. Finally, the material at the bottom of the tomb is identified by spectral analysis and spectral matching. The results show that SVM based on feature bands not only ensures classification accuracy, but also shortens the data processing time and improves classification efficiency. During material identification, it was found that what appears to be the same material in visible light is actually two different substances. This result provides a new reference and research direction for archaeological work.

  1. Crop classification based on multi-temporal satellite remote sensing data for agro-advisory services

    Science.gov (United States)

    Karale, Yogita; Mohite, Jayant; Jagyasi, Bhushan

    2014-11-01

    In this paper, we envision the use of satellite images coupled with GIS to obtain location-specific crop-type information in order to disseminate crop-specific advice to farmers. In our ongoing mKRISHI® project, accurate information about field-level crop type and acreage will help in agro-advisory services and in supply-chain planning and management. The key contribution of this paper is field-level crop classification using multi-temporal Landsat-8 images acquired from November 2013 to April 2014. The study area chosen is Vani, Maharashtra, India, from which field-level ground-truth information for various crops (grape, wheat, onion, soybean, and tomato, along with fodder and fallow fields) was collected using the mobile application. The ground-truth information includes crop type, crop stage, and GPS location for 104 farms in the study area, covering approximately 42 hectares. Seven multi-temporal Landsat-8 images were used to compute vegetation indices for the study area, namely the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR), and Difference Vegetation Index (DVI). The vegetation index values of the pixels within a field were then averaged to obtain field-level vegetation indices. For each crop, binary classification was carried out using a feed-forward neural network operating on the field-level vegetation indices. The classification accuracy for individual crops was in the range of 74.5% to 97.5%, and the overall classification accuracy was 88.49%.
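
    The three vegetation indices named above have simple closed forms (NDVI = (NIR − Red)/(NIR + Red), SR = NIR/Red, DVI = NIR − Red), and the field-level averaging step can be sketched as follows on synthetic reflectance bands:

```python
import numpy as np

def vegetation_indices(nir, red):
    """Per-pixel vegetation indices from NIR and red reflectance bands."""
    ndvi = (nir - red) / (nir + red)  # Normalized Difference Vegetation Index
    sr = nir / red                    # Simple Ratio
    dvi = nir - red                   # Difference Vegetation Index
    return ndvi, sr, dvi

rng = np.random.default_rng(0)
nir = rng.uniform(0.4, 0.6, (8, 8))    # healthy vegetation: high NIR
red = rng.uniform(0.05, 0.15, (8, 8))  # and low red reflectance
ndvi, sr, dvi = vegetation_indices(nir, red)

# Field-level feature: average the per-pixel index over the field mask.
field_mask = np.zeros((8, 8), dtype=bool)
field_mask[2:6, 2:6] = True
field_ndvi = ndvi[field_mask].mean()
```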

  2. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. With lossy compression, the image cannot be totally recovered; we can only recover an approximation. At this point, the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Image quality is an attribute with several definitions and interpretations, which ultimately depend on the subsequent use intended for the images. This work proposes a quantitative analysis of quality for lossy-compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification performed with these images.

  3. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. In an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling, noise filtering, and different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Fuzzy C-means classification for corrosion evolution of steel images

    Science.gov (United States)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory to the classification of steel surfaces according to rust time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels without any pre-processing and the use of neighborhood pixels. Secondly, we apply Gaussian noise with different standard deviations to the images to study the FCM method's tolerance to Gaussian noise. The noisy images simulate the possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures
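    A minimal FCM sketch on 1-D pixel intensities, using the standard membership and centre update equations; the initialization and parameters here are illustrative, not the authors' semi-automatic pipeline:

```python
import numpy as np

def fcm(pixels, c=2, m=2.0, iters=50):
    """Fuzzy C-means on scalar pixel intensities; returns centres and memberships."""
    x = np.asarray(pixels, dtype=float).ravel()
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))    # spread-out initial centres
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # pixel-to-centre distances
        u = d ** (-2.0 / (m - 1.0))                       # fuzzy membership update
        u /= u.sum(axis=0)                                # memberships sum to 1 per pixel
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # membership-weighted centres
    return centers, u

# Two intensity populations, e.g. rusted vs. intact steel surface pixels.
pixels = np.concatenate([np.full(50, 30.0), np.full(50, 200.0)])
centers, u = fcm(pixels)
print(np.round(np.sort(centers), 1))
```

Each pixel ends up with a degree of membership in every cluster, which is what makes FCM attractive for gradual phenomena such as rust progression.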

  5. Kingfisher: a system for remote sensing image database management

    Science.gov (United States)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods for remote sensing image databases are mainly based on spatial-temporal information. The increasing amount of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.

  6. Kent mixture model for classification of remote sensing data on spherical manifolds

    CSIR Research Space (South Africa)

    Lunga, D

    2011-10-01

    Full Text Available Modern remote sensing imaging sensor technology provides detailed spectral and spatial information that enables precise analysis of land cover usage. From a research point of view, traditional widely used statistical models are often limited...

  7. ASSESSMENT OF LANDSCAPE CHARACTERISTICS ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses. This...

  8. Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging.

    Science.gov (United States)

    Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul

    2017-12-01

    There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Due to the lack of source segregation practice, automated segregation of recyclables from MSW is needed in developing countries. This paper reports a thermal-imaging-based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using the thermal imaging technique for classification and a robotic system for sorting of recyclables in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal-imaging-based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Uniform competency-based local feature extraction for remote sensing images

    Science.gov (United States)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and on scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can easily be applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, namely robustness, spatial saliency and scale parameters, performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely scale-invariant feature transform (SIFT), speeded-up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate their capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.

  10. Astrophysical Information from Objective Prism Digitized Images: Classification with an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bratsolis Emmanuel

    2005-01-01

    Full Text Available Stellar spectral classification is not only a tool for labeling individual stars but is also useful in studies of stellar population synthesis. Extracting the physical quantities from the digitized spectral plates involves three main stages: detection, extraction, and classification of spectra. Low-dispersion objective prism images have been used and automated methods have been developed. The detection and extraction problems have been presented in previous works. In this paper, we present a classification method based on an artificial neural network (ANN. We make a brief presentation of the entire automated system and we compare the new classification method with the previously used method of maximum correlation coefficient (MCC. Digitized photographic material has been used here. The method can also be used on CCD spectral images.

  11. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
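    The Fourier shape descriptors mentioned above can be sketched compactly: a closed contour is encoded as complex numbers x + iy, and the magnitudes of its low-order Fourier coefficients form a signature insensitive to translation, rotation and scale. The harmonic count and normalization below are illustrative choices, not the authors' exact configuration:

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=4):
    """Translation/rotation/scale-insensitive shape signature of a closed contour."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    F = np.fft.fft(z - z.mean())            # subtracting the centroid: translation invariance
    # keep low-order positive and negative harmonics; magnitudes discard rotation/phase
    mags = np.abs(np.r_[F[1:n_coeffs + 1], F[-n_coeffs:]])
    return mags / (mags.max() + 1e-12)      # dividing by the largest harmonic: scale invariance

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]   # elongation shows up in the -1 harmonic
print(round(float(fourier_descriptors(ellipse)[-1]), 3))  # prints 0.333
```

A circle yields energy in a single harmonic, while an ellipse spreads energy into the opposite-sign harmonic, so the two shapes are separable from the descriptor vector alone.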

  12. Quantitative analysis and classification of AFM images of human hair.

    Science.gov (United States)

    Gurden, S P; Monteiro, V F; Longo, E; Ferreira, M M C

    2004-07-01

    The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance, but also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to the high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, consisting of hair samples from (a) untreated and bleached hair samples, and (b) the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis is used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were found to be classified correctly.

  13. Textural features for image classification

    Science.gov (United States)

    Haralick, R. M.; Dinstein, I.; Shanmugam, K.

    1973-01-01

    Description of some easily computable textural features based on gray-tone spatial dependences, and illustration of their application in category-identification tasks of three different kinds of image data - namely, photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used - one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
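    The gray-tone spatial-dependence idea can be sketched in a few lines. As a rough illustration, the snippet below builds a co-occurrence matrix for one pixel offset and derives three features in the spirit of the original paper (which defines fourteen; offsets and levels here are illustrative):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one (dx, dy) pixel offset."""
    img = np.asarray(image)
    h, w = img.shape
    P = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P += P.T                 # symmetrize, counting each pair in both directions
    return P / P.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    return {
        "contrast": float((P * (i - j) ** 2).sum()),
        "energy": float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

flat = np.zeros((4, 4), dtype=int)           # a perfectly uniform texture
print(texture_features(glcm(flat, levels=2))["contrast"])  # prints 0.0
```

A uniform patch has zero contrast, whereas a checkerboard concentrates all co-occurrence mass off the diagonal and maximizes it, which is exactly the discriminative behaviour the features exploit.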

  14. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. An open problem in GI systems is that a long acquisition time is required to reconstruct images with good visibility and signal-to-noise ratio (SNR). In this paper, we propose a new scheme, correspondence normalized ghost imaging based on compressive sensing (CCNGI), to obtain good performance with a shorter reconstruction time. In the scheme, we enhance the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and we reduce the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the CCNGI scheme improves image quality while reducing reconstruction time. For the two-grayscale 'double-slit' image, the mean square error (MSE) with the GI and normalized GI (NGI) schemes at 5000 measurements is 0.237 and 0.164, respectively, whereas the CCNGI scheme achieves 0.021 with 2500 measurements. For the eight-grayscale 'lena' object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098 with the GI and NGI schemes, respectively, while CCNGI reaches 16.198. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale 'double-slit' object. The quality of the images reconstructed with CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time.

  15. AUTOMATED CLASSIFICATION AND SEGREGATION OF BRAIN MRI IMAGES INTO IMAGES CAPTURED WITH RESPECT TO VENTRICULAR REGION AND EYE-BALL REGION

    Directory of Open Access Journals (Sweden)

    C. Arunkumar

    2014-05-01

    Full Text Available Magnetic Resonance Imaging (MRI) images of the brain are used for the detection of various brain diseases, including tumors. In such cases, classification of MRI images captured with respect to the ventricular and eye-ball regions helps in the automated location and classification of such diseases. The methods employed in this paper segregate the given brain MRI images into those captured with respect to the ventricular region and those captured with respect to the eye-ball region. First, the given MRI image of the brain is segmented using the Particle Swarm Optimization (PSO) algorithm, an optimized algorithm for MRI image segmentation. The algorithm proposed in the paper is then applied to the segmented image. It detects whether the image contains a ventricular region or an eye-ball region and classifies it accordingly.

  16. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and within the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments were conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  17. A Public Image Database for Benchmark of Plant Seedling Classification Algorithms

    DEFF Research Database (Denmark)

    Giselsson, Thomas Mosgaard; Nyholm Jørgensen, Rasmus; Jensen, Peter Kryger

    A database of images of approximately 960 unique plants belonging to 12 species at several growth stages is made publicly available. It comprises annotated RGB images with a physical resolution of roughly 10 pixels per mm. To standardise the evaluation of classification results obtained...

  18. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    Science.gov (United States)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development. The signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras nevertheless required rethinking in many places. New data products were introduced. In recent years, some activities took place that should lead to a better understanding of the cameras and the data produced by them. Several projects, like those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards will be presented. These include the standard for digital cameras, the standard for orthorectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar / SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments into a suite of implementation specifications for photogrammetry and remote sensing.

  19. The mass remote sensing image data management based on Oracle InterMedia

    Science.gov (United States)

    Zhao, Xi'an; Shi, Shaowei

    2013-07-01

    With the development of remote sensing technology, more and more image data are being acquired, and how to manage and apply these massive image data safely and efficiently has become an urgent problem. Based on the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then uses these components to realize the system's function modules. Finally, image data storage and management are successfully realized with VC and the Oracle InterMedia component.

  20. Classification of Active Microwave and Passive Optical Data Based on Bayesian Theory and Mrf

    Science.gov (United States)

    Yu, F.; Li, H. T.; Han, Y. S.; Gu, H. Y.

    2012-08-01

    A classifier based on Bayesian theory and Markov random field (MRF) is presented to classify active microwave and passive optical remote sensing data, which have demonstrated their respective advantages in the inversion of surface soil moisture content. In the method, the VV and VH polarizations of ASAR and all 7 TM bands are taken as the input of the classifier to obtain the class label of each pixel of the images. The model also validates the need for integrating TM and ASAR: the overall classification accuracy is 89.4%, an increase of 11.5% over classification with TM alone, illustrating that the synthesis of active microwave and passive optical remote sensing data is efficient and promising for classification.
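    The stacked-input Bayes idea can be illustrated on a toy scene. In the sketch below a Gaussian maximum-likelihood classifier operates on per-pixel feature vectors (standing in for stacked ASAR polarizations and TM bands), followed by one ICM sweep under a simple Potts prior as a stand-in for the MRF step; the class statistics, scene and smoothing weight are all invented:

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Per-class mean and diagonal variance from training pixels X of shape (n, bands)."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
    return classes, mu, var

def log_likelihood(X, mu, var):
    """log p(x | class) for each pixel and class, assuming independent bands."""
    d = X[:, None, :] - mu[None, :, :]
    return -0.5 * (d ** 2 / var + np.log(2 * np.pi * var)).sum(axis=2)

def icm_smooth(labels, ll, beta=100.0):
    """One ICM sweep: data term plus a Potts penalty per disagreeing 4-neighbour."""
    h, w, c = ll.shape
    out = labels.copy()
    for y in range(h):
        for x in range(w):
            energy = -ll[y, x].copy()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    energy += beta * (np.arange(c) != out[ny, nx])
            out[y, x] = int(np.argmin(energy))
    return out

rng = np.random.default_rng(1)
# Training pixels for two classes in a 2-band feature space.
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(5.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
classes, mu, var = fit_gaussian_classes(X, y)

# A 4x4 scene: left half class 0, right half class 1, plus one "noisy" pixel.
scene = np.zeros((4, 4, 2))
scene[:, 2:, :] = 5.0
scene[1, 0, :] = 5.0
ll = log_likelihood(scene.reshape(-1, 2), mu, var).reshape(4, 4, 2)
labels = ll.argmax(axis=2)                 # per-pixel maximum-likelihood labels
smoothed = icm_smooth(labels, ll)          # the isolated mislabel is smoothed away
print(int(labels[1, 0]), int(smoothed[1, 0]))
```

The MRF term trades the per-pixel evidence against neighbourhood agreement, which is what removes the salt-and-pepper errors typical of pixel-wise classification.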

  1. CLASSIFICATION OF ACTIVE MICROWAVE AND PASSIVE OPTICAL DATA BASED ON BAYESIAN THEORY AND MRF

    Directory of Open Access Journals (Sweden)

    F. Yu

    2012-08-01

    Full Text Available A classifier based on Bayesian theory and Markov random field (MRF) is presented to classify active microwave and passive optical remote sensing data, which have demonstrated their respective advantages in the inversion of surface soil moisture content. In the method, the VV and VH polarizations of ASAR and all 7 TM bands are taken as the input of the classifier to obtain the class label of each pixel of the images. The model also validates the need for integrating TM and ASAR: the overall classification accuracy is 89.4%, an increase of 11.5% over classification with TM alone, illustrating that the synthesis of active microwave and passive optical remote sensing data is efficient and promising for classification.

  2. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing, or compressive sensing, is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually much smaller in number than the samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.
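    The recovery-from-few-measurements idea can be demonstrated with a tiny orthogonal matching pursuit example; the signal, sensing matrix and sparsity level below are invented for illustration, and the book covers far more general settings:

```python
import numpy as np

def omp(A, b, k):
    """Greedy recovery of a k-sparse x from b = A @ x (orthogonal matching pursuit)."""
    residual, support = b.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, b, rcond=None)  # least squares on the support
        residual = b - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[[5, 40, 99]] = [1.0, -2.0, 1.5]     # a 3-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
b = A @ x_true                             # only 40 linear measurements of 128 samples
x_hat = omp(A, b, k)
print(bool(np.allclose(x_hat, x_true, atol=1e-8)))
```

With 40 random measurements of a 128-sample signal that is 3-sparse, the greedy solver recovers the signal essentially exactly, which is the core compressed-sensing phenomenon the book develops rigorously.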

  3. Information mining in remote sensing imagery

    Science.gov (United States)

    Li, Jiang

    The volume of remotely sensed imagery continues to grow at an enormous rate due to advances in sensor technology, and our capability for collecting and storing images has greatly outpaced our ability to analyze and retrieve information from them. This motivates the development of image information mining techniques, an interdisciplinary endeavor drawing upon expertise in image processing, databases, information retrieval, machine learning, and software design. This dissertation proposes and implements an extensive remote sensing image information mining (ReSIM) system prototype for mining useful information implicitly stored in remote sensing imagery. The system consists of three modules: an image processing subsystem, a database subsystem, and a visualization and graphical user interface (GUI) subsystem. Land cover and land use (LCLU) information corresponding to spectral characteristics is identified by supervised classification based on support vector machines (SVM) with automatic model selection, while textural features that characterize spatial information are extracted using Gabor wavelet coefficients. Within LCLU categories, textural features are clustered using an optimized k-means clustering approach to acquire a search-efficient space. The clusters are stored in an object-oriented database (OODB) with associated images indexed in an image database (IDB). A k-nearest neighbor search is performed using a query-by-example (QBE) approach. Furthermore, an automatic parametric contour tracing algorithm and an O(n) time piecewise linear polygonal approximation (PLPA) algorithm are developed for shape information mining of interesting objects within the image. A fuzzy object-oriented database based on the fuzzy object-oriented data (FOOD) model is developed to handle the fuzziness and uncertainty.
Three specific applications are presented: integrated land cover and texture pattern mining, shape information mining for change detection of lakes, and

  4. Land cover classification of VHR airborne images for citrus grove identification

    Science.gov (United States)

    Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.

    Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the first citrus fruit producer in Europe and the fourth in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (campaign 2006-2007). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough, and allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, whose processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
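    The agreement-based update rule described above can be sketched simply: a parcel's label is accepted automatically only when the classifiers' votes agree strongly enough, otherwise the parcel is left for expert photo-interpretation. The unanimity threshold and class names below are invented for illustration:

```python
import numpy as np

def combine_by_agreement(votes, min_agreement=1.0):
    """votes: array of labels, one per classifier. Returns (label, auto_accept)."""
    labels, counts = np.unique(votes, return_counts=True)
    best = int(np.argmax(counts))
    agreement = counts[best] / len(votes)       # fraction of classifiers agreeing
    return str(labels[best]), bool(agreement >= min_agreement)

# Three hypothetical classifiers (e.g. decision tree, ANN, SVM) voting on parcels:
print(combine_by_agreement(np.array(["citrus", "citrus", "citrus"])))  # ('citrus', True)
print(combine_by_agreement(np.array(["citrus", "other", "citrus"])))   # ('citrus', False)
```

Raising the agreement threshold trades automation coverage for accuracy, which is exactly the lever that lets the authors guarantee the accuracy policy makers demand.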

  5. Multifunctional PHPMA-Derived Polymer for Ratiometric pH Sensing, Fluorescence Imaging, and Magnetic Resonance Imaging.

    Science.gov (United States)

    Su, Fengyu; Agarwal, Shubhangi; Pan, Tingting; Qiao, Yuan; Zhang, Liqiang; Shi, Zhengwei; Kong, Xiangxing; Day, Kevin; Chen, Meiwan; Meldrum, Deirdre; Kodibagkar, Vikram D; Tian, Yanqing

    2018-01-17

    In this paper, we report the synthesis and characterization of a novel multimodality (MRI/fluorescence) probe for pH sensing and imaging. A multifunctional polymer was derived from poly(N-(2-hydroxypropyl)methacrylamide) (PHPMA) and integrated with a naphthalimide-based ratiometric fluorescence probe and a gadolinium-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid complex (Gd-DOTA complex). The polymer was characterized using UV-vis absorption spectrophotometry, fluorescence spectrofluorophotometry, magnetic resonance imaging (MRI), and confocal microscopy for optical and MRI-based pH sensing and cellular imaging. In vitro labeling of macrophage J774 and esophageal CP-A cell lines shows the polymer's ability to be internalized in the cells. The transverse relaxation time (T2) of the polymer was observed to be pH-dependent, whereas the spin-lattice relaxation time (T1) was not. The pH probe in the polymer shows a strong fluorescence-based ratiometric pH response with emission window changes, exhibiting blue emission under acidic conditions and green emission under basic conditions, respectively. This study provides new materials with multimodalities for pH sensing and imaging.

  6. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    Science.gov (United States)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  7. THE ILAC-PROJECT: SUPPORTING ANCIENT COIN CLASSIFICATION BY MEANS OF IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Kavelar

    2013-07-01

    Full Text Available This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  8. Unveiling Undercover Cropland Inside Forests Using Landscape Variables: A Supplement to Remote Sensing Image Classification.

    Science.gov (United States)

    Ayanu, Yohannes; Conrad, Christopher; Jentsch, Anke; Koellner, Thomas

    2015-01-01

    The worldwide demand for food has been increasing due to the rapidly growing global population, and agricultural lands have expanded to produce more food crops. The pattern of cropland varies among regions depending on the traditional knowledge of farmers and the availability of uncultivated land. Satellite images can be used to map cropland in open areas but have limitations for detecting undergrowth inside forests. Classification results are therefore often biased and need to be supplemented with field observations. Undercover cropland inside forests in the Bale Mountains of Ethiopia was assessed using field-observed percentage cover of land use/land cover classes together with topographic and location parameters. The most influential factors were identified using Boosted Regression Trees and used to map undercover cropland area. Elevation, slope, easterly aspect, distance to settlements, and distance to the national park were found to be the most influential factors determining undercover cropland area. When demand for growing food crops is very high but rights to clear forest are restricted, cultivation may take place undercover inside forests. Further research on the impact of undercover cropland on ecosystem services and on the challenges of sustainable management is thus essential.

  9. Study on fractal characteristics of remote sensing image in the typical volcanic uranium metallogenic areas

    International Nuclear Information System (INIS)

    Pan Wei; Ni Guoqiang; Li Hanbo

    2010-01-01

    Methods for computing the fractal dimension and multifractal spectrum of remote sensing images are briefly introduced. These fractal methods are used to study the characteristics of remote sensing images of the Xiangshan and Yuhuashan volcanic uranium metallogenic areas in southern China. The results indicate that the Xiangshan basin, in which many volcanic uranium deposits occur, has a larger fractal dimension of remote sensing image texture than the Yuhuashan basin, in which only two uranium ore occurrences exist, and that the multifractal spectrum of the Xiangshan basin clearly leans toward smaller singularity indices than that of the Yuhuashan basin. The relation of the fractal dimension and multifractal singularity of remote sensing images to uranium metallogeny is discussed. The fractal dimension and multifractal singularity index of remote sensing images may be used to predict volcanic uranium metallogenic areas. (authors)
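
    The fractal dimension of image texture is commonly estimated by box counting: count the boxes of side s that contain foreground and fit the slope of log N(s) against log(1/s). Below is a minimal NumPy sketch of that idea; it is illustrative only, since the paper's exact texture computation and the multifractal spectrum are not specified here.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image."""
    counts = []
    h, w = binary.shape
    for s in sizes:
        # Trim so the image tiles evenly into s x s boxes.
        trimmed = binary[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        # Count boxes containing at least one foreground pixel.
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Slope of log N(s) versus log(1/s), fitted by least squares.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A completely filled region behaves as a 2-D set: dimension ~ 2.
print(round(box_counting_dimension(np.ones((64, 64), dtype=bool)), 2))  # → 2.0
```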

  10. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    Science.gov (United States)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching and co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images when integrated with affine-invariant feature detectors.
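
    The histogram-leveling step can be illustrated schematically: bins near the sub-solar azimuth (and the opposite peak 180° away) are attenuated by an inverted Gaussian. The 36-bin layout, sigma, and suppression strength below are hypothetical placeholders, not the paper's adaptively tuned values.

```python
import numpy as np

def suppress_orientation_histogram(hist, subsolar_deg, sigma_deg=30.0, strength=0.8):
    """Attenuate orientation-histogram bins near the sub-solar azimuth and its
    opposite direction with an inverted Gaussian, leveling the dual peaks."""
    hist = np.asarray(hist, dtype=float)
    n = hist.size
    centers = np.arange(n) * (360.0 / n)  # bin centers in degrees
    out = hist.copy()
    for peak in (subsolar_deg % 360.0, (subsolar_deg + 180.0) % 360.0):
        # Circular angular distance from each bin center to the peak.
        d = np.abs((centers - peak + 180.0) % 360.0 - 180.0)
        out = out * (1.0 - strength * np.exp(-(d ** 2) / (2.0 * sigma_deg ** 2)))
    return out

# Bins aligned with the assumed sub-solar azimuth (0 degrees) are suppressed most.
leveled = suppress_orientation_histogram(np.ones(36), subsolar_deg=0.0)
```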

  11. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    Science.gov (United States)

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work presents a novel analysis system for retinal image classification. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusen, fine drusen, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and a sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent, since the features were specifically tuned to the pathologies of the human eye.

  12. Mapping US Urban Extents from MODIS Data Using One-Class Classification Method

    Directory of Open Access Journals (Sweden)

    Bo Wan

    2015-08-01

    Full Text Available Urban areas are one of the most important components of human society. Their extents have been continuously growing during the last few decades. Accurate and timely measurements of the extents of urban areas can help in analyzing population densities and urban sprawls and in studying environmental issues related to urbanization. Urban extents detected from remotely sensed data are usually a by-product of land use classification results, and their interpretation requires a full understanding of land cover types. In this study, for the first time, we mapped urban extents in the continental United States using a novel one-class classification method, i.e., positive and unlabeled learning (PUL, with multi-temporal Moderate Resolution Imaging Spectroradiometer (MODIS data for the year 2010. The Defense Meteorological Satellite Program Operational Linescan System (DMSP-OLS night stable light data were used to calibrate the urban extents obtained from the one-class classification scheme. Our results demonstrated the effectiveness of the PUL algorithm in mapping large-scale urban areas from coarse remote-sensing images. The total accuracy of mapped urban areas was 92.9% and the kappa coefficient was 0.85. The use of DMSP-OLS night stable light data can significantly reduce false detection rates from bare land and cropland far from cities. Compared with traditional supervised classification methods, the one-class classification scheme can greatly reduce the effort involved in collecting training datasets, without losing predictive accuracy.
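
    Positive and unlabeled learning comes in several flavors. One classic formulation (Elkan and Noto, 2008; shown here only as an illustration, not necessarily the exact algorithm used in this study) trains a "nontraditional" classifier to separate labeled positives from unlabeled samples and then rescales its scores by a constant estimated on held-out positives:

```python
import numpy as np

def elkan_noto_correction(scores_unlabeled, scores_heldout_positive):
    """PU-learning score correction: a classifier trained on positive-vs-unlabeled
    data underestimates P(y=1|x) by a constant factor c = E[s(x) | x positive],
    which is estimated on held-out labeled positives."""
    c = float(np.mean(scores_heldout_positive))
    return np.clip(np.asarray(scores_unlabeled, dtype=float) / c, 0.0, 1.0)

# If held-out positives score ~0.5 on average, a raw score of 0.4 maps to 0.8.
print(elkan_noto_correction([0.4, 0.1], [0.5, 0.5]))  # → [0.8 0.2]
```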

  13. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which show that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
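
    The DoLP maps used for the metal/dielectric classification follow from the Stokes parameters. Assuming four polarizer orientations (0°, 45°, 90°, 135°), which is a common acquisition scheme rather than a detail taken from this paper, a minimal sketch is:

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Compute the DoLP map from four intensity images taken at polarizer
    angles 0, 45, 90 and 135 degrees, via the linear Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # diagonal components
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

# Fully linearly polarized light at 0 degrees gives DoLP = 1.
print(degree_of_linear_polarization(np.array([1.0]), np.array([0.5]),
                                    np.array([0.0]), np.array([0.5])))  # → [1.]
```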

  14. Edge Detection from High Resolution Remote Sensing Images using Two-Dimensional log Gabor Filter in Frequency Domain

    International Nuclear Information System (INIS)

    Wang, K; Yu, T; Meng, Q Y; Wang, G K; Li, S P; Liu, S H

    2014-01-01

    Edges are vital features for describing the structural information of images, especially high spatial resolution remote sensing images. Edge features can be used to define the boundaries between different ground objects in high spatial resolution remote sensing images. Thus edge detection is important in remote sensing image processing. Even though many edge detection algorithms have been proposed, it is difficult to extract edge features from high spatial resolution remote sensing images containing complex ground objects. This paper introduces a novel method to detect edges in high spatial resolution remote sensing images based on the frequency domain. First, the image is Fourier transformed by FFT to obtain the magnitude spectrum (frequency image). Then, the frequency spectrum is analyzed using radius and angle sampling. Next, a two-dimensional log Gabor filter with optimal parameters is designed according to the result of the spectrum analysis. Finally, the dot product of the Fourier transform result and the log Gabor filter is inverse Fourier transformed to obtain the detected edges. The experimental results show that the proposed algorithm detects edge features in high resolution remote sensing images effectively.
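
    A two-dimensional log-Gabor filter is defined directly in the frequency domain as a Gaussian on a log-frequency axis multiplied by a Gaussian in orientation. The sketch below builds such a filter and applies it by spectral multiplication; the parameter values are illustrative defaults, not the optimal parameters the paper derives from its spectrum analysis.

```python
import numpy as np

def log_gabor_filter(shape, f0=0.1, sigma_f=0.55, theta0=0.0, sigma_theta=np.pi / 8):
    """Build a 2-D log-Gabor filter in the (fftshifted) frequency domain."""
    rows, cols = shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.hypot(fx, fy)
    radius[rows // 2, cols // 2] = 1.0  # avoid log(0) at DC
    # Radial part: Gaussian on a logarithmic frequency axis centered at f0.
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
    radial[rows // 2, cols // 2] = 0.0  # log-Gabor has zero DC response
    # Angular part: Gaussian in orientation centered at theta0.
    angle = np.arctan2(fy, fx)
    d_theta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
    angular = np.exp(-(d_theta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

def filter_image(image, lg):
    """Multiply the image spectrum by the filter and invert the FFT."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * lg)))
```

    Because the filter's DC response is zero, a constant image is mapped to (numerically) zero, which is the expected behavior for an edge-sensitive band-pass filter.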

  15. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. First, an aspect sector is determined whose center is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Second, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Third, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach is able to capture the local aspect characteristics of microwave images effectively, thereby improving classification performance.
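
    The final step, assigning the class whose dictionary yields the minimum reconstruction error under a non-negative representation, can be sketched with plain non-negative least squares from SciPy. This is only the classification skeleton: the ℓ1 regularization and the dynamic, aspect-selected dictionary of the actual algorithm are omitted.

```python
import numpy as np
from scipy.optimize import nnls

def classify_min_residual(y, dictionaries):
    """Represent test sample y under each class dictionary with non-negative
    least squares and return the class index with minimum reconstruction error."""
    errors = []
    for D in dictionaries:
        coef, _ = nnls(D, y)                      # coef >= 0 elementwise
        errors.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(errors))

# Two toy class dictionaries; y lies in the span of class 0's atoms.
D0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D1 = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
y = np.array([0.7, 0.3, 0.0])
print(classify_min_residual(y, [D0, D1]))  # → 0
```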

  16. Efficient HIK SVM learning for image classification.

    Science.gov (United States)

    Wu, Jianxin

    2012-10-01

    Histograms are used in almost every aspect of image processing and computer vision, from visual descriptors to image representations. Histogram intersection kernel (HIK) and support vector machine (SVM) classifiers are shown to be very effective in dealing with histograms. This paper presents contributions concerning HIK SVM for image classification. First, we propose intersection coordinate descent (ICD), a deterministic and scalable HIK SVM solver. ICD is much faster than, and has similar accuracies to, general purpose SVM solvers and other fast HIK SVM training methods. We also extend ICD to the efficient training of a broader family of kernels. Second, we show an important empirical observation that ICD is not sensitive to the C parameter in SVM, and we provide some theoretical analyses to explain this observation. ICD achieves high accuracies in many problems, using its default parameters. This is an attractive property for practitioners, because many image processing tasks are too large to choose SVM parameters using cross-validation.
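
    The histogram intersection kernel itself is simple: for two histograms it sums the bin-wise minima. A NumPy sketch for computing the full kernel matrix follows; this illustrates the kernel only, not the paper's ICD solver, which is a training algorithm built on top of it.

```python
import numpy as np

def hik(X, Z):
    """Histogram intersection kernel matrix: K[i, j] = sum_d min(X[i, d], Z[j, d])."""
    # Broadcasting: (n, 1, d) vs (1, m, d) -> (n, m, d), then sum over bins.
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(axis=2)

# Two L1-normalized 2-bin histograms; a histogram's self-intersection is 1.
a = np.array([[0.2, 0.8],
              [0.5, 0.5]])
print(hik(a, a))
```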

  17. PROMISE: parallel-imaging and compressed-sensing reconstruction of multicontrast imaging using SharablE information.

    Science.gov (United States)

    Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun

    2015-02-01

    A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. The multicontrast scheme requires long scanning times. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans. However, several problems in existing methods remain unsolved. The target of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. This study proposes to use manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance the following reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.

  18. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    Science.gov (United States)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have numerous advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly identify objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning methods are superior. This paper explains the research of the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.

  19. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between the weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments over eight emotional categories and performs better than conventional methods. Feature selection reduces the computational cost of global feature extraction by about 50% while improving classification performance.
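
    The correlation-based selection over learned weight vectors can be illustrated with a greedy filter that keeps a vector only if it is not strongly correlated with any vector already kept. The threshold below is a hypothetical value, not the one used in the paper.

```python
import numpy as np

def select_uncorrelated_weights(W, threshold=0.95):
    """Greedily keep rows of W whose absolute Pearson correlation with every
    already-kept row stays below the threshold; return the kept indices."""
    kept = []
    for i in range(W.shape[0]):
        if all(abs(np.corrcoef(W[i], W[j])[0, 1]) < threshold for j in kept):
            kept.append(i)
    return kept

W = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.1],   # nearly collinear with row 0 -> dropped
              [3.0, -1.0, 0.5]])
print(select_uncorrelated_weights(W))  # → [0, 2]
```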

  20. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-01-01

    Full Text Available An Intensified Charge-Coupled Device (ICCD image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics. (a) Different from the independent identically distributed (i.i.d. noise in natural images, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) The pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapped patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in the pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserving sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process with changed parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the reconstructed image by merging all the generated images into one. Experiments are conducted on an ICCD sensing image dataset, which verifies the scheme's performance in removing the randomly clustered noise and preserving the real structure information in the ICCD sensing image.

  1. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM. However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN, we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  2. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications could be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry into cells, and virus classification; these applications can help complement the opinion of medical experts. Although many surveys have been presented in medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of a typical image processing pipeline: segmentation, feature extraction and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey about cell identification using different image processing techniques.

  3. Novelty detection for breast cancer image classification

    Science.gov (United States)

    Cichosz, Pawel; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold

    2016-09-01

    Using classification learning algorithms for medical applications may require not only refined model creation techniques and careful unbiased model evaluation, but also detecting the risk of misclassification at the time of model application. This is addressed by novelty detection, which identifies instances for which the training set is not sufficiently representative and for which it may be safer to restrain from classification and request a human expert diagnosis. The paper investigates two techniques for isolated instance identification, based on clustering and one-class support vector machines, which represent two different approaches to multidimensional outlier detection. The prediction quality for isolated instances in breast cancer image data is evaluated using the random forest algorithm and found to be substantially inferior to the prediction quality for non-isolated instances. Each of the two techniques is then used to create a novelty detection model which can be combined with a classification model and used at the time of prediction to detect instances for which the latter cannot be reliably applied. Novelty detection is demonstrated to improve random forest prediction quality and argued to deserve further investigation in medical applications.
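
    A minimal distance-based novelty gate in the spirit of the clustering approach (a simplified stand-in, not the paper's exact clustering or one-class SVM models) flags a test instance as isolated when its distance to the nearest training sample exceeds a quantile of the training set's own nearest-neighbor distances:

```python
import numpy as np

def novelty_gate(train_X, x, q=0.95):
    """Return True if x is 'isolated', i.e. its nearest-neighbor distance to the
    training set exceeds the q-quantile of within-training NN distances."""
    # Pairwise distances within the training set, excluding self-distances.
    d = np.linalg.norm(train_X[:, None, :] - train_X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    threshold = np.quantile(d.min(axis=1), q)
    return bool(np.linalg.norm(train_X - x, axis=1).min() > threshold)

# A tight unit square of training points: a central query is accepted for
# classification, a far-away query is deferred to a human expert.
train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(novelty_gate(train, np.array([0.5, 0.5])))  # → False
print(novelty_gate(train, np.array([5.0, 5.0])))  # → True
```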

  4. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis

    Directory of Open Access Journals (Sweden)

    Quanlong Feng

    2015-01-01

    Full Text Available Unmanned aerial vehicle (UAV remote sensing has great potential for vegetation mapping in complex urban landscapes due to the ultra-high resolution imagery acquired at low altitudes. Because of payload capacity restrictions, off-the-shelf digital cameras are widely used on medium and small sized UAVs. The limitation of low spectral resolution in digital cameras for vegetation mapping can be reduced by incorporating texture features and robust classifiers. Random Forest has been widely used in satellite remote sensing applications, but its usage in UAV image classification has not been well documented. The objectives of this paper were to propose a hybrid method using Random Forest and texture analysis to accurately differentiate land covers of urban vegetated areas, and to analyze how classification accuracy changes with texture window size. Six least correlated second-order texture measures were calculated at nine different window sizes and added to the original Red-Green-Blue (RGB images as ancillary data. A Random Forest classifier consisting of 200 decision trees was used for classification in the spectral-textural feature space. Results indicated the following: (1) Random Forest outperformed the traditional Maximum Likelihood classifier and showed similar performance to object-based image analysis in urban vegetation classification; (2) the inclusion of texture features improved classification accuracy significantly; (3) classification accuracy followed an inverted-U relationship with texture window size. The results demonstrate that UAV provides an efficient and ideal platform for urban vegetation mapping. The hybrid method proposed in this paper shows good performance in differentiating land covers of urban vegetated areas. The drawbacks of off-the-shelf digital cameras can be reduced by adopting Random Forest and texture analysis at the same time.
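
    The second-order texture measures referred to above are derived from a gray-level co-occurrence matrix (GLCM). A compact sketch of one offset's GLCM and three common measures follows; the sliding-window computation and the six specific measures used in the paper are omitted.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset.
    gray: integer image with values in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[gray[y, x], gray[y + dy, x + dx]] += 1
    g = g + g.T                      # make counts symmetric
    return g / g.sum()

def glcm_features(p):
    """Three classic second-order texture measures from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
        "energy": float((p ** 2).sum()),
    }

# A constant patch has zero contrast and maximal energy.
feats = glcm_features(glcm(np.zeros((4, 4), dtype=int)))
print(feats["contrast"], feats["energy"])  # → 0.0 1.0
```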

  5. Remote Sensing Scene Classification Based on Convolutional Neural Networks Pre-Trained Using Attention-Guided Sparse Filters

    Directory of Open Access Journals (Sweden)

    Jingbo Chen

    2018-02-01

    Full Text Available Semantic-level land-use scene classification is a challenging problem, in which deep learning methods, e.g., convolutional neural networks (CNNs, have shown remarkable capacity. However, a lack of sufficient labeled images has proved a hindrance to increasing the land-use scene classification accuracy of CNNs. Aiming at this problem, this paper proposes a CNN pre-training method under the guidance of a human visual attention mechanism. Specifically, a computational visual attention model is used to automatically extract salient regions in unlabeled images. Then, sparse filters are adopted to learn features from these salient regions, with the learnt parameters used to initialize the convolutional layers of the CNN. Finally, the CNN is further fine-tuned on labeled images. Experiments are performed on the UCMerced and AID datasets, which show that when combined with a demonstrative CNN, our method can achieve 2.24% higher accuracy than a plain CNN and can obtain an overall accuracy of 92.43% when combined with AlexNet. The results indicate that the proposed method can effectively improve CNN performance using easy-to-access unlabeled images and thus will enhance the performance of land-use scene classification especially when a large-scale labeled dataset is unavailable.

  6. Aptamer-assembled nanomaterials for fluorescent sensing and imaging

    Science.gov (United States)

    Lu, Danqing; He, Lei; Zhang, Ge; Lv, Aiping; Wang, Ruowen; Zhang, Xiaobing; Tan, Weihong

    2017-01-01

    Aptamers, which are selected in vitro by a technology known as the systematic evolution of ligands by exponential enrichment (SELEX), represent a crucial recognition element in molecular sensing. With advantages such as good biocompatibility, facile functionalization, and special optical and physical properties, various nanomaterials can protect aptamers from enzymatic degradation and nonspecific binding in living systems and thus provide a preeminent platform for biochemical applications. Coupling aptamers with various nanomaterials offers many opportunities for developing highly sensitive and selective sensing systems. Here, we focus on the recent applications of aptamer-assembled nanomaterials in fluorescent sensing and imaging. Different types of nanomaterials are examined along with their advantages and disadvantages. Finally, we look toward the future of aptamer-assembled nanomaterials.

  7. Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery

    Science.gov (United States)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2016-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach to acquire land use change information. However, topographic effects are commonly present in remotely sensed imagery and degrade land use classification. This research selected summer and winter scenes of Landsat-5 TM from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized into 4 groups, i.e. forest, grassland, agriculture and river, using K-means classification. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall accuracy of the classification increased from 68.0% to 74.5%. The average CN estimated from remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize topographic effects in satellite remote sensing data before estimating the CN.
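
    For reference, the standard SCS-CN runoff estimate that the CN feeds into is, in SI units with the usual initial abstraction Ia = 0.2S: S = 25400/CN - 254 (mm) and Q = (P - 0.2S)^2 / (P + 0.8S) for P > 0.2S, else Q = 0. A direct implementation:

```python
def scs_runoff(p_mm, cn):
    """SCS Curve Number direct runoff depth (mm) for rainfall p_mm (mm),
    using the conventional initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                 # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# CN = 100 (impervious) turns all rainfall into runoff; a low CN absorbs
# a small storm entirely.
print(scs_runoff(50.0, 100.0))  # → 50.0
print(scs_runoff(1.0, 50.0))    # → 0.0
```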

  8. Incorporating Open Source Data for Bayesian Classification of Urban Land Use From VHR Stereo Images

    NARCIS (Netherlands)

    Li, Mengmeng; De Beurs, Kirsten M.; Stein, Alfred; Bijker, Wietske

    2017-01-01

    This study investigates the incorporation of open source data into a Bayesian classification of urban land use from very high resolution (VHR) stereo satellite images. The adopted classification framework starts from urban land cover classification, proceeds to building-type characterization, and

  9. Remote sensing data in Rangeland assessment and monitoring

    International Nuclear Information System (INIS)

    Hamid, Amna Ahmed; Ali, Mohamed M.

    1999-01-01

    The main objective of this paper is to illustrate the potential of remote sensing data in the study and monitoring of environmental changes in western Sudan, where a considerable part of the area is under rangeland use. Data from the NOAA satellite AVHRR sensor as well as Thematic Mapper (TM) were used to assess the environment of the area during 1982-1997. The AVHRR data were processed into vegetation index (NDVI) images. Image analysis and classification were done using the image display and analysis (IDA) GIS method to study vegetation condition as a time series. The obtained information was compared with field observations, and the results showed high correlation. The work concluded the following: NDVI images and Thematic Mapper data proved to be efficient for environmental change analysis; NOAA AVHRR satellite data can provide an early-warning indicator of an approaching disaster; and remote sensing integrated into a GIS can contribute effectively to improved land management through better understanding of environmental variability. (Author)
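
    The NDVI underlying the time-series images is the standard normalized difference of near-infrared and red reflectances (for AVHRR, channels 2 and 1 respectively):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    Dense, healthy vegetation pushes the index toward +1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-12)

# A vegetated pixel: strong NIR reflectance, low red reflectance.
print(ndvi([0.5], [0.05]))  # NDVI ≈ 0.82
```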

  10. Low-cost multispectral imaging for remote sensing of lettuce health

    Science.gov (United States)

    Ren, David D. W.; Tripathi, Siddhant; Li, Larry K. B.

    2017-01-01

    In agricultural remote sensing, unmanned aerial vehicle (UAV) platforms offer many advantages over conventional satellite and full-scale airborne platforms. One of the most important advantages is their ability to capture high spatial resolution images (1-10 cm) on-demand and at different viewing angles. However, UAV platforms typically rely on the use of multiple cameras, which can be costly and difficult to operate. We present the development of a simple low-cost imaging system for remote sensing of crop health and demonstrate it on lettuce (Lactuca sativa) grown in Hong Kong. To identify the optimal vegetation index, we recorded images of both healthy and unhealthy lettuce, and used them as input in an expectation maximization cluster analysis with a Gaussian mixture model. Results from unsupervised and supervised clustering show that, among four widely used vegetation indices, the blue wide-dynamic range vegetation index is the most accurate. This study shows that it is readily possible to design and build a remote sensing system capable of determining the health status of lettuce at a reasonably low cost (lettuce growers.
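
    The clustering step described above can be sketched as a hand-rolled one-dimensional EM fit of a two-component Gaussian mixture over an index image. The blue wide-dynamic range vegetation index formula and the weighting factor alpha below follow one common WDRVI-style formulation with the blue band substituted; they are assumptions, not the authors' exact definitions:

```python
import numpy as np

def bwdrvi(nir, blue, alpha=0.1):
    """Blue wide-dynamic range vegetation index (one common formulation;
    the weighting factor alpha ~0.1 is an assumption, not from the paper)."""
    nir, blue = np.asarray(nir, float), np.asarray(blue, float)
    return (alpha * nir - blue) / (alpha * nir + blue)

def em_two_gaussians(x, iters=50):
    """Tiny 1-D EM fit of a two-component Gaussian mixture
    (e.g., healthy vs. unhealthy index values)."""
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75])                  # spread-apart initial means
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-point responsibilities under each component
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: reweighted moment updates
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sd, w
```

In practice a library mixture model (e.g., a full multivariate GMM) would replace this sketch; the point is only that two well-separated index populations are recoverable without labels.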

  11. Watermarking-based protection of remote sensing images: requirements and possible solutions

    Science.gov (United States)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella

    2001-12-01

    Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for protection of the image products from non-authorized use. Such a need is crucial especially because the Internet and other public/private networks have become preferred means of data exchange. A central issue arising when dealing with digital image distribution is copyright protection. This problem has been largely addressed by resorting to watermarking technology. A question that naturally arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; ii) analysis of the state of the art, and performance evaluation of existing algorithms in terms of the requirements at the previous point.

  12. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling's T2 limit estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near-infrared hyperspectral images. The results showed that the approach used here can be successfully applied for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and region-of-interest scheme validation, is also presented.
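
    As a rough sketch of the unmixing step, scikit-learn's FastICA can decompose a (pixels × bands) matrix into independent component scores that downstream classification can operate on. The component count and synthetic data below are illustrative, not the paper's NIR measurements:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 20
# Three latent, non-Gaussian "spectral sources" mixed into 20 bands
sources = rng.laplace(size=(n_pixels, 3))
mixing = rng.normal(size=(3, n_bands))
X = sources @ mixing + 0.01 * rng.normal(size=(n_pixels, n_bands))

ica = FastICA(n_components=3, random_state=0)
scores = ica.fit_transform(X)   # per-pixel independent component scores
```

The per-pixel scores would then feed the cross-validated classification and jack-knife uncertainty steps the abstract describes.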

  13. Fine-grained leukocyte classification with deep residual learning for microscopic images.

    Science.gov (United States)

    Qin, Feiwei; Gao, Nannan; Peng, Yong; Wu, Zizhao; Shen, Shuying; Grudtsin, Artur

    2018-08-01

    Leukocyte classification and cytometry have wide applications in the medical domain. Previous research usually exploits machine learning techniques to classify leukocytes automatically. However, constrained by the past development of machine learning techniques (extracting distinctive features from raw microscopic images is difficult, and the widely used SVM classifier has relatively few parameters to tune), these methods cannot efficiently handle fine-grained classification cases in which the white blood cells have up to 40 categories. Based on deep learning theory, a systematic study is conducted on finer leukocyte classification in this paper. A deep residual neural network based leukocyte classifier is constructed first, which can imitate the domain expert's cell recognition process and extract salient features robustly and automatically. Then the deep neural network classifier's topology is adjusted according to prior knowledge of the white blood cell test. After that, a microscopic image dataset with almost one hundred thousand labeled leukocytes belonging to 40 categories is built, and combined training strategies are adopted to give the designed classifier good generalization ability. The proposed deep residual neural network based classifier was tested on this microscopic image dataset with 40 leukocyte categories. It achieves top-1 accuracy of 77.80% and top-5 accuracy of 98.75% during the training procedure. The average accuracy on the test set is nearly 76.84%. This paper presents a fine-grained leukocyte classification method for microscopic images, based on deep residual learning theory and medical domain knowledge. Experimental results validate the feasibility and effectiveness of our approach. Extended experiments support that the fine-grained leukocyte classifier could be used in real medical applications to assist doctors in diagnosing diseases and significantly reduce human labor.
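
    The computational unit behind such a classifier is the residual block, y = relu(x + F(x)), whose identity shortcut lets very deep networks train stably. A toy fully-connected forward pass (the paper's actual network is convolutional; the shapes here are illustrative only):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Forward pass of a fully-connected residual block:
    y = relu(x + W2 @ relu(W1 @ x)).
    The skip connection adds the input back after the learned transform F(x)."""
    return relu(x + W2 @ relu(W1 @ x))
```

With all-zero weights the block reduces to relu(x), i.e., the shortcut alone, which is what makes stacking many such blocks safe: each block only needs to learn a residual correction.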

  14. Crop Type Classification Using Vegetation Indices of RapidEye Imagery

    Science.gov (United States)

    Ustuner, M.; Sanli, F. B.; Abdikan, S.; Esetlili, M. T.; Kurucu, Y.

    2014-09-01

    Cutting-edge remote sensing technology plays a significant role in managing natural resources as well as in many other Earth observation applications. Crop monitoring is one of these applications, since remote sensing provides accurate, up-to-date and cost-effective information about crop types at different temporal and spatial resolutions. In this study, the potential use of three different vegetation indices of RapidEye imagery for crop type classification, as well as the effect of each index on classification accuracy, was investigated. The Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Normalized Difference Red Edge Index (NDRE) are the three vegetation indices used in this study, since all of them incorporate the near-infrared (NIR) band. RapidEye imagery is in high demand and preferred for agricultural and forestry applications since it has red-edge and NIR bands. The study area is located in the Aegean region of Turkey. A Radial Basis Function (RBF) kernel was used for the Support Vector Machine (SVM) classification. The original bands of the RapidEye imagery were excluded and classification was performed with only the three vegetation indices. The contribution of each index to classification accuracy was also tested with single-band classification. The highest classification accuracy, 87.46%, was obtained using all three vegetation indices, higher than that of any dual combination of these indices. Results demonstrate that NDRE has the highest contribution to classification accuracy compared to the other vegetation indices, and that RapidEye imagery can yield satisfactory classification accuracy without the original bands.
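
    The three indices are standard normalized band-ratio formulas over per-band reflectance. A minimal implementation (the band values in the test are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: green band replaces red."""
    return (nir - green) / (nir + green)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge Index: red-edge band replaces red."""
    return (nir - red_edge) / (nir + red_edge)
```

Each index maps reflectance pairs into [-1, 1]; stacking the three per-pixel values gives the three-feature input used for the SVM classification described above.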

  15. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    Science.gov (United States)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and has provided great performance improvements for target detection and recognition in everyday scenes. We applied this technology to remote sensing target detection and propose an algorithm with an end-to-end deep network, which can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of the plane target and performs better in target detection than older methods.

  16. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    Science.gov (United States)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Human Epithelial-2 (HEp-2) cell image staining pattern classification has been widely used to identify autoimmune diseases through the anti-nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective and labor-intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction, with low accuracy. Besides, the scale of available benchmark datasets is small and not well suited to deep learning methods; this directly influences cell classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  17. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing a characteristic noise-like granular aspect that complicates image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel SAR images. The method was tested on L-band SAR data from the Advanced Land Observing Satellite (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land use patterns. The proposed algorithm moves a window over the image, estimating the probability density curve in different image components; a single input image thus generates a multi-component output. Initially, the multi-component data should be treated by noise-reduction methods such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPC). Both methods reduce noise and order the multi-component data in terms of image quality. In this paper, NAPC applied to the multi-components provided large reductions in noise levels, and color composites of the first NAPCs enhance the classification of different surface features. For the spectral classification, the Spectral Correlation Mapper and Minimum Distance classifiers were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  18. Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    Mao-Gui Hu

    2009-10-01

    Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in many applications, restricting higher spatial resolution analysis (e.g., intra-urban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in nature, and the self-similarity or self-affinity present in an image is useful for estimating details at scales larger and smaller than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate parameters of the information transfer function and the noise of the low-resolution image. Finally, a noise-free, spatial-resolution-enhanced image is generated by a fractal-coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is useful not only for remote sensing in investigating the Earth, but also for other images with multifractal characteristics.

  19. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high number of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the amount of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
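
    The matched filter referred to above is, in its standard spectral form, a background-whitened projection onto the target signature. A small sketch of that detector (the statistics below are synthetic, not CS-MUSI data):

```python
import numpy as np

def matched_filter_scores(X, target, mu, cov):
    """Spectral matched filter.

    score(x) = d^T C^-1 (x - mu) / (d^T C^-1 d),  with d = target - mu.
    Scores near 1 indicate target-like pixels, near 0 background-like.
    X is (pixels x bands); mu, cov are background mean and covariance.
    """
    Cinv = np.linalg.inv(cov)
    d = target - mu
    return (X - mu) @ Cinv @ d / (d @ Cinv @ d)
```

Thresholding the score map then yields the detection mask; the normalization by d^T C^-1 d makes the score scale-free, so it can be compared across the conventional and multiplexed data cubes.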

  20. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data are typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a subset of the pixels in the hyperspectral image and reconstructing the full image from these data. We present an integrative approach that performs spectrum peak-picking and m/z-image denoising simultaneously, whereas state-of-the-art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image, and propose an algorithm to solve our optimization problem based on proximal mappings. The paper concludes with numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)
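
    Proximal-mapping reconstruction of the kind described above reduces, in its simplest form, to iterative soft thresholding (ISTA) for an l1-regularized least-squares problem. A generic numpy sketch, not the paper's operator, regularizer, or data:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=300):
    """Iterative soft-thresholding for  min_x  ||Ax - y||^2 / 2 + lam * ||x||_1.

    Each step is a gradient step on the quadratic term followed by the
    proximal mapping of the l1 norm (elementwise soft threshold).
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox of lam*||.||_1
    return x
```

With fewer measurements than unknowns, the l1 term is what makes a sparse solution recoverable at all, which is the core of the compressed sensing argument.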

  1. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  2. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    Science.gov (United States)

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried underground. Manual inspection for surface defects in a pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification method for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function absorbs variation in feature values, while the backpropagation network, with its learning ability, provides good classification efficiency.
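
    The role of the fuzzy membership function, absorbing variation in a crack feature around a class prototype, can be sketched with Gaussian membership functions; the prototype values and the max-membership decision rule below are hypothetical stand-ins for the trained neuro-fuzzy network:

```python
import numpy as np

def gaussian_membership(x, center, width):
    """Degree in [0, 1] to which feature value(s) x belong to a fuzzy set
    centered at `center`; nearby values still get high membership."""
    return np.exp(-0.5 * ((np.asarray(x, float) - center) / width) ** 2)

def classify_defect(features, prototypes, width=1.0):
    """Assign the class whose prototype has the highest mean membership.
    A simplified stand-in for the fuzzification + decision stage."""
    scores = {c: gaussian_membership(features, p, width).mean()
              for c, p in prototypes.items()}
    return max(scores, key=scores.get)
```

Because membership degrades smoothly with distance, a slightly noisy feature vector still lands in the right class, which is exactly the robustness the abstract attributes to the fuzzy front end.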

  3. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  4. Microwave and millimeter-wave remote sensing for security applications

    CERN Document Server

    Nanzer, Jeffrey

    2012-01-01

    Microwave and millimeter-wave remote sensing techniques are fast becoming a necessity in many aspects of security as detection and classification of objects or intruders becomes more difficult. This groundbreaking resource offers you expert guidance in this burgeoning area. It provides you with a thorough treatment of the principles of microwave and millimeter-wave remote sensing for security applications, as well as practical coverage of the design of radiometer, radar, and imaging systems. You learn how to design active and passive sensors for intruder detection, concealed object detection,

  5. Agricultural crop mapping and classification by Landsat images to evaluate water use in the Lake Urmia basin, North-west Iran

    Science.gov (United States)

    Fazel, Nasim; Norouzi, Hamid; Madani, Kaveh; Kløve, Bjørn

    2016-04-01

    Lake Urmia, once one of the largest hypersaline lakes in the world, has lost more than 90% of its surface area, mainly due to the intensive expansion of agriculture, which uses more than 90% of all water in the region. Access to accurate and up-to-date information on the extent and distribution of individual crop types, associated with land use changes and practices, has significant value in intensively farmed regions. Explicit information on croplands can be useful for sustainable water resources, land and agriculture planning and management. Remote sensing has proven to be a more cost-effective alternative to traditional statistically-based ground surveys of crop coverage, which are costly and provide insufficient information. Satellite images along with ground surveys can provide the necessary information on the spatial coverage and spectral responses of croplands for sustainable agricultural management. This study strives to differentiate crop types and agricultural practices to achieve a more detailed crop map of the Lake Urmia basin. The mapping approach consists of a two-stage supervised classification of multi-temporal, multi-spectral high-resolution images obtained from the Landsat imagery archive. Irrigated and non-irrigated croplands and orchards were separated from other major land covers (urban, ranges, bare lands, and water) in the region by means of the maximum likelihood supervised classification method. Field data collected during 2015, land use maps generated in 2007 and Google Earth comparisons were used to form a training data set for the supervised classification. In the second stage, non-agricultural lands were masked and the supervised classification was applied to the Landsat image stack to identify seven major crop types in the region (wheat and barley, beetroot, corn, sunflower, alfalfa, vineyards, and apple orchards). The obtained results can be of significant value to the Urmia Lake restoration efforts.
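
    Maximum likelihood classification assigns each pixel to the class whose fitted per-class Gaussian gives the highest log-likelihood. A compact numpy sketch of that first-stage classifier (synthetic two-band "spectra", not the Landsat bands used above):

```python
import numpy as np

def fit_mlc(X, y):
    """Per-class Gaussian statistics (mean, covariance) from training pixels."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def predict_mlc(X, stats):
    """Assign each row of X to the class maximizing the Gaussian log-likelihood."""
    classes = sorted(stats)
    ll = []
    for c in classes:
        mu, cov = stats[c]
        Cinv = np.linalg.inv(cov)
        d = X - mu
        # log-likelihood up to a shared constant: -(Mahalanobis + log|C|) / 2
        ll.append(-0.5 * (np.einsum('ij,jk,ik->i', d, Cinv, d)
                          + np.log(np.linalg.det(cov))))
    return np.array(classes)[np.argmax(ll, axis=0)]
```

The log-determinant term penalizes classes with diffuse covariance, which is what distinguishes MLC from a plain minimum-distance classifier.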

  6. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    Science.gov (United States)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused and the dictionary is constructed from the training samples, with the remaining terms clustered to obtain the complete dictionary by iterative processing at each step. Second, a self-adaptive weighted-coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused image. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  7. Diffusion-weighted single shot echo planar imaging of colorectal cancer using a sensitivity-encoding technique

    International Nuclear Information System (INIS)

    Nasu, Katsuhiro; Kuroki, Yoshihumi; Murakami, Koji; Nawano, Shigeru; Kuroki, Seiko; Moriyama, Noriyuki

    2004-01-01

    We wanted to determine the feasibility of diffusion-weighted single-shot echo planar imaging using a sensitivity-encoding technique (SENSE-DWI) in depicting colorectal cancer. Forty-two patients with pathologically proven sigmoid colon or rectal cancer were examined with T2-weighted turbo spin echo (TSE) imaging and SENSE-DWI. No bowel preparation was performed before examination. The b-factors used in SENSE-DWI were 0 and 1000 s/mm². In 10 randomly selected cases, images with b-factors of 250 and 500 s/mm² were also obtained. The reduction factor of SENSE was 2.0 in all sequences. Two radiologists evaluated the obtained images for tumor detectability, image distortion and misregistration of the tumors. The apparent diffusion coefficients (ADCs) of the tumors and of urine in the urinary bladder were measured in each patient to evaluate the correlation between ADC and the pathological classification of each tumor. All tumors were depicted hyperintensely on SENSE-DWI. Even though single-shot echo planar imaging (EPI) was used, image distortion and misregistration were not pronounced, owing to the simultaneous use of SENSE. On SENSE-DWI with a b-factor of 1000 s/mm², the normal colon wall and feces were always hypointense and easily differentiated from the tumors. The mean ADC value of the tumors was 1.02±0.1 × 10⁻³ mm²/s. No overt correlation could be found between ADC and the pathological classification of each tumor. SENSE-DWI is a feasible method for depicting colorectal cancer, providing strong contrast among colorectal cancers, the normal rectal wall and feces. (authors)
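
    With two b-values (0 and 1000 s/mm²), the ADC follows from the monoexponential signal decay S_b = S_0 · exp(−b · ADC). A one-line computation (the signal values in the test are invented for illustration):

```python
import numpy as np

def adc_map(s_b0, s_b1000, b=1000.0):
    """Apparent diffusion coefficient (mm^2/s) from signals at b=0 and b=1000 s/mm^2.
    Works elementwise on arrays, so it yields a per-pixel ADC map."""
    return np.log(np.asarray(s_b0, float) / np.asarray(s_b1000, float)) / b
```

Restricted diffusion in tumor tissue lowers the ADC, which is why tumors stay bright at b = 1000 s/mm² while free water attenuates.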

  8. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    Science.gov (United States)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of inputs (water, fertilisers, fuel, etc.) required, and the output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites providing very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image were used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, Maximum Likelihood Classification (MLC) was applied using five different approaches combining the R, G, B, NIR, and panchromatic bands. The combination of bands used significantly influenced some of the quality indexes used in this work, and the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different numbers of grey levels were added as a fifth band to the R, G, B, NIR images before carrying out the MLC. The inclusion of texture information did not improve classification quality. For classifications with texture information, the best accuracies were found in both images for the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but was around 15×15. With regard to grey levels, the optimum was 128.
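
    The angular second moment named above is a grey-level co-occurrence matrix (GLCM) texture feature. A minimal horizontal-offset version; the 128-level quantisation mirrors the reported optimum, while the single fixed offset and normalisation details are generic assumptions:

```python
import numpy as np

def glcm_asm(img, levels=128):
    """Angular second moment (energy) of the GLCM for horizontal neighbours.

    Quantises the image to `levels` grey levels, counts co-occurring
    (left, right) pixel pairs, normalises to probabilities p, and
    returns sum(p^2): 1.0 for perfectly uniform texture, small for noise.
    """
    img = np.asarray(img, float)
    q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int), levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # horizontal pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1.0)
    p = glcm / glcm.sum()
    return float((p ** 2).sum())
```

Evaluated over a sliding window, this statistic becomes the per-pixel texture band that the study appended to the R, G, B, NIR stack.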

  9. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Most multi-scale segmentation algorithms are not aimed at high resolution remote sensing images and have difficulty communicating and using information between layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain edge weights. According to this criterion, initial segmentation objects of the color images are obtained with Kruskal's minimum spanning tree algorithm. Finally, the segmented images are produced by an adaptive Mumford-Shah region merging rule combined with spectral and texture information. The proposed method is evaluated on simulated images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy, and is slightly inferior to FNEA in efficiency.

  10. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    Science.gov (United States)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images, called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms such as ISODATA and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm, a modification of the well-known CURFIL software package. The use of this type of algorithm for image classification is widespread today thanks to its precision and ease of training. Our implementation of Random Forest was developed on the CUDA platform, which enables us to exploit several models of NVIDIA graphics processing units for general-purpose computing tasks such as image classification. Alongside CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that this new algorithm substantially outperforms the unsupervised algorithms previously implemented in Hypergim, in both runtime and classification precision.
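
    The supervised workflow (user-marked training samples → train → classify unseen pixels) can be sketched with scikit-learn's CPU Random Forest; this is a stand-in for the paper's CUDA/CURFIL implementation, with synthetic four-band pixel features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Synthetic 4-band "pixels" for two land-cover classes (training samples
# would come from the user's marked regions in the web interface)
X_train = np.vstack([rng.normal(0.2, 0.05, (n, 4)),
                     rng.normal(0.7, 0.05, (n, 4))])
y_train = np.array([0] * n + [1] * n)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify two unseen pixels, one drawn near each class centre
pred = clf.predict(np.array([[0.2, 0.2, 0.2, 0.2],
                             [0.7, 0.7, 0.7, 0.7]]))
```

Because each tree trains and predicts independently, the ensemble parallelizes naturally, which is what the GPU implementation in the paper exploits.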

  11. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Science.gov (United States)

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel are taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were treated as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by a linear model. In contrast to current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features and explores the hidden information that contributes to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested on two well-known standard hyperspectral datasets, University of Pavia and Indian Pines. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
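    The contrast the abstract draws, stacking versus fusing into a single feature, can be made concrete with a toy sketch. The spatial feature and the weight alpha below are stand-ins for the paper's data-field potentials, not its actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 100, 8

spectral = rng.random((n_pixels, n_bands))
# Stand-in spatial feature per band, e.g. a smoothed neighbourhood response
# (the paper derives this from data-field potentials, not reproduced here).
spatial = spectral + rng.normal(0, 0.05, spectral.shape)

# Stacking doubles the dimensionality; linear fusion keeps it fixed.
stacked = np.hstack([spectral, spatial])          # (100, 16)
alpha = 0.6                                       # illustrative fusion weight
fused = alpha * spectral + (1 - alpha) * spatial  # (100, 8)
```

    The fused representation keeps the original dimensionality while letting both domains contribute to every component, which is the structural difference from simple stacking.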

  12. Detection of High-Density Crowds in Aerial Images Using Texture Classification

    Directory of Open Access Journals (Sweden)

    Oliver Meynberg

    2016-06-01

    Full Text Available Automatic crowd detection in aerial images is certainly a useful source of information to prevent crowd disasters in large complex scenarios of mass events. A number of publications employ regression-based methods for crowd counting and crowd density estimation. However, these methods work only when a correct manual count is available to serve as a reference. Therefore, it is the objective of this paper to detect high-density crowds in aerial images, where counting- or regression-based approaches would fail. We compare two texture-classification methodologies on a dataset of aerial image patches which are grouped into ranges of different crowd density. These methodologies are: (1) a Bag-of-Words (BoW) model with two alternative local features encoded as Improved Fisher Vectors and (2) features based on a Gabor filter bank. Our results show that a classifier using either BoW or Gabor features can detect crowded image regions with 97% classification accuracy. In our tests of four classes of different crowd-density ranges, BoW-based features have a 5%-12% better accuracy than Gabor.

  13. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    Science.gov (United States)

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
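    The pre-processing idea, resizing plus color quantization, can be sketched with block averaging and uniform quantization. The block size and level count below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def compress_image(img, block=2, levels=8):
    """Toy sensor-side compression: downsample by block averaging (resize),
    then quantize each channel to `levels` uniform intensity steps."""
    h, w, c = img.shape
    h2, w2 = h // block, w // block
    # Block-average downsampling.
    small = img[:h2 * block, :w2 * block].reshape(h2, block, w2, block, c).mean(axis=(1, 3))
    # Uniform color quantization to `levels` steps of the 0-255 range.
    q = np.floor(small / 256.0 * levels) * (256 // levels)
    return q.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
out = compress_image(img, block=2, levels=8)
```

    Here the pixel count drops by a factor of four and each channel needs only 3 bits instead of 8; the backend CNN would be trained on images compressed the same way.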

  14. SPATIAL-SPECTRAL CLASSIFICATION BASED ON THE UNSUPERVISED CONVOLUTIONAL SPARSE AUTO-ENCODER FOR HYPERSPECTRAL REMOTE SENSING IMAGERY

    Directory of Open Access Journals (Sweden)

    X. Han

    2016-06-01

    Full Text Available Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly consider concatenating the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window strategy selects the sub-window spatial-spectral information for spatial-spectral feature learning and extraction with the sparse auto-encoder (SAE). A convolution mechanism is then applied to the SAE features over the larger outer window. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets, the Pavia University dataset and the Kennedy Space Centre (KSC) dataset, and showed an improvement over traditional hyperspectral spatial-spectral classification methods.

  15. Support for Implications of Compressive Sensing Concepts to Imaging Systems

    Science.gov (United States)

    2015-08-02

    Justin Romberg, Georgia Tech, jrom@ece.gatech.edu; Emil Sidky, University of Chicago, sidky@uchicago.edu; Michael Stenner, MITRE, mstenner@mitre.org; Lei Tian...assessment of image quality. Michael Stenner has broad interests in optical imaging, sensing, and communications, and is published in such

  16. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A

    2007-01-01

    Remote sensing is a technology that engages electromagnetic sensors to measure and monitor changes in the earth's surface and atmosphere. Normally this is accomplished through the use of a satellite or aircraft. This book, in its 3rd edition, seamlessly connects the art and science of earth remote sensing with the latest interpretative tools and techniques of computer-aided image processing. Newly expanded and updated, this edition delivers more of the applied scientific theory and practical results that helped the previous editions earn wide acclaim and become classroom and industry standards.

  17. Exploitation of geospatial techniques for monitoring metropolitan population growth and classification of landcover features

    International Nuclear Information System (INIS)

    Almas, A.S.; Rahim, C.A.

    2006-01-01

    The present research relates to the exploitation of Remote Sensing and GIS techniques for studying the metropolitan expansion and land use/landcover classification of Lahore, the second largest city of Pakistan, where urbanization is taking place at a striking rate with inadequate development of the requisite infrastructure. Such sprawl gives rise to congestion, pollution, and commuting-time issues. The metropolitan expansion, based on growth direction and distance from the city centre, was observed over a period of about thirty years. The classification of the complex spatial assemblage of the urban environment and its expanding precincts was done using temporally spaced satellite images geo-referenced to a common coordinate system, together with census data. Spatial categorization of the urban landscape, involving densely populated residential areas, sparsely inhabited regions, bare soil patches, water bodies, vegetation, parks, and mixed features, was done with the help of satellite images. As a result, remote sensing and GIS techniques were found to be very efficient and effective for studying metropolitan growth patterns along with the classification of urban features into prominent categories. In addition, census data augments the usefulness of spatial techniques for carrying out such studies. (author)

  18. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is embedded into the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
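    The compressive measurement stage alone can be sketched as follows. The signal, sizes, and Gaussian measurement matrix are generic CS assumptions; the Arnold scrambling, watermark embedding, and sparse reconstruction steps of the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the scrambled, watermarked transform coefficients,
# flattened to a length-n signal.
n = 256
x = rng.standard_normal(n)

# Compressive sensing step: m << n random Gaussian measurements.
# The m measurements, not the n coefficients, form the cipher image.
m = 64
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

compression_ratio = m / n   # cipher image is a quarter the size of x
```

    Recovering x from y would require a sparse-recovery solver (e.g. l1 minimization), which is where CS trades cipher size against reconstruction cost.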

  19. Threshold selection for classification of MR brain images by clustering method

    Energy Technology Data Exchange (ETDEWEB)

    Moldovanu, Simona [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania); Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi (Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania)

    2015-12-07

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels of the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of threshold selection. Our method does not use the well-known binarization methods. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and multiple sclerosis disease. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and multiple sclerosis patients.
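    The per-threshold white-pixel count that feeds the clustering can be sketched directly. The synthetic image below is made up, and the two thresholds merely echo the paper's T = 30 and T = 80:

```python
import numpy as np

def white_pixel_area(img, threshold):
    """Binarize a grey-level image at `threshold` and return the number of
    white (foreground) pixels, the quantity clustered in the paper."""
    return int((img >= threshold).sum())

# Toy 8-bit "scan": dark background with one brighter 20x20 object.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, size=(64, 64))                   # background
img[20:40, 20:40] = rng.integers(90, 160, size=(20, 20))   # object

areas = {t: white_pixel_area(img, t) for t in (30, 80)}
```

    At T = 80 only the object survives binarization (400 pixels), while at T = 30 some background noise leaks in, so the count grows; it is these per-image counts that the dendrogram clustering separates into groups.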

  20. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota

    Science.gov (United States)

    Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.

    2013-01-01

    Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets, without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of wetland mapping compare across different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.

  1. Improved medical image modality classification using a combination of visual and textual features.

    Science.gov (United States)

    Dimitrovski, Ivica; Kocev, Dragi; Kitanovski, Ivan; Loskovska, Suzana; Džeroski, Sašo

    2015-01-01

    In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual and one textual types of features, and combinations thereof. We used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histogram and scale-invariant feature transform (and its variant opponentSIFT) as visual features and the standard bag-of-words textual representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify the SIFT and opponentSIFT features as the best performing features for modality classification. Next, the low-level fusion of the visual features improves the predictive performance of the classifiers. This is because the different features are able to capture different aspects of an image, their combination offering a more complete representation of the visual content in an image. Moreover, adding textual features further increases the predictive performance. Finally, the results obtained with our approach are the best results reported on these databases so far. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Dissimilarity Application in Digitized Mammographic Images Classification

    Directory of Open Access Journals (Sweden)

    Ubaldo Bottigli

    2006-06-01

    Full Text Available The purpose of this work is the development of an automatic classification system which could be useful for radiologists in the investigation of breast cancer. The software has been designed in the framework of the MAGIC-5 collaboration. In the traditional way of learning from examples of objects, classifiers are built in a feature space. However, alternative ways can be found by constructing decision rules on dissimilarity (distance) representations. In such a recognition process, a new object is described by its distances to (a subset of) the training samples. The use of dissimilarities is especially of interest when features are difficult to obtain or when they have little discriminative power. In the automatic classification system, the suspicious regions with high probability of including a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by features extracted from a co-occurrence matrix containing spatial statistics of the ROI pixel grey tones. A dissimilarity representation of these features is made before classification. A feed-forward neural network is employed to distinguish pathological records from non-pathological ones using the new features. The results obtained in terms of sensitivity and specificity will be presented.
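    The dissimilarity representation itself is a simple construction: each sample is described by its distances to a set of prototypes rather than by its raw features. The sketch below uses synthetic ROI feature vectors and a 1-NN rule in dissimilarity space as a stand-in for the paper's feed-forward network:

```python
import numpy as np

def dissimilarity_representation(X, prototypes):
    """Describe each row of X by its Euclidean distances to the prototype
    (training) samples, instead of by its raw features."""
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

rng = np.random.default_rng(0)
# Toy ROI feature vectors for two classes (non-pathological / pathological).
healthy = rng.normal(0.0, 0.3, size=(20, 5))
lesion = rng.normal(2.0, 0.3, size=(20, 5))
X = np.vstack([healthy, lesion])
y = np.array([0] * 20 + [1] * 20)

# A small subset of training samples serves as the prototype set.
prototypes = np.vstack([healthy[:3], lesion[:3]])
D = dissimilarity_representation(X, prototypes)

# Nearest-prototype rule in dissimilarity space (stand-in for the paper's net).
def predict(d_row):
    return 0 if d_row[:3].min() < d_row[3:].min() else 1

pred = np.array([predict(row) for row in D])
accuracy = (pred == y).mean()
```

    Note that the classifier never sees the 5 raw features, only the 6 distances, which is exactly what makes the approach attractive when features have little discriminative power on their own.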

  3. Advanced Land Use Classification for Nigeriasat-1 Image of Lake Chad Basin

    Science.gov (United States)

    Babamaaji, R.; Park, C.; Lee, J.

    2009-12-01

    Lake Chad is a shrinking freshwater lake that has been significantly reduced to about 1/20 of its original size in the 1960s. The severe droughts of the 1970s and 1980s and the subsequent overexploitation of water resulted in a shortage of surface water in the lake and the surrounding rivers. Ground water resources are scarce too, as ground water recharge is mostly made by infiltration through soil and land cover, but this surface cover is now experiencing siltation and expansion of wetland with invasive species. Large changes in land use and water management practices have taken place in the last 50 years, including: removal of water from river systems for irrigation and consumption, degradation of forage land by overgrazing, deforestation, replacement of natural ecosystems with mono-cultures, and construction of dams. Therefore, understanding the change of land use and its characteristics must be a first step in finding how such changes disturb the water cycle around the lake and affect its shrinkage. Before any useful thematic information can be extracted from remote sensing data, a land cover classification system has to be developed to obtain the classes of interest. A combination of the classification systems used by Global Land Cover, Water Resources eAtlass and the Lake Chad Basin Commission gave rise to 7 land cover classes comprising cropland, vegetation, grassland, water body, shrub-land, farmland (mostly irrigated) and bareland (i.e., clear land). The supervised maximum likelihood classification method was used, with 15 reference points chosen per class. At the end of the classification, the overall accuracy is 93.33%. The producer's accuracy for vegetation is 40%, compared to a user's accuracy of 66.67%. The reason is that vegetation is similar to shrub-land; it is very hard to differentiate vegetation from other plants, and therefore most of the vegetation is classified as shrub-land. Most of the waterbodies are occupied
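    The distinction the record draws between producer's and user's accuracy falls straight out of the confusion matrix. The counts below are made up for illustration, not the Lake Chad results:

```python
import numpy as np

# Rows = reference (ground truth) classes, columns = mapped classes.
# Toy confusion matrix for three classes, counts of reference points.
cm = np.array([
    [12,  3,  0],   # class 0: vegetation
    [ 2, 10,  3],   # class 1: shrub-land
    [ 0,  0, 15],   # class 2: water
])

overall = np.trace(cm) / cm.sum()
# Producer's accuracy: of the reference points in a class, how many were
# mapped correctly (sensitive to omission errors).
producers = np.diag(cm) / cm.sum(axis=1)
# User's accuracy: of the points mapped to a class, how many truly belong
# to it (sensitive to commission errors).
users = np.diag(cm) / cm.sum(axis=0)
```

    The two measures can differ sharply for one class, as with the 40% producer's versus 66.67% user's accuracy for vegetation in the record, precisely because vegetation points are omitted into the shrub-land column.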

  4. Classification of Herbaceous Vegetation Using Airborne Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Péter Burai

    2015-02-01

    Full Text Available Alkali landscapes hold an extremely fine-scale mosaic of several vegetation types, so it seems challenging to separate these classes by remote sensing. Our aim was to test the applicability of different image classification methods for hyperspectral data in this complex situation. To reach the highest classification accuracy, we tested traditional image classifiers (maximum likelihood classifier, MLC), machine learning algorithms (support vector machine, SVM; random forest, RF) and feature extraction (minimum noise fraction, MNF, transformation) on training datasets of different sizes. Digital images were acquired with an AISA EAGLE II hyperspectral sensor of 128 contiguous bands (400-1000 nm), a spectral sampling of 5 nm bandwidth and a ground pixel size of 1 m. For the classification, we established twenty vegetation classes based on the dominant species, canopy height, and total vegetation cover. Image classification was applied to the original and the MNF-transformed dataset with various training sample sizes between 10 and 30 pixels. In order to select the optimal number of transformed features, we applied SVM, RF and MLC classification to 2-15 MNF-transformed bands. In the case of the original bands, SVM and RF classifiers provided high accuracy irrespective of the number of training pixels. We found that SVM and RF produced the best accuracy when using the first nine MNF-transformed bands; involving further features did not increase classification accuracy. SVM and RF provided high accuracies with the transformed bands, especially in the case of the aggregated groups. Even MLC provided high accuracy with 30 training pixels (80.78%), but the use of a smaller training dataset (10 training pixels) significantly reduced the accuracy of classification (52.56%). Our results suggest that in alkali landscapes the application of SVM is a feasible solution, as it provided the highest accuracies compared to RF and MLC

  5. Novel fluorescent carbonic nanomaterials for sensing and imaging

    International Nuclear Information System (INIS)

    Demchenko, Alexander P; Dekaliuk, Mariia O

    2013-01-01

    Small brightly fluorescent carbon nanoparticles have emerged as a new class of materials important for sensing and imaging applications. We comparatively analyze the properties of nanodiamonds, graphene and graphene oxide 'dots', modified carbon nanotubes, and the diverse carbon nanoparticles known as 'C-dots' obtained by different methods. The mechanisms of their light absorption and luminescence emission are still unresolved, and arguments are presented for their common origin. Regarding present and potential applications, we provide a critical comparison with other types of fluorescence reporters, such as organic dyes and semiconductor quantum dots. Their most promising applications in sensing (based on changes of intensity, FRET and lifetime) and in imaging technologies at the level of living cells and whole bodies are overviewed. The possibilities for designing, on their basis, multifunctional nanocomposites on the broader scale of theranostics are outlined. (topical review)

  6. Accessory cardiac bronchus: Proposed imaging classification on multidetector CT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Min; Kim, Young Tong; Han, Jong Kyu; Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University College of Medicine, Cheonan Hospital, Cheonan (Korea, Republic of)

    2016-02-15

    To propose a classification of accessory cardiac bronchus (ACB) based on imaging using multidetector computed tomography (MDCT), and to evaluate follow-up changes of ACB. This study included 58 patients diagnosed with ACB over a nine-year period using MDCT. We analyzed the types, division locations and division directions of ACB, and also evaluated changes on follow-up. We identified two main types of ACB: blind-end (51.7%) and lobule (48.3%). The blind-end ACB was further classified into three subtypes: blunt (70%), pointy (23.3%) and saccular (6.7%). The lobule ACB was likewise further classified into three subtypes: complete (46.4%), incomplete (28.6%) and rudimentary (25%). Division from the upper half of the bronchus intermedius (79.3%) and a medial division direction (60.3%) were the most common in all patients. The difference in division direction was statistically significant between the blind-end and lobule types (p = 0.019). Peribronchial soft tissue was found in five cases. One case of calcification was identified in the lobule type. During follow-up, the ACB had disappeared in two cases of the blind-end type and in one case of the rudimentary subtype. The proposed imaging-based classification of ACB, together with follow-up CT, helped us to understand the various imaging features of ACB.

  7. Classification in medical images using adaptive metric k-NN

    Science.gov (United States)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

    The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics such that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM); a normalized metric based on the variance of the data; an empirical metric based on the empirical covariance matrix of the unlabeled data; and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also leads to Principal Component Analysis (PCA) performed on it, which yields the subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for abdominal aorta calcification detection; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
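    One of the four metrics, variance normalization, can be sketched as a quadratic-form distance plugged into k-NN. The two-feature synthetic data and the query below are illustrative, not the paper's X-ray or mammogram features:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, M=None):
    """k-NN with a quadratic-form metric d(a,b)^2 = (a-b)^T M (a-b);
    M = I recovers the standard Euclidean distance."""
    if M is None:
        M = np.eye(X_train.shape[1])
    diff = X_train - x
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)
    nearest = np.argsort(d2)[:k]
    votes = np.bincount(y_train[nearest])
    return votes.argmax()

rng = np.random.default_rng(0)
# The classes differ only in feature 0; feature 1 is high-variance noise.
X0 = np.c_[rng.normal(0, 0.3, 50), rng.normal(0, 3.0, 50)]
X1 = np.c_[rng.normal(2, 0.3, 50), rng.normal(0, 3.0, 50)]
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Variance-normalized metric: down-weights the noisy feature.
M = np.diag(1.0 / X.var(axis=0))
pred = knn_predict(X, y, np.array([1.8, 0.0]), k=5, M=M)
```

    Rescaling by inverse variance makes the discriminative feature dominate the neighborhood computation, which is the basic mechanism behind all four adaptive metrics in the paper.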

  8. A change detection method for remote sensing image based on LBP and SURF feature

    Science.gov (United States)

    Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun

    2018-04-01

    Detecting change in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of a ground object is more stable than its grey levels in high-resolution remote sensing images, and the texture features Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) are outstanding in extraction speed and illumination invariance. A change detection method for matched remote sensing image pairs is presented which, after blocking the images, compares the similarity of each block pair using LBP and SURF to label the block as changed or unchanged. Region growing is adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground object.
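    A minimal version of the block comparison, a 3x3 LBP histogram compared with a chi-square distance, might look like the sketch below. The threshold tau and the toy textures are illustrative assumptions, and the SURF channel and region-growing step are omitted:

```python
import numpy as np

def lbp_histogram(block):
    """3x3 LBP: threshold the 8 neighbours of each interior pixel against
    the centre pixel and histogram the resulting 8-bit codes."""
    c = block[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = block[1 + dy:block.shape[0] - 1 + dy, 1 + dx:block.shape[1] - 1 + dx]
        code |= ((nb >= c) << bit).astype(np.uint8)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def changed(b1, b2, tau=0.5):
    """Flag a block pair as changed when the chi-square distance between
    their LBP histograms exceeds tau (an illustrative threshold)."""
    h1, h2 = lbp_histogram(b1), lbp_histogram(b2)
    d = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))
    return d > tau

rng = np.random.default_rng(0)
flat = np.full((32, 32), 100.0) + rng.normal(0, 1, (32, 32))
stripes = np.tile([0.0, 200.0], (32, 16)) + rng.normal(0, 1, (32, 32))
```

    Because LBP compares each neighbour only against its own centre pixel, adding a constant brightness offset leaves every code unchanged, which is the illumination invariance the abstract relies on; a genuine texture change (flat versus stripes) shifts the histogram and trips the threshold.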

  9. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE

  10. Classification of breast cancer histology images using Convolutional Neural Networks.

    Directory of Open Access Journals (Sweden)

    Teresa Araújo

    Full Text Available Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%.

  11. Feature Importance for Human Epithelial (HEp-2) Cell Image Classification

    Directory of Open Access Journals (Sweden)

    Vibha Gupta

    2018-02-01

    Full Text Available Indirect Immuno-Fluorescence (IIF) microscopy imaging of human epithelial (HEp-2) cells is a popular method for diagnosing autoimmune diseases. Considering large data volumes, computer-aided diagnosis (CAD) systems based on image-based classification can help in terms of time, effort, and reliability of diagnosis. Such approaches are based on extracting representative features from the images. This work explores the selection of the most distinctive features for HEp-2 cell images using various feature selection (FS) methods. Considering that there is no single universally optimal feature selection technique, we also propose a hybridization of one class of FS methods (filter methods). Furthermore, the notion of variable importance for ranking features, provided by another type of approach (embedded methods such as Random Forest and Random Uniform Forest), is exploited to select a good subset of features from a large set, such that the addition of new features does not increase classification accuracy. In this work, we have also carefully designed class-specific features to capture the morphological visual traits of the cell patterns. We perform various experiments and discussions to demonstrate the effectiveness of the FS methods along with the proposed and a standard feature set. We achieve state-of-the-art performance even with a small number of features, obtained after feature selection.

  12. Distance-Based Image Classification: Generalizing to New Classes at Near Zero Cost

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.

    2013-01-01

    We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new
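    The nearest class mean (NCM) idea, incorporating new classes at near zero cost, can be sketched in a few lines: adding a class only requires computing its mean, with no retraining of existing classes. The class names and data below are synthetic:

```python
import numpy as np

class NearestClassMean:
    """Nearest class mean (NCM) classifier: each class is represented by
    the mean of its training features; prediction picks the closest mean."""
    def __init__(self):
        self.means = {}

    def add_class(self, label, X):
        # Adding (or updating) a class touches nothing else: near zero cost.
        self.means[label] = X.mean(axis=0)

    def predict(self, x):
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

rng = np.random.default_rng(0)
ncm = NearestClassMean()
ncm.add_class('cat', rng.normal(0, 0.5, size=(30, 4)))
ncm.add_class('dog', rng.normal(3, 0.5, size=(30, 4)))
# A brand-new class arrives later; nothing already learned is retrained.
ncm.add_class('bird', rng.normal(-3, 0.5, size=(30, 4)))

pred = ncm.predict(np.array([-2.9, -3.1, -3.0, -2.8]))
```

    The paper pairs this scheme with a learned Mahalanobis-style metric rather than the plain Euclidean distance used here, but the constant-time class extension works the same way in both cases.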

  13. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of several distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can greatly improve scanning speed by sampling below the Shannon rate, but image reconstruction remains time-consuming. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new block-division method and a new matrix operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing for reducing imaging time and enabling real-time display in SICM imaging.
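
    The block-wise CS idea can be sketched generically: measure each small image block with a random matrix, then recover it independently assuming sparsity in a transform basis (per-block recovery is what cuts reconstruction time). This is a textbook illustration, not the paper's specific block division or matrix operation; the DCT basis, Gaussian measurements, and OMP recovery are assumptions.

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m, k = 64, 32, 5              # 8x8 block, 32 measurements, 5-sparse

    # The block is assumed sparse in the DCT basis: x = D @ s, k nonzeros.
    D = idct(np.eye(n), norm='ortho', axis=0)    # inverse-DCT synthesis matrix
    s = np.zeros(n)
    s[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    x = D @ s

    # Compressive measurement of the block with a random Gaussian matrix.
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x

    # Per-block recovery: solve y = (Phi @ D) s for a k-sparse s, then x = D s.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                    fit_intercept=False).fit(Phi @ D, y)
    x_hat = D @ omp.coef_

    rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(rel_err)
    ```

    Because each block is recovered as soon as its measurements arrive, reconstructed blocks can be displayed incrementally, which is the mechanism behind the real-time display claim.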

  14. Classification of Diabetic Macular Edema and Its Stages Using Color Fundus Image

    Institute of Scientific and Technical Information of China (English)

    Muhammad Zubair; Shoab A. Khan; Ubaid Ullah Yasin

    2014-01-01

    Diabetic macular edema (DME) is a retinal thickening involving the center of the macula. It is one of the serious eye diseases that affects central vision and can lead to partial or even complete visual loss. The only cure is timely diagnosis, prevention, and treatment of the disease. This paper presents an automated system for the diagnosis and classification of DME using color fundus images. In the proposed technique, the optic disc is first removed by applying some preprocessing steps. The preprocessed image is then passed through a classifier that segments the image to detect exudates. The classifier uses a dynamic thresholding technique based on input parameters derived from the image. Stage classification is done on the basis of criteria given by the Early Treatment Diabetic Retinopathy Study (ETDRS) to assess the severity of the disease. The proposed technique gives a sensitivity, specificity, and accuracy of 98.27%, 96.58%, and 96.54%, respectively, on a publicly available database.
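
    The abstract does not give the exact thresholding parameters, but a dynamic threshold derived from per-image statistics is commonly of the form T = mean + k·std. A minimal, hypothetical sketch of that idea for segmenting bright lesions such as exudates (the constant `k` and the toy image are assumptions):

    ```python
    import numpy as np

    def dynamic_threshold(channel, k=2.0):
        """Segment bright lesions (e.g. exudates) in a fundus-image channel.

        The threshold adapts to each image's own statistics instead of
        using a fixed global value: T = mean + k * std.
        """
        t = channel.mean() + k * channel.std()
        return channel > t

    # Toy image: dark background with a few bright "exudate" pixels.
    rng = np.random.default_rng(0)
    img = rng.normal(0.3, 0.02, size=(64, 64))
    img[10:13, 20:23] = 0.95

    mask = dynamic_threshold(img)
    print(mask.sum())
    ```

    Because the threshold is recomputed per image, the same code tolerates illumination differences between fundus photographs, which a fixed threshold would not.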

  15. Deep machine learning based Image classification in hard disk drive manufacturing (Conference Presentation)

    Science.gov (United States)

    Rana, Narender; Chien, Chester

    2018-03-01

    A key sensor element in a hard disk drive (HDD) is the read-write head device. The device has a complex 3D shape, and its fabrication requires over a thousand process steps, many of them various types of image inspection and critical dimension (CD) metrology. To achieve high device yield across a wafer, very tight inspection and metrology specifications are imposed. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image affects the CD measurements. Metrology noise must be minimized in CD metrology to obtain a better estimate of process-related variations for implementing robust process controls. Specialized tools for defect inspection and review exist that allow classification and statistics; however, where such advanced tools are unavailable, images often must be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools, differing in software and purpose. There have been cases where a significant number of CD-SEM images were blurred or had some artefact, so image inspection is needed alongside the CD measurement. The tool may not report a practical metric highlighting image quality, and failing to filter out CDs measured on blurred images adds metrology noise to the CD measurement. An image classifier can be helpful here for filtering such data. This paper presents the use of artificial intelligence to classify the SEM images: deep machine learning is used to train a neural network, which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and the contingency table of classification results from the trained deep neural network. A prediction accuracy of 94.9% was achieved with the first model. The paper covers other such applications of the deep neural
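
    The paper trains a deep neural network for blur classification; as a much simpler, swapped-in classical baseline for the same filtering task, the variance of the image Laplacian is a standard sharpness score (blurred images have a low value). This sketch is an illustrative heuristic, not the paper's method, and the synthetic image is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def sharpness(img):
        """Variance of the Laplacian: low values indicate a blurred image."""
        return laplace(img.astype(float)).var()

    # Synthetic high-contrast "SEM" image, and a blurred copy of it.
    rng = np.random.default_rng(0)
    sharp = (rng.random((128, 128)) > 0.5).astype(float)
    blurred = gaussian_filter(sharp, sigma=3)

    print(sharpness(sharp) > sharpness(blurred))
    ```

    A threshold on this score could flag candidate blurred images for exclusion from CD statistics; the deep-learning classifier in the paper plays the same filtering role with learned features.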

  16. Electrical impedance tomography-based sensing skin for quantitative imaging of damage in concrete

    International Nuclear Information System (INIS)

    Hallaji, Milad; Pour-Ghaz, Mohammad; Seppänen, Aku

    2014-01-01

    This paper outlines the development of a large-area sensing skin for damage detection in concrete structures. The developed sensing skin consists of a thin layer of electrically conductive copper paint applied to the surface of the concrete. Cracking of the concrete substrate ruptures the sensing skin, decreasing its electrical conductivity locally. The decrease in conductivity is detected with electrical impedance tomography (EIT) imaging. In previous works, electrically based sensing skins have provided only qualitative information on the damage of the substrate surface. In this paper, we study whether quantitative imaging of the damage is possible. We utilize application-specific models and computational methods in the image reconstruction, including a total variation (TV) prior model for the damage and an approximate correction of the modeling errors caused by the inhomogeneity of the painted sensing skin. The developed damage detection method is tested experimentally by applying the sensing skin to polymeric substrates and a reinforced concrete beam under four-point bending. In all test cases, the EIT-based sensing skin provides quantitative information on cracks and/or other damage of the substrate surface, featuring very low conductivity at the damage locations and reliably indicating the lengths and shapes of the cracks. The results strongly support the applicability of the painted EIT-based sensing skin for damage detection in reinforced concrete elements and other substrates. (paper)

  17. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  18. Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA (Final)

    Science.gov (United States)

    EPA announced the availability of the final report, Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA. This report and the associated land use/land cover (LULC) coverage is the result o...

  19. Correlation of bone quality in radiographic images with clinical bone quality classification

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Woo; Huh, Kyung Hoe; Kim, Jeong Hwa; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [Seoul National University, Seoul (Korea, Republic of); Park, Kwan Soo [Inje University, Seoul (Korea, Republic of)

    2006-03-15

    To investigate the validity of digital image processing of panoramic radiographs for estimating bone quality before endosseous dental implant installation, bone quality in radiographic images was correlated with a clinical bone quality classification. An experienced surgeon assessed and classified bone quality at implant sites by tactile sensation at the time of implant placement. Eighteen morphologic features of the trabecular pattern, including fractal dimension, were examined at each anatomical site on panoramic radiographs. In total, bone quality was evaluated at 67 implant sites in 42 patients. Pearson correlation analysis showed that three morphologic parameters had a weak linear negative correlation with the clinical bone quality classification, with correlation coefficients of -0.276, -0.280, and -0.289, respectively (p<0.05), and three other morphologic parameters had a clear linear negative correlation, with correlation coefficients of -0.346, -0.488, and -0.343, respectively (p<0.05). Fractal dimension also correlated linearly with the clinical bone quality classification, with a correlation coefficient of -0.506 (p<0.05). This study suggests that fractal and morphometric analysis of digital panoramic radiographs can be used to evaluate bone quality at implant recipient sites.
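
    The fractal dimension used as a trabecular-pattern feature above is typically estimated by box counting: cover a binarized image with boxes of decreasing size s, count the boxes N(s) that contain foreground, and fit the slope of log N(s) against log(1/s). A minimal sketch of that estimator (the box sizes and the sanity-check image are assumptions, not the study's protocol):

    ```python
    import numpy as np

    def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a binary image by box counting:
        count boxes of side s containing any foreground pixel, then fit the
        slope of log N(s) versus log(1/s)."""
        counts = []
        h, w = binary.shape
        for s in sizes:
            # Tile the image into s x s boxes; a box counts if any pixel is set.
            boxes = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
            counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Sanity check: a filled square should have dimension close to 2.
    square = np.ones((128, 128), dtype=bool)
    dim = box_counting_dimension(square)
    print(round(dim, 2))
    ```

    Applied to a binarized trabecular pattern, a lower estimated dimension indicates a sparser, less space-filling structure, consistent with the negative correlation the study reports.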

  20. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    Science.gov (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the spectral information of its reflectance. As a result, each image comprises a large volume of data, which makes its processing a challenge as performance requirements are continuously tightened. For instance, new HI applications demand real-time responses, so parallel processing becomes a necessity and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies parallelization by mapping the different blocks onto different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. Parallelizing these algorithms shows promising results in terms of computational time: mapping them over different cores yields a speedup of 2.69x when using 3 cores. These experimental results demonstrate that real-time processing of hyperspectral images is achievable.
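
    The three-stage PCA → SVM → KNN-filtering pipeline described above can be sketched sequentially in scikit-learn (the dataflow/RVC-CAL parallelization is the paper's contribution and is not reproduced here). The synthetic two-class cube, the training-pixel subset, and the KNN feature weighting are all assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    H, W, B = 20, 20, 50                  # toy hyperspectral cube dimensions

    # Two synthetic classes with distinct spectral signatures, split left/right.
    labels = np.zeros((H, W), dtype=int)
    labels[:, W // 2:] = 1
    sig = rng.normal(size=(2, B))
    cube = sig[labels] + rng.normal(scale=0.8, size=(H, W, B))

    X = cube.reshape(-1, B)
    y = labels.ravel()

    # Stage 1: one-band representation via the first principal component.
    band = PCA(n_components=1).fit_transform(X).ravel()

    # Stage 2: pixel-wise SVM classification (trained on a random pixel subset).
    train = rng.choice(X.shape[0], 100, replace=False)
    pred = SVC().fit(X[train], y[train]).predict(X)

    # Stage 3: KNN filtering over (row, col, one-band value): each pixel takes
    # the majority vote of its neighbors, adding spatial context to the labels.
    rows, cols = np.indices((H, W))
    feats = np.column_stack([rows.ravel(), cols.ravel(), band])
    _, idx = NearestNeighbors(n_neighbors=9).fit(feats).kneighbors(feats)
    refined = (pred[idx].mean(axis=1) > 0.5).astype(int)

    acc = (refined == y).mean()
    print(round(acc, 3))
    ```

    In the paper, these three stages are expressed as dataflow actors so that RVC-CAL can map them onto separate cores; here they simply run back to back.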